1pth(3) GNU Portable Threads pth(3)
2
3
4
NAME
       pthsem - GNU Portable Threads
7
VERSION
       pthsem 2.0.7 (08-Jun-2006) based on GNU Pth
10
SYNOPSIS
   Global Library Management
       pth_init, pth_kill, pth_ctrl, pth_version.
14
15 Thread Attribute Handling
16 pth_attr_of, pth_attr_new, pth_attr_init, pth_attr_set,
17 pth_attr_get, pth_attr_destroy.
18
19 Thread Control
20 pth_spawn, pth_once, pth_self, pth_suspend, pth_resume, pth_yield,
21 pth_nap, pth_wait, pth_cancel, pth_abort, pth_raise, pth_join,
22 pth_exit.
23
24 Utilities
25 pth_fdmode, pth_time, pth_timeout, pth_sfiodisc.
26
27 Cancellation Management
28 pth_cancel_point, pth_cancel_state.
29
30 Event Handling
31 pth_event, pth_event_typeof, pth_event_extract, pth_event_concat,
32 pth_event_isolate, pth_event_walk, pth_event_status,
33 pth_event_free.
34
35 Key-Based Storage
36 pth_key_create, pth_key_delete, pth_key_setdata, pth_key_getdata.
37
38 Message Port Communication
39 pth_msgport_create, pth_msgport_destroy, pth_msgport_find, pth_msg‐
40 port_pending, pth_msgport_put, pth_msgport_get, pth_msgport_reply.
41
42 Thread Cleanups
43 pth_cleanup_push, pth_cleanup_pop.
44
45 Process Forking
46 pth_atfork_push, pth_atfork_pop, pth_fork.
47
48 Synchronization
49 pth_mutex_init, pth_mutex_acquire, pth_mutex_release,
50 pth_rwlock_init, pth_rwlock_acquire, pth_rwlock_release,
51 pth_cond_init, pth_cond_await, pth_cond_notify, pth_barrier_init,
52 pth_barrier_reach.
53
54 Semaphore support
55 pth_sem_init, pth_sem_dec, pth_sem_dec_value, pth_sem_inc,
56 pth_sem_inc_value, pth_sem_set_value, pth_sem_get_value.
57
58 User-Space Context
59 pth_uctx_create, pth_uctx_make, pth_uctx_switch, pth_uctx_destroy.
60
61 Generalized POSIX Replacement API
62 pth_sigwait_ev, pth_accept_ev, pth_connect_ev, pth_select_ev,
63 pth_poll_ev, pth_read_ev, pth_readv_ev, pth_write_ev,
64 pth_writev_ev, pth_recv_ev, pth_recvfrom_ev, pth_send_ev,
65 pth_sendto_ev.
66
67 Standard POSIX Replacement API
68 pth_nanosleep, pth_usleep, pth_sleep, pth_waitpid, pth_system,
69 pth_sigmask, pth_sigwait, pth_accept, pth_connect, pth_select,
70 pth_pselect, pth_poll, pth_read, pth_readv, pth_write, pth_writev,
71 pth_pread, pth_pwrite, pth_recv, pth_recvfrom, pth_send,
72 pth_sendto.
73
DESCRIPTION
          ____  _   _
         |  _ \| |_| |__
         | |_) | __| '_ \      ``Only those who attempt
         |  __/| |_| | | |       the absurd can achieve
         |_|    \__|_| |_|       the impossible.''
80
81 Pth is a very portable POSIX/ANSI-C based library for Unix platforms
82 which provides non-preemptive priority-based scheduling for multiple
83 threads of execution (aka `multithreading') inside event-driven appli‐
84 cations. All threads run in the same address space of the application
85 process, but each thread has its own individual program counter, run-
86 time stack, signal mask and "errno" variable.
87
       The thread scheduling itself is done in a cooperative way, i.e., the
       threads are managed and dispatched by a priority- and event-driven
       non-preemptive scheduler. The intention is that this achieves both
       better portability and better run-time performance than preemptive
       scheduling. The event facility allows threads to wait until various
       types of internal and external events occur, including pending I/O on
       file descriptors, asynchronous signals, elapsed timers, pending I/O on
       message ports, thread and process termination, and even the results
       of customized callback functions.
97
98 Pth also provides an optional emulation API for POSIX.1c threads
99 (`Pthreads') which can be used for backward compatibility to existing
100 multithreaded applications. See Pth's pthread(3) manual page for
101 details.
102
103 Threading Background
104
105 When programming event-driven applications, usually servers, lots of
106 regular jobs and one-shot requests have to be processed in parallel.
107 To efficiently simulate this parallel processing on uniprocessor
108 machines, we use `multitasking' -- that is, we have the application ask
109 the operating system to spawn multiple instances of itself. On Unix,
110 typically the kernel implements multitasking in a preemptive and prior‐
111 ity-based way through heavy-weight processes spawned with fork(2).
112 These processes usually do not share a common address space. Instead
       they are clearly separated from each other, and are created by directly
       cloning a process address space (although modern kernels use memory
115 segment mapping and copy-on-write semantics to avoid unnecessary copy‐
116 ing of physical memory).
117
118 The drawbacks are obvious: Sharing data between the processes is com‐
119 plicated, and can usually only be done efficiently through shared mem‐
120 ory (but which itself is not very portable). Synchronization is compli‐
121 cated because of the preemptive nature of the Unix scheduler (one has
122 to use atomic locks, etc). The machine's resources can be exhausted
123 very quickly when the server application has to serve too many long-
124 running requests (heavy-weight processes cost memory). And when each
125 request spawns a sub-process to handle it, the server performance and
126 responsiveness is horrible (heavy-weight processes cost time to spawn).
127 Finally, the server application doesn't scale very well with the load
128 because of these resource problems. In practice, lots of tricks are
129 usually used to overcome these problems - ranging from pre-forked sub-
130 process pools to semi-serialized processing, etc.
131
132 One of the most elegant ways to solve these resource- and data-sharing
133 problems is to have multiple light-weight threads of execution inside a
134 single (heavy-weight) process, i.e., to use multithreading. Those
135 threads usually improve responsiveness and performance of the applica‐
136 tion, often improve and simplify the internal program structure, and
       most importantly, require fewer system resources than heavy-weight pro‐
138 cesses. Threads are neither the optimal run-time facility for all types
139 of applications, nor can all applications benefit from them. But at
140 least event-driven server applications usually benefit greatly from
141 using threads.
142
143 The World of Threading
144
       Even though lots of documents exist which describe and define the
       world of threading, to understand Pth you need only basic knowledge
       about threading. The following definitions of thread-related terms
       should at least help you understand thread programming well enough to
       use Pth.
150
151 o process vs. thread
152 A process on Unix systems consists of at least the following funda‐
153 mental ingredients: virtual memory table, program code, program
154 counter, heap memory, stack memory, stack pointer, file descriptor
155 set, signal table. On every process switch, the kernel saves and
156 restores these ingredients for the individual processes. On the other
157 hand, a thread consists of only a private program counter, stack mem‐
     ory, stack pointer and signal table. All other ingredients, in
     particular the virtual memory, are shared with the other threads of
     the same process.
161
162 o kernel-space vs. user-space threading
163 Threads on a Unix platform traditionally can be implemented either
164 inside kernel-space or user-space. When threads are implemented by
165 the kernel, the thread context switches are performed by the kernel
166 without the application's knowledge. Similarly, when threads are
167 implemented in user-space, the thread context switches are performed
168 by an application library, without the kernel's knowledge. There also
     are hybrid threading approaches where, typically, a user-space
     library binds one or more user-space threads to one or more kernel-
     space threads (usually called light-weight processes, or LWPs for
     short).
173
174 User-space threads are usually more portable and can perform faster
175 and cheaper context switches (for instance via swapcontext(2) or
176 setjmp(3)/longjmp(3)) than kernel based threads. On the other hand,
177 kernel-space threads can take advantage of multiprocessor machines
178 and don't have any inherent I/O blocking problems. Kernel-space
     threads are usually scheduled in a preemptive way, side-by-side with
     the underlying processes. User-space threads on the other hand use
     either preemptive or non-preemptive scheduling.
182
183 o preemptive vs. non-preemptive thread scheduling
     In preemptive scheduling, the scheduler lets a thread execute until a
     blocking situation occurs (usually a function call which would block)
     or the assigned timeslice elapses. Then it withdraws control from the
     thread without giving the thread a chance to object. This is usually
     realized by interrupting the thread through a hardware interrupt
     signal (for kernel-space threads) or a software interrupt signal (for
     user-space threads), like "SIGALRM" or "SIGVTALRM". In non-preemptive
     scheduling, once a thread has received control from the scheduler it
     keeps it until either a blocking situation occurs (again a function
     call which would block and instead switches back to the scheduler) or
     the thread explicitly yields control back to the scheduler in a
     cooperative way.
196
197 o concurrency vs. parallelism
198 Concurrency exists when at least two threads are in progress at the
199 same time. Parallelism arises when at least two threads are executing
200 simultaneously. Real parallelism can be only achieved on multiproces‐
201 sor machines, of course. But one also usually speaks of parallelism
202 or high concurrency in the context of preemptive thread scheduling
203 and of low concurrency in the context of non-preemptive thread sched‐
204 uling.
205
206 o responsiveness
     The responsiveness of a system can be described by the user-visible
     delay until the system responds to an external request. When this
     delay is small enough and the user doesn't notice it, the
     responsiveness of the system is considered good. When the user
     notices or is even annoyed by the delay, the responsiveness of the
     system is considered bad.
213
214 o reentrant, thread-safe and asynchronous-safe functions
215 A reentrant function is one that behaves correctly if it is called
216 simultaneously by several threads and then also executes simultane‐
217 ously. Functions that access global state, such as memory or files,
218 of course, need to be carefully designed in order to be reentrant.
219 Two traditional approaches to solve these problems are caller-sup‐
220 plied states and thread-specific data.
221
     Thread-safety is the avoidance of data races, i.e., situations in
     which data is set to either a correct or an incorrect value depending
     upon the (unpredictable) order in which multiple threads access and
     modify the data. So a function is thread-safe when it still behaves
     semantically correctly when called simultaneously by several threads
     (it is not required that the functions also execute simultaneously).
     The traditional approach to achieve thread-safety is to wrap a
     function body with an internal mutual exclusion lock (aka `mutex').
     As you should recognize, reentrant is a stronger attribute than
     thread-safe, because it is harder to achieve and, in particular,
     implies no run-time contention between threads. So, a reentrant
     function is always thread-safe, but not vice versa.
234
     Additionally there is a related attribute for functions named
     asynchronous-safe, which comes into play in conjunction with signal
     handlers. It is closely related to the problem of reentrant
     functions. An asynchronous-safe function is one that can be called
     safely and without side effects from within a signal handler context.
     Usually very few functions are of this type, because an application
     is very restricted in what it can do from within a signal handler
     (especially in which system functions it is allowed to call). The
     main reason is that only a few system functions are officially
     guaranteed by POSIX to be asynchronous-safe. Asynchronous-safe
     functions usually also have to be reentrant.
246
247 User-Space Threads
248
       User-space threads can be implemented in various ways. The two
       traditional approaches are:
251
252 1. Matrix-based explicit dispatching between small units of execution:
253
254 Here the global procedures of the application are split into small
255 execution units (each is required to not run for more than a few
256 milliseconds) and those units are implemented by separate functions.
257 Then a global matrix is defined which describes the execution (and
258 perhaps even dependency) order of these functions. The main server
      procedure then just dispatches between these units by calling one
      function after the other, controlled by this matrix. The threads are
      realized as multiple jump-trails through this matrix, and the
      scheduler switches between these jump-trails as the corresponding
      events occur.
264
265 This approach gives the best possible performance, because one can
266 fine-tune the threads of execution by adjusting the matrix, and the
267 scheduling is done explicitly by the application itself. It is also
268 very portable, because the matrix is just an ordinary data struc‐
269 ture, and functions are a standard feature of ANSI C.
270
      The disadvantage is that it is complicated to write large
      applications this way, because in those applications
273 one quickly gets hundreds(!) of execution units and the control flow
274 inside such an application is very hard to understand (because it is
275 interrupted by function borders and one always has to remember the
276 global dispatching matrix to follow it). Additionally, all threads
277 operate on the same execution stack. Although this saves memory, it
278 is often nasty, because one cannot switch between threads in the
279 middle of a function. Thus the scheduling borders are the function
280 borders.
281
282 2. Context-based implicit scheduling between threads of execution:
283
      Here the idea is that one programs the application as with forked
      processes, i.e., one spawns a thread of execution and this runs from
      beginning to end without an interrupted control flow. But the
      control flow can still be interrupted - even in the middle of a
      function. Actually this happens in a preemptive way, similar to what
      the kernel does for the heavy-weight processes, i.e., every few
      milliseconds the user-space scheduler switches between the threads
      of execution. But the thread itself doesn't notice this and usually
      (except for synchronization issues) doesn't have to care about it.
293
294 The advantage of this approach is that it's very easy to program,
295 because the control flow and context of a thread directly follows a
296 procedure without forced interrupts through function borders. Addi‐
297 tionally, the programming is very similar to a traditional and well
298 understood fork(2) based approach.
299
300 The disadvantage is that although the general performance is
301 increased, compared to using approaches based on heavy-weight pro‐
302 cesses, it is decreased compared to the matrix-approach above.
      This is because the implicit preemptive scheduling usually performs
      a lot more context switches (every user-space context switch costs
      some overhead, even though it is a lot cheaper than a kernel-level
      context switch) than explicit cooperative/non-preemptive scheduling.
307 Finally, there is no really portable POSIX/ANSI-C based way to
308 implement user-space preemptive threading. Either the platform
309 already has threads, or one has to hope that some semi-portable
310 package exists for it. And even those semi-portable packages usually
311 have to deal with assembler code and other nasty internals and are
312 not easy to port to forthcoming platforms.
313
314 So, in short: the matrix-dispatching approach is portable and fast, but
315 nasty to program. The thread scheduling approach is easy to program,
316 but suffers from synchronization and portability problems caused by its
317 preemptive nature.
318
319 The Compromise of Pth
320
321 But why not combine the good aspects of both approaches while avoiding
322 their bad aspects? That's the goal of Pth. Pth implements easy-to-pro‐
323 gram threads of execution, but avoids the problems of preemptive sched‐
324 uling by using non-preemptive scheduling instead.
325
326 This sounds like, and is, a useful approach. Nevertheless, one has to
327 keep the implications of non-preemptive thread scheduling in mind when
328 working with Pth. The following list summarizes a few essential points:
329
330 o Pth provides maximum portability, but NOT the fanciest features.
331
       This is because it uses a nifty and portable POSIX/ANSI-C approach
       for thread creation (and this way doesn't require any platform-
       dependent assembler hacks) and schedules the threads in a
       non-preemptive way (which doesn't require unportable facilities like
       "SIGVTALRM"). On the other hand, this way not all fancy threading
       features can be implemented. Nevertheless the available facilities
       are enough to provide a robust and full-featured threading system.
339
340 o Pth increases the responsiveness and concurrency of an event-driven
341 application, but NOT the concurrency of number-crunching applica‐
342 tions.
343
344 The reason is the non-preemptive scheduling. Number-crunching appli‐
345 cations usually require preemptive scheduling to achieve concurrency
346 because of their long CPU bursts. For them, non-preemptive scheduling
347 (even together with explicit yielding) provides only the old concept
348 of `coroutines'. On the other hand, event driven applications benefit
349 greatly from non-preemptive scheduling. They have only short CPU
350 bursts and lots of events to wait on, and this way run faster under
       non-preemptive scheduling because no unnecessary context switching
       occurs, unlike with preemptive scheduling. That's why Pth is mainly
       intended for server-type applications, although there is no
       technical restriction.
355
356 o Pth requires thread-safe functions, but NOT reentrant functions.
357
       This nice fact exists again because of the nature of non-preemptive
       scheduling, where a function isn't interrupted and this way cannot
       be re-entered before it has returned. This is a great portability
       benefit, because thread-safety can be achieved more easily than
       reentrancy. In particular, this means that under Pth more existing
       third-party libraries can be used without side effects than is the
       case with other threading systems.
365
366 o Pth doesn't require any kernel support, but can NOT benefit from mul‐
367 tiprocessor machines.
368
369 This means that Pth runs on almost all Unix kernels, because the ker‐
370 nel does not need to be aware of the Pth threads (because they are
371 implemented entirely in user-space). On the other hand, it cannot
372 benefit from the existence of multiprocessors, because for this, ker‐
373 nel support would be needed. In practice, this is no problem, because
       multiprocessor systems are rare, and portability is often more
       important than maximum concurrency.
376
377 The life cycle of a thread
378
379 To understand the Pth Application Programming Interface (API), it helps
380 to first understand the life cycle of a thread in the Pth threading
381 system. It can be illustrated with the following directed graph:
382
383 NEW
384 ⎪
385 V
386 +---> READY ---+
387 ⎪ ^ ⎪
388 ⎪ ⎪ V
389 WAITING <--+-- RUNNING
390 ⎪
391 : V
392 SUSPENDED DEAD
393
394 When a new thread is created, it is moved into the NEW queue of the
395 scheduler. On the next dispatching for this thread, the scheduler picks
396 it up from there and moves it to the READY queue. This is a queue con‐
397 taining all threads which want to perform a CPU burst. There they are
398 queued in priority order. On each dispatching step, the scheduler
399 always removes the thread with the highest priority only. It then
400 increases the priority of all remaining threads by 1, to prevent them
401 from `starving'.
402
403 The thread which was removed from the READY queue is the new RUNNING
404 thread (there is always just one RUNNING thread, of course). The RUN‐
405 NING thread is assigned execution control. After this thread yields
406 execution (either explicitly by yielding execution or implicitly by
407 calling a function which would block) there are three possibilities:
408 Either it has terminated, then it is moved to the DEAD queue, or it has
409 events on which it wants to wait, then it is moved into the WAITING
410 queue. Else it is assumed it wants to perform more CPU bursts and imme‐
411 diately enters the READY queue again.
412
413 Before the next thread is taken out of the READY queue, the WAITING
414 queue is checked for pending events. If one or more events occurred,
415 the threads that are waiting on them are immediately moved to the READY
416 queue.
417
418 The purpose of the NEW queue has to do with the fact that in Pth a
419 thread never directly switches to another thread. A thread always
420 yields execution to the scheduler and the scheduler dispatches to the
421 next thread. So a freshly spawned thread has to be kept somewhere until
422 the scheduler gets a chance to pick it up for scheduling. That is what
423 the NEW queue is for.
424
425 The purpose of the DEAD queue is to support thread joining. When a
426 thread is marked to be unjoinable, it is directly kicked out of the
427 system after it terminated. But when it is joinable, it enters the DEAD
428 queue. There it remains until another thread joins it.
429
       Finally, there is a special separate queue named SUSPENDED, to which
       threads can be manually moved from the NEW, READY or WAITING queues
       by the application. The purpose of this special queue is to
       temporarily absorb suspended threads until they are resumed again by
       the application. Suspended threads do not cost scheduling or event
       handling resources, because they are temporarily completely out of
       the scheduler's scope. If a thread is resumed, it is moved back to
       the queue from where it originally came and this way enters the
       scheduler's scope again.
439
APPLICATION PROGRAMMING INTERFACE
       In the following, the Pth Application Programming Interface (API) is
       discussed in detail. With the knowledge given above, it should now be
       easy to understand how to program threads with this API. In good Unix
       tradition, Pth functions use special return values ("NULL" in pointer
       context, "FALSE" in boolean context and "-1" in integer context) to
       indicate an error condition and set (or pass through) the "errno"
       system variable to pass more details about the error to the caller.
448
449 Global Library Management
450
451 The following functions act on the library as a whole. They are used
452 to initialize and shutdown the scheduler and fetch information from it.
453
       int pth_init(void);
           This initializes the Pth library. It has to be the first Pth API
           function call in an application, and is mandatory. It's usually
           done at the beginning of the main() function of the application.
           This implicitly spawns the internal scheduler thread and
           transforms the single execution unit of the current process into
           a thread (the `main' thread). It returns "TRUE" on success and
           "FALSE" on error.
461
       int pth_kill(void);
           This kills the Pth library. It should be the last Pth API
           function call in an application, but is not really required. It's
           usually done at the end of the main() function of the
           application. At least, it has to be called from within the main
           thread. It implicitly kills all threads and transforms the
           calling thread back into the single execution unit of the
           underlying process. The usual way to terminate a Pth application
           is either a simple `"pth_exit(0);"' in the main thread (which
           waits for all other threads to terminate, kills the threading
           system and then terminates the process) or a `"pth_kill();
           exit(0)"' (which immediately kills the threading system and
           terminates the process). pth_kill() returns immediately with a
           return code of "FALSE" if it is not called from within the main
           thread. Else it kills the threading system and returns "TRUE".
476
477 long pth_ctrl(unsigned long query, ...);
478 This is a generalized query/control function for the Pth library.
479 The argument query is a bitmask formed out of one or more
480 "PTH_CTRL_"XXXX queries. Currently the following queries are sup‐
481 ported:
482
483 "PTH_CTRL_GETTHREADS"
484 This returns the total number of threads currently in exis‐
485 tence. This query actually is formed out of the combination of
486 queries for threads in a particular state, i.e., the
487 "PTH_CTRL_GETTHREADS" query is equal to the OR-combination of
488 all the following specialized queries:
489
490 "PTH_CTRL_GETTHREADS_NEW" for the number of threads in the new
491 queue (threads created via pth_spawn(3) but still not scheduled
492 once), "PTH_CTRL_GETTHREADS_READY" for the number of threads in
493 the ready queue (threads who want to do CPU bursts),
494 "PTH_CTRL_GETTHREADS_RUNNING" for the number of running threads
495 (always just one thread!), "PTH_CTRL_GETTHREADS_WAITING" for
496 the number of threads in the waiting queue (threads waiting for
497 events), "PTH_CTRL_GETTHREADS_SUSPENDED" for the number of
498 threads in the suspended queue (threads waiting to be resumed)
499 and "PTH_CTRL_GETTHREADS_DEAD" for the number of threads in the
             dead queue (terminated threads waiting for a join).
501
502 "PTH_CTRL_GETAVLOAD"
503 This requires a second argument of type `"float *"' (pointer to
504 a floating point variable). It stores a floating point value
             describing the exponentially averaged load of the scheduler in
             this variable. The load is a function of the number of threads
             in the ready queue of the scheduler's dispatching unit. So a
             load around 1.0 means there is only one ready thread (the
             standard situation when the application has no high load). A
             higher load value means there are more ready threads that want
             to do CPU bursts. The average load value is updated only once
             per second. The return value for this query is always 0.
513
514 "PTH_CTRL_GETPRIO"
515 This requires a second argument of type `"pth_t"' which identi‐
516 fies a thread. It returns the priority (ranging from
517 "PTH_PRIO_MIN" to "PTH_PRIO_MAX") of the given thread.
518
519 "PTH_CTRL_GETNAME"
520 This requires a second argument of type `"pth_t"' which identi‐
521 fies a thread. It returns the name of the given thread, i.e.,
522 the return value of pth_ctrl(3) should be casted to a `"char
523 *"'.
524
525 "PTH_CTRL_DUMPSTATE"
             This requires a second argument of type `"FILE *"' to which a
             summary of the internal Pth library state is written. The main
             information which is currently written out is the current
             state of the thread pool.
530
531 "PTH_CTRL_FAVOURNEW"
             This requires a second argument of type `"int"' which specifies
533 whether the GNU Pth scheduler favours new threads on startup,
534 i.e., whether they are moved from the new queue to the top
535 (argument is "TRUE") or middle (argument is "FALSE") of the
536 ready queue. The default is to favour new threads to make sure
537 they do not starve already at startup, although this slightly
538 violates the strict priority based scheduling.
539
540 The function returns "-1" on error.
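
           As an illustrative sketch (not from the original manual, and
           assuming <pth.h> and <stdio.h> are included), querying the thread
           count and the averaged scheduler load could look like this:

               float load;
               long  nthreads;

               nthreads = pth_ctrl(PTH_CTRL_GETTHREADS);        /* all queues combined */
               if (pth_ctrl(PTH_CTRL_GETAVLOAD, &load) != -1)   /* averaged load */
                   fprintf(stderr, "%ld thread(s), load %.2f\n", nthreads, load);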
541
542 long pth_version(void);
543 This function returns a hex-value `0xVRRTLL' which describes the
544 current Pth library version. V is the version, RR the revisions, LL
545 the level and T the type of the level (alphalevel=0, betalevel=1,
546 patchlevel=2, etc). For instance Pth version 1.0b1 is encoded as
547 0x100101. The reason for this unusual mapping is that this way the
548 version number is steadily increasing. The same value is also
           available at compile time as "PTH_VERSION".
550
551 Thread Attribute Handling
552
       Attribute objects are used in Pth for two things: First, stand-alone
       (unbound) attribute objects are used to store attributes for threads
       that are yet to be spawned. Second, bound attribute objects are used
       to modify attributes of already existing threads. The following
       attribute fields exist in attribute objects:
558
559 "PTH_ATTR_PRIO" (read-write) ["int"]
560 Thread Priority between "PTH_PRIO_MIN" and "PTH_PRIO_MAX". The
561 default is "PTH_PRIO_STD".
562
563 "PTH_ATTR_NAME" (read-write) ["char *"]
           Name of the thread (only up to 40 characters are stored), mainly for
565 debugging purposes.
566
567 "PTH_ATTR_DISPATCHES" (read-write) ["int"]
           In bound attribute objects, this field is incremented every time
569 the context is switched to the associated thread.
570
571 "PTH_ATTR_JOINABLE" (read-write> ["int"]
572 The thread detachment type, "TRUE" indicates a joinable thread,
573 "FALSE" indicates a detached thread. When a thread is detached,
574 after termination it is immediately kicked out of the system
575 instead of inserted into the dead queue.
576
577 "PTH_ATTR_CANCEL_STATE" (read-write) ["unsigned int"]
578 The thread cancellation state, i.e., a combination of "PTH_CAN‐
579 CEL_ENABLE" or "PTH_CANCEL_DISABLE" and "PTH_CANCEL_DEFERRED" or
580 "PTH_CANCEL_ASYNCHRONOUS".
581
582 "PTH_ATTR_STACK_SIZE" (read-write) ["unsigned int"]
583 The thread stack size in bytes. Use lower values than 64 KB with
584 great care!
585
586 "PTH_ATTR_STACK_ADDR" (read-write) ["char *"]
587 A pointer to the lower address of a chunk of malloc(3)'ed memory
588 for the stack.
589
590 "PTH_ATTR_TIME_SPAWN" (read-only) ["pth_time_t"]
591 The time when the thread was spawned. This can be queried only
592 when the attribute object is bound to a thread.
593
594 "PTH_ATTR_TIME_LAST" (read-only) ["pth_time_t"]
595 The time when the thread was last dispatched. This can be queried
596 only when the attribute object is bound to a thread.
597
598 "PTH_ATTR_TIME_RAN" (read-only) ["pth_time_t"]
599 The total time the thread was running. This can be queried only
600 when the attribute object is bound to a thread.
601
602 "PTH_ATTR_START_FUNC" (read-only) ["void *(*)(void *)"]
603 The thread start function. This can be queried only when the
604 attribute object is bound to a thread.
605
606 "PTH_ATTR_START_ARG" (read-only) ["void *"]
607 The thread start argument. This can be queried only when the
608 attribute object is bound to a thread.
609
610 "PTH_ATTR_STATE" (read-only) ["pth_state_t"]
611 The scheduling state of the thread, i.e., either "PTH_STATE_NEW",
612 "PTH_STATE_READY", "PTH_STATE_WAITING", or "PTH_STATE_DEAD" This
613 can be queried only when the attribute object is bound to a thread.
614
615 "PTH_ATTR_EVENTS" (read-only) ["pth_event_t"]
616 The event ring the thread is waiting for. This can be queried only
617 when the attribute object is bound to a thread.
618
619 "PTH_ATTR_BOUND" (read-only) ["int"]
620 Whether the attribute object is bound ("TRUE") to a thread or not
621 ("FALSE").
622
623 The following API functions can be used to handle the attribute
624 objects:
625
626 pth_attr_t pth_attr_of(pth_t tid);
627 This returns a new attribute object bound to thread tid. Any
628 queries on this object directly fetch attributes from tid. And
629 attribute modifications directly change tid. Use such attribute
630 objects to modify existing threads.
631
632 pth_attr_t pth_attr_new(void);
633 This returns a new unbound attribute object. An implicit
634 pth_attr_init() is done on it. Any queries on this object just
635 fetch stored attributes from it. And attribute modifications just
           change the stored attributes. Use such attribute objects to
           pre-configure attributes for threads that are yet to be spawned.
638
639 int pth_attr_init(pth_attr_t attr);
640 This initializes an attribute object attr to the default values:
641 "PTH_ATTR_PRIO" := "PTH_PRIO_STD", "PTH_ATTR_NAME" := `"unknown"',
642 "PTH_ATTR_DISPATCHES" := 0, "PTH_ATTR_JOINABLE" := "TRUE",
643 "PTH_ATTR_CANCELSTATE" := "PTH_CANCEL_DEFAULT",
644 "PTH_ATTR_STACK_SIZE" := 64*1024 and "PTH_ATTR_STACK_ADDR" :=
645 "NULL". All other "PTH_ATTR_*" attributes are read-only attributes
           and don't receive default values in attr, because they exist only
           for bound attribute objects.
648
649 int pth_attr_set(pth_attr_t attr, int field, ...);
650 This sets the attribute field field in attr to a value specified as
651 an additional argument on the variable argument list. The following
652 attribute fields and argument pairs can be used:
653
654 PTH_ATTR_PRIO int
655 PTH_ATTR_NAME char *
656 PTH_ATTR_DISPATCHES int
657 PTH_ATTR_JOINABLE int
658 PTH_ATTR_CANCEL_STATE unsigned int
659 PTH_ATTR_STACK_SIZE unsigned int
660 PTH_ATTR_STACK_ADDR char *
661
662 int pth_attr_get(pth_attr_t attr, int field, ...);
663 This retrieves the attribute field field in attr and stores its
664 value in the variable specified through a pointer in an additional
665 argument on the variable argument list. The following fields and
666 argument pairs can be used:
667
668 PTH_ATTR_PRIO int *
669 PTH_ATTR_NAME char **
670 PTH_ATTR_DISPATCHES int *
671 PTH_ATTR_JOINABLE int *
672 PTH_ATTR_CANCEL_STATE unsigned int *
673 PTH_ATTR_STACK_SIZE unsigned int *
674 PTH_ATTR_STACK_ADDR char **
675 PTH_ATTR_TIME_SPAWN pth_time_t *
676 PTH_ATTR_TIME_LAST pth_time_t *
677 PTH_ATTR_TIME_RAN pth_time_t *
678 PTH_ATTR_START_FUNC void *(**)(void *)
679 PTH_ATTR_START_ARG void **
680 PTH_ATTR_STATE pth_state_t *
681 PTH_ATTR_EVENTS pth_event_t *
682 PTH_ATTR_BOUND int *
683
684 int pth_attr_destroy(pth_attr_t attr);
           This destroys an attribute object attr. After this call, attr is
           no longer a valid attribute object.
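
           A short sketch (illustrative only; worker() is a hypothetical
           thread start routine) of pre-configuring attributes for a thread
           that is yet to be spawned:

               pth_attr_t attr;
               pth_t      tid;

               attr = pth_attr_new();
               pth_attr_set(attr, PTH_ATTR_NAME,     "worker");
               pth_attr_set(attr, PTH_ATTR_PRIO,     PTH_PRIO_STD);
               pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);  /* detached thread */
               tid = pth_spawn(attr, worker, NULL);
               pth_attr_destroy(attr);   /* the thread keeps its own copy */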
687
688 Thread Control
689
690 The following functions control the threading itself and make up the
691 main API of the Pth library.
692
693 pth_t pth_spawn(pth_attr_t attr, void *(*entry)(void *), void *arg);
694 This spawns a new thread with the attributes given in attr (or
695 "PTH_ATTR_DEFAULT" for default attributes - which means that thread
696 priority, joinability and cancel state are inherited from the cur‐
697 rent thread) with the starting point at routine entry; the dispatch
698 count is not inherited from the current thread if attr is not spec‐
699 ified - rather, it is initialized to zero. This entry routine is
700 called as `pth_exit(entry(arg))' inside the new thread unit, i.e.,
701 entry's return value is fed to an implicit pth_exit(3). So the
702 thread can also exit by just returning. Nevertheless the thread can
703 also exit explicitly at any time by calling pth_exit(3). But keep
704 in mind that calling the POSIX function exit(3) still terminates
705 the complete process and not just the current thread.
706
707 There is no Pth-internal limit on the number of threads one can
708 spawn, except the limit implied by the available virtual memory.
           Pth internally keeps track of threads in dynamic data structures.
710 The function returns "NULL" on error.
711
712 int pth_once(pth_once_t *ctrlvar, void (*func)(void *), void *arg);
713 This is a convenience function which uses a control variable of
714 type "pth_once_t" to make sure a constructor function func is
715 called only once as `func(arg)' in the system. In other words: Only
716 the first call to pth_once(3) by any thread in the system succeeds.
717 The variable referenced via ctrlvar should be declared as
718 `"pth_once_t" variable-name = "PTH_ONCE_INIT";' before calling this
719 function.
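
           An illustrative sketch of pth_once(3) usage; the initialization
           routine and the helper load_lookup_tables() are hypothetical
           placeholders:

               static pth_once_t init_once = PTH_ONCE_INIT;

               static void init_tables(void *arg)
               {
                   /* runs at most once, no matter how many threads get here */
                   load_lookup_tables((const char *)arg);   /* hypothetical helper */
               }

               /* inside any thread: */
               pth_once(&init_once, init_tables, "tables.dat");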
720
721 pth_t pth_self(void);
722 This just returns the unique thread handle of the currently running
723 thread. This handle itself has to be treated as an opaque entity
724 by the application. It's usually used as an argument to other
           functions that require an argument of type "pth_t".
726
727 int pth_suspend(pth_t tid);
728 This suspends a thread tid until it is manually resumed again via
729 pth_resume(3). For this, the thread is moved to the SUSPENDED queue
730 and this way is completely out of the scheduler's event handling
731 and thread dispatching scope. Suspending the current thread is not
732 allowed. The function returns "TRUE" on success and "FALSE" on
733 errors.
734
735 int pth_resume(pth_t tid);
           This function resumes a previously suspended thread tid, i.e.,
           tid has to be in the SUSPENDED queue. The thread is moved to the
           NEW, READY or WAITING queue (depending on what its state was when
           the pth_suspend(3) call was made) and this way re-enters the
           event handling and thread dispatching scope of the scheduler. The
           function returns "TRUE" on success and "FALSE" on errors.
742
743 int pth_raise(pth_t tid, int sig)
           This function raises a signal for delivery to thread tid only.
           When one just raises a signal via raise(3) or kill(2), it is
           delivered to an arbitrary thread which does not have this signal
           blocked. With pth_raise(3) one can send a signal to a particular
           thread, and it is guaranteed that only this thread gets the
           signal delivered. But keep in mind that the signal's action is
           nevertheless still configured process-wide. When sig is 0, plain
           thread checking is performed, i.e., `"pth_raise(tid, 0)"' returns
           "TRUE" when thread tid still exists in the Pth system, but
           doesn't send any signal to it.
753
754 int pth_yield(pth_t tid);
755 This explicitly yields back the execution control to the scheduler
756 thread. Usually the execution is implicitly transferred back to
757 the scheduler when a thread waits for an event. But when a thread
758 has to do larger CPU bursts, it can be reasonable to interrupt it
759 explicitly by doing a few pth_yield(3) calls to give other threads
           a chance to execute, too. This obviously is the cooperative part
           of Pth. A thread does not have to yield execution, of course. But
           when you want to program a server application with good response
           times the threads should be cooperative, i.e., they should split
           their CPU bursts into smaller units with this call.
765
766 Usually one specifies tid as "NULL" to indicate to the scheduler
767 that it can freely decide which thread to dispatch next. But if
768 one wants to indicate to the scheduler that a particular thread
769 should be favored on the next dispatching step, one can specify
770 this thread explicitly. This allows the usage of the old concept of
771 coroutines where a thread/routine switches to a particular cooper‐
772 ating thread. If tid is not "NULL" and points to a new or ready
773 thread, it is guaranteed that this thread receives execution con‐
774 trol on the next dispatching step. If tid is in a different state
775 (that is, not in "PTH_STATE_NEW" or "PTH_STATE_READY") an error is
776 reported.
777
778 The function usually returns "TRUE" for success and only "FALSE"
779 (with "errno" set to "EINVAL") if tid specified an invalid or still
780 not new or ready thread.
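
           For illustration, a CPU-bound loop can stay cooperative by
           yielding every now and then (crunch() is a hypothetical unit of
           work):

               int i;

               for (i = 0; i < 1000000; i++) {
                   crunch(i);                  /* hypothetical CPU-bound work */
                   if (i % 1000 == 0)
                       pth_yield(NULL);        /* give other ready threads a chance */
               }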
781
       int pth_nap(pth_time_t naptime);
           This function suspends the execution of the current thread until
           naptime has elapsed. naptime is of type "pth_time_t" and this way
           theoretically has a resolution of one microsecond. In practice
           you should neither rely on this nor on the thread being awakened
           exactly after naptime has elapsed. It is only guaranteed that the
           thread will sleep at least naptime. But because of the
           non-preemptive nature of Pth it can last longer (when another
           thread kept the CPU for a long time). Additionally the resolution
           depends on the implementation of timers by the operating system,
           and these usually have a resolution of 10 milliseconds or
           coarser. But usually this isn't important for an application
           unless it tries to use this facility for real-time tasks.
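
           As a small sketch, a thread can sleep for roughly a quarter of a
           second with:

               pth_nap(pth_time(0, 250000));   /* sleep for at least 250 ms */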
795
796 int pth_wait(pth_event_t ev);
797 This is the link between the scheduler and the event facility (see
798 below for the various pth_event_xxx() functions). It's modeled like
799 select(2), i.e., one gives this function one or more events (in the
800 event ring specified by ev) on which the current thread wants to
           wait. The scheduler awakens the thread when one or more of them
           have occurred or failed, after tagging them as such. The ev
           argument is a
803 pointer to an event ring which isn't changed except for the tag‐
804 ging. pth_wait(3) returns the number of occurred or failed events
805 and the application can use pth_event_status(3) to test which
806 events occurred or failed.
807
808 int pth_cancel(pth_t tid);
809 This cancels a thread tid. How the cancellation is done depends on
810 the cancellation state of tid which the thread can configure
           itself. When its state is "PTH_CANCEL_DISABLE", a cancellation
           request is just made pending. When it is "PTH_CANCEL_ENABLE", it
           depends on the cancellation type what is performed. When it is
           "PTH_CANCEL_DEFERRED", again the cancellation request is just
           made pending. But when it is "PTH_CANCEL_ASYNCHRONOUS", the
           thread is immediately canceled before pth_cancel(3) returns. The
           effect of a thread cancellation is equal to implicitly forcing
           the thread to call `"pth_exit(PTH_CANCELED)"' at one of its
           cancellation points. In Pth, threads enter a cancellation point
           either explicitly via pth_cancel_point(3) or implicitly by
           waiting for an event.
821
822 int pth_abort(pth_t tid);
           This is the cruel way to cancel a thread tid. When it is already
           dead and waiting to be joined, it just joins it (via
           `"pth_join("tid", NULL)"') and this way kicks it out of the
           system. Else it forces the thread to be non-joinable and to allow
           asynchronous cancellation, and then cancels it via
           `"pth_cancel("tid")"'.
828
829 int pth_join(pth_t tid, void **value);
830 This joins the current thread with the thread specified via tid.
831 It first suspends the current thread until the tid thread has ter‐
832 minated. Then it is awakened and stores the value of tid's
           pth_exit(3) call into *value (if value is not "NULL") and returns
834 to the caller. A thread can be joined only when it has the
835 attribute "PTH_ATTR_JOINABLE" set to "TRUE" (the default). A thread
836 can only be joined once, i.e., after the pth_join(3) call the
837 thread tid is completely removed from the system.
838
839 void pth_exit(void *value);
840 This terminates the current thread. Whether it's immediately
841 removed from the system or inserted into the dead queue of the
842 scheduler depends on its join type which was specified at spawning
843 time. If it has the attribute "PTH_ATTR_JOINABLE" set to "FALSE",
844 it's immediately removed and value is ignored. Else the thread is
845 inserted into the dead queue and value remembered for a subsequent
846 pth_join(3) call by another thread.
847
848 Utilities
849
850 Utility functions.
851
852 int pth_fdmode(int fd, int mode);
853 This switches the non-blocking mode flag on file descriptor fd.
854 The argument mode can be "PTH_FDMODE_BLOCK" for switching fd into
855 blocking I/O mode, "PTH_FDMODE_NONBLOCK" for switching fd into non-
856 blocking I/O mode or "PTH_FDMODE_POLL" for just polling the current
857 mode. The current mode is returned (either "PTH_FDMODE_BLOCK" or
858 "PTH_FDMODE_NONBLOCK") or "PTH_FDMODE_ERROR" on error. Keep in mind
859 that since Pth 1.1 there is no longer a requirement to manually
860 switch a file descriptor into non-blocking mode in order to use it.
861 This is automatically done temporarily inside Pth. Instead when
862 you now switch a file descriptor explicitly into non-blocking mode,
863 pth_read(3) or pth_write(3) will never block the current thread.
864
865 pth_time_t pth_time(long sec, long usec);
866 This is a constructor for a "pth_time_t" structure which is a con‐
867 venient function to avoid temporary structure values. It returns a
868 pth_time_t structure which holds the absolute time value specified
869 by sec and usec.
870
871 pth_time_t pth_timeout(long sec, long usec);
872 This is a constructor for a "pth_time_t" structure which is a con‐
873 venient function to avoid temporary structure values. It returns a
874 pth_time_t structure which holds the absolute time value calculated
875 by adding sec and usec to the current time.
876
877 Sfdisc_t *pth_sfiodisc(void);
           This function is always available, but only reasonably usable when
879 Pth was built with Sfio support ("--with-sfio" option) and
880 "PTH_EXT_SFIO" is then defined by "pth.h". It is useful for appli‐
881 cations which want to use the comprehensive Sfio I/O library with
882 the Pth threading library. Then this function can be used to get an
883 Sfio discipline structure ("Sfdisc_t") which can be pushed onto
884 Sfio streams ("Sfio_t") in order to let this stream use
           pth_read(3)/pth_write(3) instead of read(2)/write(2). The benefit
           is that this way I/O on the Sfio stream only blocks the current
887 thread instead of the whole process. The application has to free(3)
888 the "Sfdisc_t" structure when it is no longer needed. The Sfio
889 package can be found at http://www.research.att.com/sw/tools/sfio/.
890
891 Cancellation Management
892
893 Pth supports POSIX style thread cancellation via pth_cancel(3) and the
894 following two related functions:
895
896 void pth_cancel_state(int newstate, int *oldstate);
897 This manages the cancellation state of the current thread. When
898 oldstate is not "NULL" the function stores the old cancellation
899 state under the variable pointed to by oldstate. When newstate is
           not 0, it sets the new cancellation state. The old state is stored
           before the new state is set. A state is a combination of "PTH_CAN‐
902 CEL_ENABLE" or "PTH_CANCEL_DISABLE" and "PTH_CANCEL_DEFERRED" or
903 "PTH_CANCEL_ASYNCHRONOUS". "PTH_CANCEL_ENABLE⎪PTH_CANCEL_DEFERRED"
904 (or "PTH_CANCEL_DEFAULT") is the default state where cancellation
905 is possible but only at cancellation points. Use "PTH_CANCEL_DIS‐
906 ABLE" to complete disable cancellation for a thread and "PTH_CAN‐
907 CEL_ASYNCHRONOUS" for allowing asynchronous cancellations, i.e.,
908 cancellations which can happen at any time.
909
910 void pth_cancel_point(void);
           This explicitly enters a cancellation point. When the current can‐
912 cellation state is "PTH_CANCEL_DISABLE" or no cancellation request
913 is pending, this has no side-effect and returns immediately. Else
914 it calls `"pth_exit(PTH_CANCELED)"'.
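
           An illustrative sketch of shielding a critical section from
           cancellation and honouring pending requests afterwards
           (update_state() is a hypothetical piece of critical work):

               int oldstate;

               pth_cancel_state(PTH_CANCEL_DISABLE|PTH_CANCEL_DEFERRED, &oldstate);
               update_state();                     /* hypothetical critical work */
               pth_cancel_state(oldstate, NULL);   /* restore the previous state */

               pth_cancel_point();                 /* act on a pending cancellation now */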
915
916 Event Handling
917
918 Pth has a very flexible event facility which is linked into the sched‐
919 uler through the pth_wait(3) function. The following functions provide
920 the handling of event rings.
921
922 pth_event_t pth_event(unsigned long spec, ...);
923 This creates a new event ring consisting of a single initial event.
924 The type of the generated event is specified by spec. The following
925 types are available:
926
927 "PTH_EVENT_FD"
928 This is a file descriptor event. One or more of
929 "PTH_UNTIL_FD_READABLE", "PTH_UNTIL_FD_WRITEABLE" or
930 "PTH_UNTIL_FD_EXCEPTION" have to be OR-ed into spec to specify
931 on which state of the file descriptor you want to wait. The
932 file descriptor itself has to be given as an additional argu‐
933 ment. Example: `"pth_event(PTH_EVENT_FD⎪PTH_UNTIL_FD_READABLE,
934 fd)"'.
935
936 "PTH_EVENT_SELECT"
937 This is a multiple file descriptor event modeled directly after
938 the select(2) call (actually it is also used to implement
939 pth_select(3) internally). It's a convenient way to wait for a
940 large set of file descriptors at once and at each file descrip‐
941 tor for a different type of state. Additionally as a nice side-
             effect one receives the number of file descriptors which caused
             the event to occur (using BSD semantics, i.e., when a file
             descriptor occurs in two sets it is counted twice). The
945 arguments correspond directly to the select(2) function argu‐
946 ments except that there is no timeout argument (because time‐
947 outs already can be handled via "PTH_EVENT_TIME" events).
948
949 Example: `"pth_event(PTH_EVENT_SELECT, &rc, nfd, rfds, wfds,
950 efds)"' where "rc" has to be of type `"int *"', "nfd" has to be
951 of type `"int"' and "rfds", "wfds" and "efds" have to be of
952 type `"fd_set *"' (see select(2)). The number of occurred file
             descriptors is stored in "rc".
954
955 "PTH_EVENT_SIGS"
956 This is a signal set event. The two additional arguments have
957 to be a pointer to a signal set (type `"sigset_t *"') and a
958 pointer to a signal number variable (type `"int *"'). This
             event waits until one of the signals in the signal set occurs.
             As a result the number of the occurred signal is stored in the
             second additional argument. Keep in mind that the Pth scheduler
             doesn't block signals automatically. So when you want to wait
             for a signal with this event you have to block it via
             sigprocmask(2) or it will be delivered without your notice.
             Example: `"sigemptyset(&set); sigaddset(&set, SIGINT);
             pth_event(PTH_EVENT_SIGS, &set, &sig);"'.
967
968 "PTH_EVENT_TIME"
969 This is a time point event. The additional argument has to be
970 of type "pth_time_t" (usually on-the-fly generated via
             pth_time(3)). This event waits until the specified time point
             has elapsed. Keep in mind that the value is an absolute time
             point and not an offset. When you want to wait for a specified
             amount of time, you have to add the offset to the current time
             (usually done on the fly via pth_timeout(3)). Example:
976 `"pth_event(PTH_EVENT_TIME, pth_timeout(2,0))"'.
977
978 "PTH_EVENT_MSG"
979 This is a message port event. The additional argument has to be
             of type "pth_msgport_t". This event waits until one or more
             messages have been received on the specified message port.
             Example:
982 `"pth_event(PTH_EVENT_MSG, mp)"'.
983
984 "PTH_EVENT_TID"
985 This is a thread event. The additional argument has to be of
986 type "pth_t". One of "PTH_UNTIL_TID_NEW",
987 "PTH_UNTIL_TID_READY", "PTH_UNTIL_TID_WAITING" or
988 "PTH_UNTIL_TID_DEAD" has to be OR-ed into spec to specify on
989 which state of the thread you want to wait. Example:
990 `"pth_event(PTH_EVENT_TID⎪PTH_UNTIL_TID_DEAD, tid)"'.
991
992 "PTH_EVENT_FUNC"
993 This is a custom callback function event. Three additional
994 arguments have to be given with the following types: `"int
995 (*)(void *)"', `"void *"' and `"pth_time_t"'. The first is a
996 function pointer to a check function and the second argument is
997 a user-supplied context value which is passed to this function.
             The scheduler calls this function on a regular basis (on its
             own scheduler stack, so be very careful!) and the thread is
             kept sleeping while the function returns "FALSE". Once it
             returns "TRUE" the thread will be awakened. The check interval
             is defined by the third argument, i.e., the check function is
             not polled again until this amount of time has elapsed. Example:
1004 `"pth_event(PTH_EVENT_FUNC, func, arg, pth_time(0,500000))"'.
1005
1006 "PTH_EVENT_SEM"
             This is a semaphore event. It waits until the semaphore could
             be decremented. By default a decrement of 1 is assumed; with
             the flag "PTH_UNTIL_COUNT" another value can be given. If the
             flag "PTH_UNTIL_DECREMENT" is used, the semaphore value is
             actually decremented (so the lock is obtained); otherwise the
             event only signals that the decrement would be possible.
             Examples:

             * pth_event(PTH_EVENT_SEM|PTH_UNTIL_DECREMENT|PTH_UNTIL_COUNT,
               &sem, 2): the event waits until the value of the semaphore
               is >= 2 and then subtracts two from it

             * pth_event(PTH_EVENT_SEM|PTH_UNTIL_COUNT, &sem, 2): the event
               waits until the value of the semaphore is >= 2

             * pth_event(PTH_EVENT_SEM|PTH_UNTIL_DECREMENT, &sem): the
               event waits until the value of the semaphore is >= 1 and
               then subtracts 1 from it

             * pth_event(PTH_EVENT_SEM, &sem): the event waits until the
               value of the semaphore is >= 1
1027
1028 unsigned long pth_event_typeof(pth_event_t ev);
1029 This returns the type of event ev. It's a combination of the
1030 describing "PTH_EVENT_XX" and "PTH_UNTIL_XX" value. This is espe‐
1031 cially useful to know which arguments have to be supplied to the
1032 pth_event_extract(3) function.
1033
1034 int pth_event_extract(pth_event_t ev, ...);
1035 When pth_event(3) is treated like sprintf(3), then this function is
1036 sscanf(3), i.e., it is the inverse operation of pth_event(3). This
1037 means that it can be used to extract the ingredients of an event.
1038 The ingredients are stored into variables which are given as point‐
1039 ers on the variable argument list. Which pointers have to be
1040 present depends on the event type and has to be determined by the
1041 caller before via pth_event_typeof(3).
1042
1043 To make it clear, when you constructed ev via `"ev =
1044 pth_event(PTH_EVENT_FD, fd);"' you have to extract it via
1045 `"pth_event_extract(ev, &fd)"', etc. For multiple arguments of an
1046 event the order of the pointer arguments is the same as for
           pth_event(3). But always keep in mind that you have to supply
           pointers to variables, and these variables have to be of the same
           type as the corresponding argument of pth_event(3).
1050
1051 pth_event_t pth_event_concat(pth_event_t ev, ...);
1052 This concatenates one or more additional event rings to the event
1053 ring ev and returns ev. The end of the argument list has to be
1054 marked with a "NULL" argument. Use this function to create real
           event rings out of the single-event rings created by pth_event(3).
1056
1057 pth_event_t pth_event_isolate(pth_event_t ev);
1058 This isolates the event ev from possibly appended events in the
1059 event ring. When in ev only one event exists, this returns "NULL".
           When remaining events exist, they form a new event ring which is
1061 returned.
1062
1063 pth_event_t pth_event_walk(pth_event_t ev, int direction);
           This walks to the next (when direction is "PTH_WALK_NEXT") or
           previous (when direction is "PTH_WALK_PREV") event in the event
           ring ev and returns the newly reached event. Additionally
1067 "PTH_UNTIL_OCCURRED" can be OR-ed into direction to walk to the
1068 next/previous occurred event in the ring ev.
1069
1070 pth_status_t pth_event_status(pth_event_t ev);
1071 This returns the status of event ev. This is a fast operation
1072 because only a tag on ev is checked which was either set or still
1073 not set by the scheduler. In other words: This doesn't check the
1074 event itself, it just checks the last knowledge of the scheduler.
1075 The possible returned status codes are: "PTH_STATUS_PENDING" (event
1076 is still pending), "PTH_STATUS_OCCURRED" (event successfully
1077 occurred), "PTH_STATUS_FAILED" (event failed).
1078
1079 int pth_event_free(pth_event_t ev, int mode);
1080 This deallocates the event ev (when mode is "PTH_FREE_THIS") or all
1081 events appended to the event ring under ev (when mode is
1082 "PTH_FREE_ALL").
1083
1084 Key-Based Storage
1085
1086 The following functions provide thread-local storage through unique
1087 keys similar to the POSIX Pthread API. Use this for thread specific
1088 global data.
1089
1090 int pth_key_create(pth_key_t *key, void (*func)(void *));
           This creates a new unique key and stores it in key. Additionally
           func can specify a destructor function which is called for the
           key at the current thread's termination.
1094
1095 int pth_key_delete(pth_key_t key);
1096 This explicitly destroys a key key.
1097
1098 int pth_key_setdata(pth_key_t key, const void *value);
1099 This stores value under key.
1100
1101 void *pth_key_getdata(pth_key_t key);
1102 This retrieves the value under key.
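
           A small sketch (illustrative only, assuming <stdlib.h>,
           <string.h> and <stdio.h>) of thread-local storage with a
           destructor; free(3) releases each thread's value at termination:

               static pth_key_t ctx_key;

               /* once, e.g. right after pth_init(3): */
               pth_key_create(&ctx_key, free);

               /* inside each thread: */
               pth_key_setdata(ctx_key, strdup("per-thread context"));
               printf("%s\n", (char *)pth_key_getdata(ctx_key));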
1103
1104 Message Port Communication
1105
1106 The following functions provide message ports which can be used for
1107 efficient and flexible inter-thread communication.
1108
1109 pth_msgport_t pth_msgport_create(const char *name);
           This returns a pointer to a new message port. If name is not
           "NULL", the name can be used by other threads via
           pth_msgport_find(3) to find the message port in case they do not
           directly know the pointer to the message port.
1114
1115 void pth_msgport_destroy(pth_msgport_t mp);
           This destroys a message port mp. Beforehand, all pending messages
           on it are replied to their origin message ports.
1118
1119 pth_msgport_t pth_msgport_find(const char *name);
1120 This finds a message port in the system by name and returns the
1121 pointer to it.
1122
1123 int pth_msgport_pending(pth_msgport_t mp);
1124 This returns the number of pending messages on message port mp.
1125
1126 int pth_msgport_put(pth_msgport_t mp, pth_message_t *m);
1127 This puts (or sends) a message m to message port mp.
1128
1129 pth_message_t *pth_msgport_get(pth_msgport_t mp);
1130 This gets (or receives) the top message from message port mp.
1131 Incoming messages are always kept in a queue, so there can be more
1132 pending messages, of course.
1133
1134 int pth_msgport_reply(pth_message_t *m);
        This replies to the message m, i.e., returns it to the message
        port of the sender.
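
    A sketch of a simple request/reply exchange from the client side (the
    job_t type and the helper name are illustrative; the m_replyport member
    of "pth_message_t" is assumed from "pth.h", and the "PTH_EVENT_MSG"
    event type of pth_event(3) is used to block until the reply arrives):

          #include "pth.h"

          typedef struct {
              pth_message_t head;     /* pth's message header, must be first */
              int           request;  /* application payload */
              int           answer;
          } job_t;

          static int ask_worker(pth_msgport_t worker, int request)
          {
              pth_msgport_t reply = pth_msgport_create(NULL);
              pth_event_t   ev    = pth_event(PTH_EVENT_MSG, reply);
              job_t         job;

              job.head.m_replyport = reply;    /* assumed pth_message_t field */
              job.request = request;
              pth_msgport_put(worker, (pth_message_t *)&job);

              pth_wait(ev);                    /* sleep until the reply is in */
              (void)pth_msgport_get(reply);    /* dequeue the replied message */

              pth_event_free(ev, PTH_FREE_THIS);
              pth_msgport_destroy(reply);
              return job.answer;
          }

    The worker thread on the other side would wait on its own
    "PTH_EVENT_MSG" event, pth_msgport_get(3) the job, fill in answer and
    return it with pth_msgport_reply(3).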
1136
1137 Thread Cleanups
1138
1139 Per-thread cleanup functions.
1140
1141 int pth_cleanup_push(void (*handler)(void *), void *arg);
1142 This pushes the routine handler onto the stack of cleanup routines
1143 for the current thread. These routines are called in LIFO order
1144 when the thread terminates.
1145
1146 int pth_cleanup_pop(int execute);
1147 This pops the top-most routine from the stack of cleanup routines
1148 for the current thread. When execute is "TRUE" the routine is addi‐
1149 tionally called.
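
    A minimal sketch: a cleanup handler guarantees that a resource is
    released even if the thread terminates inside the protected region
    (free(3) is just an illustrative handler here):

          #include <stdlib.h>
          #include "pth.h"

          static void *worker(void *arg)
          {
              char *buf = (char *)malloc(4096);

              pth_cleanup_push(free, buf);  /* runs free(buf) on early termination */

              /* ... work that may block or hit a cancellation point ... */

              pth_cleanup_pop(TRUE);        /* remove handler and free buf now */
              return NULL;
          }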
1150
1151 Process Forking
1152
1153 The following functions provide some special support for process fork‐
1154 ing situations inside the threading environment.
1155
    int pth_atfork_push(void (*prepare)(void *), void (*parent)(void *),
         void (*child)(void *), void *arg);
1158 This function declares forking handlers to be called before and
1159 after pth_fork(3), in the context of the thread that called
1160 pth_fork(3). The prepare handler is called before fork(2) process‐
1161 ing commences. The parent handler is called after fork(2) pro‐
1162 cessing completes in the parent process. The child handler is
        called after fork(2) processing completes in the child process. If
1164 no handling is desired at one or more of these three points, the
1165 corresponding handler can be given as "NULL". Each handler is
1166 called with arg as the argument.
1167
1168 The order of calls to pth_atfork_push(3) is significant. The parent
1169 and child handlers are called in the order in which they were
1170 established by calls to pth_atfork_push(3), i.e., FIFO. The prepare
1171 fork handlers are called in the opposite order, i.e., LIFO.
1172
1173 int pth_atfork_pop(void);
1174 This removes the top-most handlers on the forking handler stack
1175 which were established with the last pth_atfork_push(3) call. It
        returns "FALSE" when no more handlers could be removed from the
1177 stack.
1178
1179 pid_t pth_fork(void);
1180 This is a variant of fork(2) with the difference that the current
1181 thread only is forked into a separate process, i.e., in the parent
1182 process nothing changes while in the child process all threads are
1183 gone except for the scheduler and the calling thread. When you
1184 really want to duplicate all threads in the current process you
1185 should use fork(2) directly. But this is usually not reasonable.
1186 Additionally this function takes care of forking handlers as estab‐
        lished by pth_atfork_push(3).
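
    A short sketch of how the two calls work together (the handler names
    and their empty bodies are illustrative only):

          #include <unistd.h>
          #include "pth.h"

          static void lock_pool(void *arg)   { /* acquire global resources   */ }
          static void unlock_pool(void *arg) { /* release them in the parent */ }
          static void reset_pool(void *arg)  { /* reinitialize them in child */ }

          static pid_t spawn_child(void)
          {
              pid_t pid;

              pth_atfork_push(lock_pool, unlock_pool, reset_pool, NULL);
              pid = pth_fork();       /* child keeps only the calling thread */
              if (pid == 0) {
                  /* child process: all other threads are gone here */
              }
              pth_atfork_pop();       /* remove the handlers again */
              return pid;
          }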
1188
1189 Synchronization
1190
1191 The following functions provide synchronization support via mutual
1192 exclusion locks (mutex), read-write locks (rwlock), condition variables
1193 (cond) and barriers (barrier). Keep in mind that in a non-preemptive
1194 threading system like Pth this might sound unnecessary at the first
    threading system like Pth this might sound unnecessary at first
    glance, because a thread isn't interrupted by the system. Indeed, when
1197 functions, you don't need any mutex to protect it, of course.
1198
    But when your critical code section contains any pth_xxx() function,
    the chance is high that it temporarily switches to the scheduler. This
1201 way other threads can make progress and enter your critical code sec‐
1202 tion, too. This is especially true for critical code sections which
1203 implicitly or explicitly use the event mechanism.
1204
1205 int pth_mutex_init(pth_mutex_t *mutex);
1206 This dynamically initializes a mutex variable of type
1207 `"pth_mutex_t"'. Alternatively one can also use static initializa‐
1208 tion via `"pth_mutex_t mutex = PTH_MUTEX_INIT"'.
1209
1210 int pth_mutex_acquire(pth_mutex_t *mutex, int try, pth_event_t ev);
1211 This acquires a mutex mutex. If the mutex is already locked by
        another thread, the current thread's execution is suspended until
1213 the mutex is unlocked again or additionally the extra events in ev
1214 occurred (when ev is not "NULL"). Recursive locking is explicitly
1215 supported, i.e., a thread is allowed to acquire a mutex more than
        once before it is released. But it then also has to be released the
        same number of times until the mutex is again lockable by others. When
1218 try is "TRUE" this function never suspends execution. Instead it
1219 returns "FALSE" with "errno" set to "EBUSY".
1220
1221 int pth_mutex_release(pth_mutex_t *mutex);
1222 This decrements the recursion locking count on mutex and when it is
1223 zero it releases the mutex mutex.
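
    A minimal sketch protecting a shared counter across a critical section
    that contains pth_xxx() calls (the statically initialized mutex is used
    as described above):

          #include <unistd.h>
          #include "pth.h"

          static pth_mutex_t counter_mutex = PTH_MUTEX_INIT;
          static long        counter       = 0;

          static void bump_counter(void)
          {
              /* blocking acquire: no try-flag, no extra events */
              pth_mutex_acquire(&counter_mutex, FALSE, NULL);
              counter++;
              pth_write(STDOUT_FILENO, "tick\n", 5);  /* may switch threads */
              pth_mutex_release(&counter_mutex);
          }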
1224
1225 int pth_rwlock_init(pth_rwlock_t *rwlock);
1226 This dynamically initializes a read-write lock variable of type
1227 `"pth_rwlock_t"'. Alternatively one can also use static initial‐
1228 ization via `"pth_rwlock_t rwlock = PTH_RWLOCK_INIT"'.
1229
1230 int pth_rwlock_acquire(pth_rwlock_t *rwlock, int op, int try,
1231 pth_event_t ev);
1232 This acquires a read-only (when op is "PTH_RWLOCK_RD") or a read-
1233 write (when op is "PTH_RWLOCK_RW") lock rwlock. When the lock is
1234 only locked by other threads in read-only mode, the lock succeeds.
1235 But when one thread holds a read-write lock, all locking attempts
1236 suspend the current thread until this lock is released again. Addi‐
1237 tionally in ev events can be given to let the locking timeout, etc.
1238 When try is "TRUE" this function never suspends execution. Instead
1239 it returns "FALSE" with "errno" set to "EBUSY".
1240
1241 int pth_rwlock_release(pth_rwlock_t *rwlock);
1242 This releases a previously acquired (read-only or read-write) lock.
1243
1244 int pth_cond_init(pth_cond_t *cond);
        This dynamically initializes a condition variable of type
1246 `"pth_cond_t"'. Alternatively one can also use static initializa‐
1247 tion via `"pth_cond_t cond = PTH_COND_INIT"'.
1248
1249 int pth_cond_await(pth_cond_t *cond, pth_mutex_t *mutex, pth_event_t
1250 ev);
1251 This awaits a condition situation. The caller has to follow the
1252 semantics of the POSIX condition variables: mutex has to be
1253 acquired before this function is called. The execution of the cur‐
1254 rent thread is then suspended either until the events in ev
1255 occurred (when ev is not "NULL") or cond was notified by another
1256 thread via pth_cond_notify(3). While the thread is waiting, mutex
1257 is released. Before it returns mutex is reacquired.
1258
1259 int pth_cond_notify(pth_cond_t *cond, int broadcast);
        This notifies one or all threads which are waiting on cond. When
        broadcast is "TRUE" all threads are notified, else only a single
1262 (unspecified) one.
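
    A classic producer/consumer sketch combining a mutex with a condition
    variable (static initializers as described above; the single int slot
    is illustrative only):

          #include "pth.h"

          static pth_mutex_t lock  = PTH_MUTEX_INIT;
          static pth_cond_t  ready = PTH_COND_INIT;
          static int         item  = 0;

          static void produce(int value)
          {
              pth_mutex_acquire(&lock, FALSE, NULL);
              item = value;
              pth_cond_notify(&ready, FALSE);   /* wake one waiting consumer */
              pth_mutex_release(&lock);
          }

          static int consume(void)
          {
              int value;

              pth_mutex_acquire(&lock, FALSE, NULL);
              while (item == 0)                 /* re-check after each wakeup */
                  pth_cond_await(&ready, &lock, NULL);
              value = item;
              item  = 0;
              pth_mutex_release(&lock);
              return value;
          }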
1263
1264 int pth_barrier_init(pth_barrier_t *barrier, int threshold);
1265 This dynamically initializes a barrier variable of type `"pth_bar‐
1266 rier_t"'. Alternatively one can also use static initialization via
        `"pth_barrier_t barrier = PTH_BARRIER_INIT(threshold)"'.
1268
1269 int pth_barrier_reach(pth_barrier_t *barrier);
1270 This function reaches a barrier barrier. If this is the last thread
1271 (as specified by threshold on init of barrier) all threads are
        awakened. Else the current thread is suspended until the last
        thread reaches the barrier and thereby awakens all threads. The
1274 function returns (beside "FALSE" on error) the value "TRUE" for any
1275 thread which neither reached the barrier as the first nor the last
1276 thread; "PTH_BARRIER_HEADLIGHT" for the thread which reached the
1277 barrier as the first thread and "PTH_BARRIER_TAILLIGHT" for the
1278 thread which reached the barrier as the last thread.
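
    A short sketch: a fixed number of worker threads synchronize at the end
    of a processing phase (WORKERS is illustrative and must match the
    number of threads actually reaching the barrier):

          #include "pth.h"

          #define WORKERS 4

          static pth_barrier_t phase_end = PTH_BARRIER_INIT(WORKERS);

          static void *worker(void *arg)
          {
              /* ... per-thread work of phase one ... */
              pth_barrier_reach(&phase_end);  /* wait for all WORKERS threads */
              /* ... phase two starts only after every thread arrived ... */
              return NULL;
          }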
1279
1280 Semaphore support
1281
    The interface provides functions to set and get the value of a
    semaphore, to increment it by arbitrary values, and to wait until the
    value reaches at least a given value (with or without decrementing it
    once the condition becomes true).

    The data type for the semaphore is named "pth_sem_t" and it has an
    initializer like "pth_cond_t".
1289
1290 int pth_sem_init(pth_sem_t *sem);
1291 This dynamically initializes a semaphore variable of type
1292 `"pth_sem_t"'. Alternatively one can also use static initializa‐
1293 tion via `"pth_sem_t semaphore = PTH_SEM_INIT"'.
1294
1295 int pth_sem_dec(pth_sem_t *sem);
        This waits until the value of "sem" is >= 1 and then decrements it.
1297
1298 int pth_sem_dec_value(pth_sem_t *sem, unsigned value);
        This waits until the value of "sem" is >= "value" and then
        subtracts "value".
1301
1302 int pth_sem_inc(pth_sem_t *sem, int notify);
        This increments "sem". If "notify" is non-zero, the scheduler is
        started.
1305
1306 int pth_sem_inc_value(pth_sem_t *sem, unsigned value, int notify);
        This adds "value" to "sem". If "notify" is non-zero, the scheduler
        is started.
1309
1310 int pth_sem_set_value(pth_sem_t *sem, unsigned value);
        This sets the value of "sem" to "value".
1312
1313 int pth_sem_get_value(pth_sem_t *sem, unsigned *value);
        This stores the value of "sem" in *"value".
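
    A minimal sketch of the counting semaphore used to signal available
    work items from a producer to a consumer thread (the queue itself is
    omitted):

          #include "pth.h"

          static pth_sem_t items = PTH_SEM_INIT;

          static void producer_put(void)
          {
              /* ... enqueue one work item, then bump the count and let
                 the scheduler run so a waiting consumer can proceed ... */
              pth_sem_inc(&items, TRUE);
          }

          static void consumer_take(void)
          {
              pth_sem_dec(&items);  /* block until the value is >= 1, then decrement */
              /* ... dequeue and process one work item ... */
          }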
1315
1316 User-Space Context
1317
1318 The following functions provide a stand-alone sub-API for user-space
1319 context switching. It internally is based on the same underlying
1320 machine context switching mechanism the threads in GNU Pth are based
    on. Hence you can use these functions to implement your own simple
1322 user-space threads. The "pth_uctx_t" context is somewhat modeled after
1323 POSIX ucontext(3).
1324
1325 The time required to create (via pth_uctx_make(3)) a user-space context
    can range from just a few microseconds up to considerably longer
1327 (depending on the machine context switching method which is available
1328 on the platform). On the other hand, the raw performance in switching
1329 the user-space contexts is always very good (nearly independent of the
1330 used machine context switching method). For instance, on an Intel Pen‐
1331 tium-III CPU with 800Mhz running under FreeBSD 4 one usually achieves
    tium-III CPU with 800 MHz running under FreeBSD 4 one usually achieves
1333 second.
1334
1335 int pth_uctx_create(pth_uctx_t *uctx);
1336 This function creates a user-space context and stores it into uctx.
1337 There is still no underlying user-space context configured. You
1338 still have to do this with pth_uctx_make(3). On success, this func‐
1339 tion returns "TRUE", else "FALSE".
1340
1341 int pth_uctx_make(pth_uctx_t uctx, char *sk_addr, size_t sk_size, const
1342 sigset_t *sigmask, void (*start_func)(void *), void *start_arg,
1343 pth_uctx_t uctx_after);
1344 This function makes a new user-space context in uctx which will
1345 operate on the run-time stack sk_addr (which is of maximum size
1346 sk_size), with the signals in sigmask blocked (if sigmask is not
1347 "NULL") and starting to execute with the call
1348 start_func(start_arg). If sk_addr is "NULL", a stack is dynamically
1349 allocated. The stack size sk_size has to be at least 16384 (16KB).
1350 If the start function start_func returns and uctx_after is not
1351 "NULL", an implicit user-space context switch to this context is
1352 performed. Else (if uctx_after is "NULL") the process is terminated
1353 with exit(3). This function is somewhat modeled after POSIX make‐
1354 context(3). On success, this function returns "TRUE", else "FALSE".
1355
1356 int pth_uctx_switch(pth_uctx_t uctx_from, pth_uctx_t uctx_to);
1357 This function saves the current user-space context in uctx_from for
1358 later restoring by another call to pth_uctx_switch(3) and restores
1359 the new user-space context from uctx_to, which previously had to be
1360 set with either a previous call to pth_uctx_switch(3) or initially
1361 by pth_uctx_make(3). This function is somewhat modeled after POSIX
1362 swapcontext(3). If uctx_from or uctx_to are "NULL" or if uctx_to
1363 contains no valid user-space context, "FALSE" is returned instead
1364 of "TRUE". These are the only errors possible.
1365
1366 int pth_uctx_destroy(pth_uctx_t uctx);
1367 This function destroys the user-space context in uctx. The run-time
1368 stack associated with the user-space context is deallocated only if
        it was not given by the application (see sk_addr of
        pth_uctx_make(3)). If uctx is "NULL", "FALSE" is returned instead
        of "TRUE".
1371 This is the only error possible.
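
    A minimal sketch of two coroutines switching back and forth (the 32 KB
    stack size and the missing error checks are illustrative only):

          #include <stdio.h>
          #include "pth.h"

          static pth_uctx_t uctx_main, uctx_coro;

          static void coro(void *arg)
          {
              printf("coroutine: first activation\n");
              pth_uctx_switch(uctx_coro, uctx_main);  /* back to main */
              printf("coroutine: second activation\n");
              /* returning switches to uctx_main (the uctx_after argument) */
          }

          int main(void)
          {
              pth_uctx_create(&uctx_main);            /* will hold main's context */
              pth_uctx_create(&uctx_coro);
              pth_uctx_make(uctx_coro, NULL, 32*1024, /* dynamically allocated stack */
                            NULL, coro, NULL, uctx_main);

              pth_uctx_switch(uctx_main, uctx_coro);  /* run until first switch */
              pth_uctx_switch(uctx_main, uctx_coro);  /* resume the coroutine   */

              pth_uctx_destroy(uctx_coro);
              pth_uctx_destroy(uctx_main);
              return 0;
          }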
1372
1373 Generalized POSIX Replacement API
1374
    The following functions are generalized replacement functions for the
1376 POSIX API, i.e., they are similar to the functions under `Standard
1377 POSIX Replacement API' but all have an additional event argument which
1378 can be used for timeouts, etc.
1379
1380 int pth_sigwait_ev(const sigset_t *set, int *sig, pth_event_t ev);
1381 This is equal to pth_sigwait(3) (see below), but has an additional
        event argument ev. When pth_sigwait(3) suspends the current thread's
1383 execution it usually only uses the signal event on set to awake.
1384 With this function any number of extra events can be used to awake
1385 the current thread (remember that ev actually is an event ring).
1386
1387 int pth_connect_ev(int s, const struct sockaddr *addr, socklen_t
1388 addrlen, pth_event_t ev);
1389 This is equal to pth_connect(3) (see below), but has an additional
        event argument ev. When pth_connect(3) suspends the current thread's
1391 execution it usually only uses the I/O event on s to awake. With
1392 this function any number of extra events can be used to awake the
1393 current thread (remember that ev actually is an event ring).
1394
1395 int pth_accept_ev(int s, struct sockaddr *addr, socklen_t *addrlen,
1396 pth_event_t ev);
1397 This is equal to pth_accept(3) (see below), but has an additional
        event argument ev. When pth_accept(3) suspends the current thread's
1399 execution it usually only uses the I/O event on s to awake. With
1400 this function any number of extra events can be used to awake the
1401 current thread (remember that ev actually is an event ring).
1402
1403 int pth_select_ev(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1404 struct timeval *timeout, pth_event_t ev);
1405 This is equal to pth_select(3) (see below), but has an additional
        event argument ev. When pth_select(3) suspends the current thread's
1407 execution it usually only uses the I/O event on rfds, wfds and efds
1408 to awake. With this function any number of extra events can be used
1409 to awake the current thread (remember that ev actually is an event
1410 ring).
1411
1412 int pth_poll_ev(struct pollfd *fds, unsigned int nfd, int timeout,
1413 pth_event_t ev);
1414 This is equal to pth_poll(3) (see below), but has an additional
        event argument ev. When pth_poll(3) suspends the current thread's
1416 execution it usually only uses the I/O event on fds to awake. With
1417 this function any number of extra events can be used to awake the
1418 current thread (remember that ev actually is an event ring).
1419
1420 ssize_t pth_read_ev(int fd, void *buf, size_t nbytes, pth_event_t ev);
1421 This is equal to pth_read(3) (see below), but has an additional
        event argument ev. When pth_read(3) suspends the current thread's
1423 execution it usually only uses the I/O event on fd to awake. With
1424 this function any number of extra events can be used to awake the
1425 current thread (remember that ev actually is an event ring).
1426
1427 ssize_t pth_readv_ev(int fd, const struct iovec *iovec, int iovcnt,
1428 pth_event_t ev);
1429 This is equal to pth_readv(3) (see below), but has an additional
        event argument ev. When pth_readv(3) suspends the current thread's
1431 execution it usually only uses the I/O event on fd to awake. With
1432 this function any number of extra events can be used to awake the
1433 current thread (remember that ev actually is an event ring).
1434
1435 ssize_t pth_write_ev(int fd, const void *buf, size_t nbytes,
1436 pth_event_t ev);
1437 This is equal to pth_write(3) (see below), but has an additional
        event argument ev. When pth_write(3) suspends the current thread's
1439 execution it usually only uses the I/O event on fd to awake. With
1440 this function any number of extra events can be used to awake the
1441 current thread (remember that ev actually is an event ring).
1442
1443 ssize_t pth_writev_ev(int fd, const struct iovec *iovec, int iovcnt,
1444 pth_event_t ev);
1445 This is equal to pth_writev(3) (see below), but has an additional
        event argument ev. When pth_writev(3) suspends the current thread's
1447 execution it usually only uses the I/O event on fd to awake. With
1448 this function any number of extra events can be used to awake the
1449 current thread (remember that ev actually is an event ring).
1450
1451 ssize_t pth_recv_ev(int fd, void *buf, size_t nbytes, int flags,
1452 pth_event_t ev);
1453 This is equal to pth_recv(3) (see below), but has an additional
        event argument ev. When pth_recv(3) suspends the current thread's
1455 execution it usually only uses the I/O event on fd to awake. With
1456 this function any number of extra events can be used to awake the
1457 current thread (remember that ev actually is an event ring).
1458
1459 ssize_t pth_recvfrom_ev(int fd, void *buf, size_t nbytes, int flags,
1460 struct sockaddr *from, socklen_t *fromlen, pth_event_t ev);
1461 This is equal to pth_recvfrom(3) (see below), but has an additional
1462 event argument ev. When pth_recvfrom(3) suspends the current
        thread's execution it usually only uses the I/O event on fd to
1464 awake. With this function any number of extra events can be used to
1465 awake the current thread (remember that ev actually is an event
1466 ring).
1467
1468 ssize_t pth_send_ev(int fd, const void *buf, size_t nbytes, int flags,
1469 pth_event_t ev);
1470 This is equal to pth_send(3) (see below), but has an additional
        event argument ev. When pth_send(3) suspends the current thread's
1472 execution it usually only uses the I/O event on fd to awake. With
1473 this function any number of extra events can be used to awake the
1474 current thread (remember that ev actually is an event ring).
1475
1476 ssize_t pth_sendto_ev(int fd, const void *buf, size_t nbytes, int
1477 flags, const struct sockaddr *to, socklen_t tolen, pth_event_t ev);
1478 This is equal to pth_sendto(3) (see below), but has an additional
        event argument ev. When pth_sendto(3) suspends the current thread's
1480 execution it usually only uses the I/O event on fd to awake. With
1481 this function any number of extra events can be used to awake the
1482 current thread (remember that ev actually is an event ring).
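
    A typical use of these generalized variants is an I/O call bounded by a
    timeout, as in the following sketch (the helper name and the 30 second
    limit are illustrative only):

          #include <stdio.h>
          #include <sys/types.h>
          #include "pth.h"

          /* read from fd, but give up after 30 seconds */
          static ssize_t read_with_timeout(int fd, void *buf, size_t n)
          {
              pth_event_t ev = pth_event(PTH_EVENT_TIME, pth_timeout(30, 0));
              ssize_t     rc = pth_read_ev(fd, buf, n, ev);

              if (pth_event_status(ev) == PTH_STATUS_OCCURRED)
                  fprintf(stderr, "read timed out\n");
              pth_event_free(ev, PTH_FREE_THIS);
              return rc;
          }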
1483
1484 Standard POSIX Replacement API
1485
    The following functions are standard replacement functions for the
1487 POSIX API. The difference is mainly that they suspend the current
1488 thread only instead of the whole process in case the file descriptors
1489 will block.
1490
1491 int pth_nanosleep(const struct timespec *rqtp, struct timespec *rmtp);
1492 This is a variant of the POSIX nanosleep(3) function. It suspends
        the current thread's execution until the amount of time in rqtp
1494 elapsed. The thread is guaranteed to not wake up before this time,
1495 but because of the non-preemptive scheduling nature of Pth, it can
1496 be awakened later, of course. If rmtp is not "NULL", the "timespec"
1497 structure it references is updated to contain the unslept amount
        (the requested time minus the time actually slept). The differ‐
        ence between nanosleep(3) and pth_nanosleep(3) is that
1500 pth_nanosleep(3) suspends only the execution of the current thread
1501 and not the whole process.
1502
1503 int pth_usleep(unsigned int usec);
1504 This is a variant of the 4.3BSD usleep(3) function. It suspends the
        current thread's execution until usec microseconds (= usec*1/1000000
1506 sec) elapsed. The thread is guaranteed to not wake up before this
1507 time, but because of the non-preemptive scheduling nature of Pth,
1508 it can be awakened later, of course. The difference between
        usleep(3) and pth_usleep(3) is that pth_usleep(3) suspends
1510 only the execution of the current thread and not the whole process.
1511
1512 unsigned int pth_sleep(unsigned int sec);
1513 This is a variant of the POSIX sleep(3) function. It suspends the
        current thread's execution until sec seconds elapsed. The thread is
1515 guaranteed to not wake up before this time, but because of the non-
1516 preemptive scheduling nature of Pth, it can be awakened later, of
1517 course. The difference between sleep(3) and pth_sleep(3) is that
1518 pth_sleep(3) suspends only the execution of the current thread and
1519 not the whole process.
1520
1521 pid_t pth_waitpid(pid_t pid, int *status, int options);
1522 This is a variant of the POSIX waitpid(2) function. It suspends the
        current thread's execution until status information is available for
1524 a terminated child process pid. The difference between waitpid(2)
1525 and pth_waitpid(3) is that pth_waitpid(3) suspends only the execu‐
1526 tion of the current thread and not the whole process. For more
1527 details about the arguments and return code semantics see wait‐
1528 pid(2).
1529
1530 int pth_system(const char *cmd);
1531 This is a variant of the POSIX system(3) function. It executes the
1532 shell command cmd with Bourne Shell ("sh") and suspends the current
        thread's execution until this command terminates. The difference
1534 between system(3) and pth_system(3) is that pth_system(3) suspends
1535 only the execution of the current thread and not the whole process.
1536 For more details about the arguments and return code semantics see
1537 system(3).
1538
1539 int pth_sigmask(int how, const sigset_t *set, sigset_t *oset)
1540 This is the Pth thread-related equivalent of POSIX sigprocmask(2)
1541 respectively pthread_sigmask(3). The arguments how, set and oset
1542 directly relate to sigprocmask(2), because Pth internally just uses
1543 sigprocmask(2) here. So alternatively you can also directly call
1544 sigprocmask(2), but for consistency reasons you should use this
1545 function pth_sigmask(3).
1546
1547 int pth_sigwait(const sigset_t *set, int *sig);
1548 This is a variant of the POSIX.1c sigwait(3) function. It suspends
        the current thread's execution until a signal in set occurred and
1550 stores the signal number in sig. The important point is that the
1551 signal is not delivered to a signal handler. Instead it's caught by
1552 the scheduler only in order to awake the pth_sigwait() call. The
        trick and noticeable point here is that this way you get an
        asynchronous-aware application that is written completely
        synchronously. When you think about the problem of async-signal-
        safe functions you should recognize that this is a great benefit.
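
    A sketch of the resulting programming style: a dedicated thread handles
    the signals of interest in an ordinary synchronous loop (the thread
    body shown here is illustrative only):

          #include <signal.h>
          #include "pth.h"

          static void *signal_thread(void *arg)
          {
              sigset_t set;
              int      sig;

              sigemptyset(&set);
              sigaddset(&set, SIGINT);
              sigaddset(&set, SIGTERM);

              for (;;) {
                  pth_sigwait(&set, &sig);   /* only this thread sleeps here */
                  if (sig == SIGTERM)
                      break;                 /* initiate a clean shutdown */
                  /* SIGINT handled here, synchronously and safely */
              }
              return NULL;
          }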
1557
1558 int pth_connect(int s, const struct sockaddr *addr, socklen_t addrlen);
1559 This is a variant of the 4.2BSD connect(2) function. It establishes
1560 a connection on a socket s to target specified in addr and addrlen.
1561 The difference between connect(2) and pth_connect(3) is that
1562 pth_connect(3) suspends only the execution of the current thread
1563 and not the whole process. For more details about the arguments
1564 and return code semantics see connect(2).
1565
1566 int pth_accept(int s, struct sockaddr *addr, socklen_t *addrlen);
1567 This is a variant of the 4.2BSD accept(2) function. It accepts a
1568 connection on a socket by extracting the first connection request
        on the queue of pending connections, creating a new socket with the
        same properties as s, and allocating a new file descriptor for the
1571 socket (which is returned). The difference between accept(2) and
1572 pth_accept(3) is that pth_accept(3) suspends only the execution of
1573 the current thread and not the whole process. For more details
1574 about the arguments and return code semantics see accept(2).
1575
1576 int pth_select(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1577 struct timeval *timeout);
1578 This is a variant of the 4.2BSD select(2) function. It examines
1579 the I/O descriptor sets whose addresses are passed in rfds, wfds,
1580 and efds to see if some of their descriptors are ready for reading,
1581 are ready for writing, or have an exceptional condition pending,
1582 respectively. For more details about the arguments and return code
1583 semantics see select(2).
1584
1585 int pth_pselect(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1586 const struct timespec *timeout, const sigset_t *sigmask);
1587 This is a variant of the POSIX pselect(2) function, which in turn
1588 is a stronger variant of 4.2BSD select(2). The difference is that
1589 the higher-resolution "struct timespec" is passed instead of the
1590 lower-resolution "struct timeval" and that a signal mask is speci‐
1591 fied which is temporarily set while waiting for input. For more
1592 details about the arguments and return code semantics see pse‐
1593 lect(2) and select(2).
1594
1595 int pth_poll(struct pollfd *fds, unsigned int nfd, int timeout);
1596 This is a variant of the SysV poll(2) function. It examines the I/O
1597 descriptors which are passed in the array fds to see if some of
1598 them are ready for reading, are ready for writing, or have an
1599 exceptional condition pending, respectively. For more details about
1600 the arguments and return code semantics see poll(2).
1601
1602 ssize_t pth_read(int fd, void *buf, size_t nbytes);
1603 This is a variant of the POSIX read(2) function. It reads up to
1604 nbytes bytes into buf from file descriptor fd. The difference
1605 between read(2) and pth_read(2) is that pth_read(2) suspends execu‐
1606 tion of the current thread until the file descriptor is ready for
1607 reading. For more details about the arguments and return code
1608 semantics see read(2).
1609
1610 ssize_t pth_readv(int fd, const struct iovec *iovec, int iovcnt);
1611 This is a variant of the POSIX readv(2) function. It reads data
1612 from file descriptor fd into the first iovcnt rows of the iov vec‐
1613 tor. The difference between readv(2) and pth_readv(2) is that
1614 pth_readv(2) suspends execution of the current thread until the
1615 file descriptor is ready for reading. For more details about the
1616 arguments and return code semantics see readv(2).
1617
1618 ssize_t pth_write(int fd, const void *buf, size_t nbytes);
1619 This is a variant of the POSIX write(2) function. It writes nbytes
1620 bytes from buf to file descriptor fd. The difference between
1621 write(2) and pth_write(2) is that pth_write(2) suspends execution
1622 of the current thread until the file descriptor is ready for writ‐
1623 ing. For more details about the arguments and return code seman‐
1624 tics see write(2).
1625
1626 ssize_t pth_writev(int fd, const struct iovec *iovec, int iovcnt);
1627 This is a variant of the POSIX writev(2) function. It writes data
1628 to file descriptor fd from the first iovcnt rows of the iov vector.
1629 The difference between writev(2) and pth_writev(2) is that
1630 pth_writev(2) suspends execution of the current thread until the
        file descriptor is ready for writing. For more details about the
1632 arguments and return code semantics see writev(2).
1633
1634 ssize_t pth_pread(int fd, void *buf, size_t nbytes, off_t offset);
1635 This is a variant of the POSIX pread(3) function. It performs the
1636 same action as a regular read(2), except that it reads from a given
1637 position in the file without changing the file pointer. The first
1638 three arguments are the same as for pth_read(3) with the addition
1639 of a fourth argument offset for the desired position inside the
1640 file.
1641
1642 ssize_t pth_pwrite(int fd, const void *buf, size_t nbytes, off_t off‐
1643 set);
1644 This is a variant of the POSIX pwrite(3) function. It performs the
1645 same action as a regular write(2), except that it writes to a given
1646 position in the file without changing the file pointer. The first
1647 three arguments are the same as for pth_write(3) with the addition
1648 of a fourth argument offset for the desired position inside the
1649 file.
1650
1651 ssize_t pth_recv(int fd, void *buf, size_t nbytes, int flags);
1652 This is a variant of the SUSv2 recv(2) function and equal to
1653 ``pth_recvfrom(fd, buf, nbytes, flags, NULL, 0)''.
1654
1655 ssize_t pth_recvfrom(int fd, void *buf, size_t nbytes, int flags,
1656 struct sockaddr *from, socklen_t *fromlen);
1657 This is a variant of the SUSv2 recvfrom(2) function. It reads up to
1658 nbytes bytes into buf from file descriptor fd while using flags and
1659 from/fromlen. The difference between recvfrom(2) and
1660 pth_recvfrom(2) is that pth_recvfrom(2) suspends execution of the
1661 current thread until the file descriptor is ready for reading. For
1662 more details about the arguments and return code semantics see
1663 recvfrom(2).
1664
1665 ssize_t pth_send(int fd, const void *buf, size_t nbytes, int flags);
1666 This is a variant of the SUSv2 send(2) function and equal to
1667 ``pth_sendto(fd, buf, nbytes, flags, NULL, 0)''.
1668
1669 ssize_t pth_sendto(int fd, const void *buf, size_t nbytes, int flags,
1670 const struct sockaddr *to, socklen_t tolen);
1671 This is a variant of the SUSv2 sendto(2) function. It writes nbytes
1672 bytes from buf to file descriptor fd while using flags and
1673 to/tolen. The difference between sendto(2) and pth_sendto(2) is
1674 that pth_sendto(2) suspends execution of the current thread until
1675 the file descriptor is ready for writing. For more details about
1676 the arguments and return code semantics see sendto(2).
1677
1679 The following example is a useless server which does nothing more than
1680 listening on TCP port 12345 and displaying the current time to the
1681 socket when a connection was established. For each incoming connection
1682 a thread is spawned. Additionally, to see more multithreading, a use‐
1683 less ticker thread runs simultaneously which outputs the current time
1684 to "stderr" every 5 seconds. The example contains no error checking and
1685 is only intended to show you the look and feel of Pth.
1686
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>       /* strlen(3) */
          #include <errno.h>
          #include <time.h>         /* time(3), ctime(3) */
          #include <sys/types.h>
          #include <sys/socket.h>
          #include <netinet/in.h>
          #include <arpa/inet.h>
          #include <signal.h>
          #include <netdb.h>
          #include <unistd.h>
          #include "pth.h"
1698
1699 #define PORT 12345
1700
1701 /* the socket connection handler thread */
1702 static void *handler(void *_arg)
1703 {
1704 int fd = (int)_arg;
1705 time_t now;
1706 char *ct;
1707
1708 now = time(NULL);
1709 ct = ctime(&now);
1710 pth_write(fd, ct, strlen(ct));
1711 close(fd);
1712 return NULL;
1713 }
1714
1715 /* the stderr time ticker thread */
1716 static void *ticker(void *_arg)
1717 {
1718 time_t now;
1719 char *ct;
1720 float load;
1721
1722 for (;;) {
1723 pth_sleep(5);
1724 now = time(NULL);
1725 ct = ctime(&now);
1726 ct[strlen(ct)-1] = '\0';
1727 pth_ctrl(PTH_CTRL_GETAVLOAD, &load);
1728 printf("ticker: time: %s, average load: %.2f\n", ct, load);
1729 }
1730 }
1731
1732 /* the main thread/procedure */
1733 int main(int argc, char *argv[])
1734 {
1735 pth_attr_t attr;
1736 struct sockaddr_in sar;
1737 struct protoent *pe;
1738 struct sockaddr_in peer_addr;
          socklen_t peer_len;
1740 int sa, sw;
1741 int port;
1742
1743 pth_init();
1744 signal(SIGPIPE, SIG_IGN);
1745
1746 attr = pth_attr_new();
1747 pth_attr_set(attr, PTH_ATTR_NAME, "ticker");
1748 pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);
1749 pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);
1750 pth_spawn(attr, ticker, NULL);
1751
1752 pe = getprotobyname("tcp");
1753 sa = socket(AF_INET, SOCK_STREAM, pe->p_proto);
1754 sar.sin_family = AF_INET;
1755 sar.sin_addr.s_addr = INADDR_ANY;
1756 sar.sin_port = htons(PORT);
1757 bind(sa, (struct sockaddr *)&sar, sizeof(struct sockaddr_in));
1758 listen(sa, 10);
1759
1760 pth_attr_set(attr, PTH_ATTR_NAME, "handler");
1761 for (;;) {
1762 peer_len = sizeof(peer_addr);
1763 sw = pth_accept(sa, (struct sockaddr *)&peer_addr, &peer_len);
1764 pth_spawn(attr, handler, (void *)sw);
1765 }
1766 }
1767
1769 In this section we will discuss the canonical ways to establish the
1770 build environment for a Pth based program. The possibilities supported
1771 by Pth range from very simple environments to rather complex ones.
1772
1773 Manual Build Environment (Novice)
1774
1775 As a first example, assume we have the above test program staying in
1776 the source file "foo.c". Then we can create a very simple build envi‐
1777 ronment by just adding the following "Makefile":
1778
1779 $ vi Makefile
1780 ⎪ CC = cc
1781 ⎪ CFLAGS = `pth-config --cflags`
1782 ⎪ LDFLAGS = `pth-config --ldflags`
1783 ⎪ LIBS = `pth-config --libs`
1784 ⎪
1785 ⎪ all: foo
1786 ⎪ foo: foo.o
1787 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1788 ⎪ foo.o: foo.c
1789 ⎪ $(CC) $(CFLAGS) -c foo.c
1790 ⎪ clean:
1791 ⎪ rm -f foo foo.o
1792
1793 This imports the necessary compiler and linker flags on-the-fly from
1794 the Pth installation via its "pth-config" program. This approach is
    straightforward and works fine for small projects.
1796
1797 Autoconf Build Environment (Advanced)
1798
1799 The previous approach is simple but inflexible. First, to speed up
1800 building, it would be nice to not expand the compiler and linker flags
1801 every time the compiler is started. Second, it would be useful to also
1802 be able to build against uninstalled Pth, that is, against a Pth source
1803 tree which was just configured and built, but not installed. Third, it
1804 would be also useful to allow checking of the Pth version to make sure
1805 it is at least a minimum required version. And finally, it would be
1806 also great to make sure Pth works correctly by first performing some
1807 sanity compile and run-time checks. All this can be done if we use GNU
1808 autoconf and the "AC_CHECK_PTH" macro provided by Pth. For this, we
1809 establish the following three files:
1810
1811 First we again need the "Makefile", but this time it contains autoconf
1812 placeholders and additional cleanup targets. And we create it under the
1813 name "Makefile.in", because it is now an input file for autoconf:
1814
1815 $ vi Makefile.in
1816 ⎪ CC = @CC@
1817 ⎪ CFLAGS = @CFLAGS@
1818 ⎪ LDFLAGS = @LDFLAGS@
1819 ⎪ LIBS = @LIBS@
1820 ⎪
1821 ⎪ all: foo
1822 ⎪ foo: foo.o
1823 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1824 ⎪ foo.o: foo.c
1825 ⎪ $(CC) $(CFLAGS) -c foo.c
1826 ⎪ clean:
1827 ⎪ rm -f foo foo.o
1828 ⎪ distclean:
1829 ⎪ rm -f foo foo.o
1830 ⎪ rm -f config.log config.status config.cache
1831 ⎪ rm -f Makefile
1832
1833 Because autoconf generates additional files, we added a canonical
1834 "distclean" target which cleans this up. Secondly, we wrote "config‐
1835 ure.ac", a (minimal) autoconf script specification:
1836
1837 $ vi configure.ac
1838 ⎪ AC_INIT(Makefile.in)
1839 ⎪ AC_CHECK_PTH(1.3.0)
1840 ⎪ AC_OUTPUT(Makefile)
1841
1842 Then we let autoconf's "aclocal" program generate for us an "aclo‐
1843 cal.m4" file containing Pth's "AC_CHECK_PTH" macro. Then we generate
1844 the final "configure" script out of this "aclocal.m4" file and the
1845 "configure.ac" file:
1846
1847 $ aclocal --acdir=`pth-config --acdir`
1848 $ autoconf
1849
1850 After these steps, the working directory should look similar to this:
1851
1852 $ ls -l
1853 -rw-r--r-- 1 rse users 176 Nov 3 11:11 Makefile.in
1854 -rw-r--r-- 1 rse users 15314 Nov 3 11:16 aclocal.m4
1855 -rwxr-xr-x 1 rse users 52045 Nov 3 11:16 configure
1856 -rw-r--r-- 1 rse users 63 Nov 3 11:11 configure.ac
1857 -rw-r--r-- 1 rse users 4227 Nov 3 11:11 foo.c
1858
1859 If we now run "configure" we get a correct "Makefile" which immediately
1860 can be used to build "foo" (assuming that Pth is already installed
1861 somewhere, so that "pth-config" is in $PATH):
1862
1863 $ ./configure
1864 creating cache ./config.cache
1865 checking for gcc... gcc
1866 checking whether the C compiler (gcc ) works... yes
1867 checking whether the C compiler (gcc ) is a cross-compiler... no
1868 checking whether we are using GNU C... yes
1869 checking whether gcc accepts -g... yes
1870 checking how to run the C preprocessor... gcc -E
1871 checking for GNU Pth... version 1.3.0, installed under /usr/local
1872 updating cache ./config.cache
1873 creating ./config.status
1874 creating Makefile
1875 rse@en1:/e/gnu/pth/ac
1876 $ make
1877 gcc -g -O2 -I/usr/local/include -c foo.c
1878 gcc -L/usr/local/lib -o foo foo.o -lpth
1879
1880 If Pth is installed in non-standard locations or "pth-config" is not in
    $PATH, one just has to give the "configure" script a hint about the
    location by running "configure" with the option "--with-pth=dir" (where
1883 dir is the argument which was used with the "--prefix" option when Pth
1884 was installed).
1885
1886 Autoconf Build Environment with Local Copy of Pth (Expert)
1887
    Finally let us assume the "foo" program is distributed under either the
    GPL or LGPL license and we want to make it a stand-alone package
1890 for easier distribution and installation. That is, we don't want to
1891 oblige the end-user to install Pth just to allow our "foo" package to
1892 compile. For this, it is a convenient practice to include the required
1893 libraries (here Pth) into the source tree of the package (here "foo").
1894 Pth ships with all necessary support to allow us to easily achieve this
1895 approach. Say, we want Pth in a subdirectory named "pth/" and this
1896 directory should be seamlessly integrated into the configuration and
1897 build process of "foo".
1898
1899 First we again start with the "Makefile.in", but this time it is a more
1900 advanced version which supports subdirectory movement:
1901
1902 $ vi Makefile.in
1903 ⎪ CC = @CC@
1904 ⎪ CFLAGS = @CFLAGS@
1905 ⎪ LDFLAGS = @LDFLAGS@
1906 ⎪ LIBS = @LIBS@
1907 ⎪
1908 ⎪ SUBDIRS = pth
1909 ⎪
1910 ⎪ all: subdirs_all foo
1911 ⎪
1912 ⎪ subdirs_all:
1913 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=all
1914 ⎪ subdirs_clean:
1915 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=clean
1916 ⎪ subdirs_distclean:
1917 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=distclean
1918 ⎪ subdirs:
1919 ⎪ @for subdir in $(SUBDIRS); do \
1920 ⎪ echo "===> $$subdir ($(TARGET))"; \
1921 ⎪ (cd $$subdir; $(MAKE) $(MFLAGS) $(TARGET) ⎪⎪ exit 1) ⎪⎪ exit 1; \
1922 ⎪ echo "<=== $$subdir"; \
1923 ⎪ done
1924 ⎪
1925 ⎪ foo: foo.o
1926 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1927 ⎪ foo.o: foo.c
1928 ⎪ $(CC) $(CFLAGS) -c foo.c
1929 ⎪
1930 ⎪ clean: subdirs_clean
1931 ⎪ rm -f foo foo.o
1932 ⎪ distclean: subdirs_distclean
1933 ⎪ rm -f foo foo.o
1934 ⎪ rm -f config.log config.status config.cache
1935 ⎪ rm -f Makefile
1936
1937 Then we create a slightly different autoconf script "configure.ac":
1938
1939 $ vi configure.ac
1940 ⎪ AC_INIT(Makefile.in)
1941 ⎪ AC_CONFIG_AUX_DIR(pth)
1942 ⎪ AC_CHECK_PTH(1.3.0, subdir:pth --disable-tests)
1943 ⎪ AC_CONFIG_SUBDIRS(pth)
1944 ⎪ AC_OUTPUT(Makefile)
1945
1946 Here we provided a default value for "foo"'s "--with-pth" option as the
1947 second argument to "AC_CHECK_PTH" which indicates that Pth can be found
1948 in the subdirectory named "pth/". Additionally we specified that the
1949 "--disable-tests" option of Pth should be passed to the "pth/" subdi‐
1950 rectory, because we need only to build the Pth library itself. And we
    added an "AC_CONFIG_SUBDIRS" call which indicates to autoconf that it
1952 should configure the "pth/" subdirectory, too. The "AC_CONFIG_AUX_DIR"
1953 directive was added just to make autoconf happy, because it wants to
    find an "install.sh" or "shtool" script if "AC_CONFIG_SUBDIRS" is used.
1955
1956 Now we let autoconf's "aclocal" program again generate for us an "aclo‐
1957 cal.m4" file with the contents of Pth's "AC_CHECK_PTH" macro. Finally
1958 we generate the "configure" script out of this "aclocal.m4" file and
1959 the "configure.ac" file.
1960
1961 $ aclocal --acdir=`pth-config --acdir`
1962 $ autoconf
1963
1964 Now we have to create the "pth/" subdirectory itself. For this, we
1965 extract the Pth distribution to the "foo" source tree and just rename
1966 it to "pth/":
1967
1968 $ gunzip <pth-X.Y.Z.tar.gz ⎪ tar xvf -
1969 $ mv pth-X.Y.Z pth
1970
1971 Optionally to reduce the size of the "pth/" subdirectory, we can strip
1972 down the Pth sources to a minimum with the striptease feature:
1973
1974 $ cd pth
1975 $ ./configure
1976 $ make striptease
1977 $ cd ..
1978
1979 After this the source tree of "foo" should look similar to this:
1980
1981 $ ls -l
1982 -rw-r--r-- 1 rse users 709 Nov 3 11:51 Makefile.in
1983 -rw-r--r-- 1 rse users 16431 Nov 3 12:20 aclocal.m4
1984 -rwxr-xr-x 1 rse users 57403 Nov 3 12:21 configure
1985 -rw-r--r-- 1 rse users 129 Nov 3 12:21 configure.ac
1986 -rw-r--r-- 1 rse users 4227 Nov 3 11:11 foo.c
1987 drwxr-xr-x 2 rse users 3584 Nov 3 12:36 pth
1988 $ ls -l pth/
1989 -rw-rw-r-- 1 rse users 26344 Nov 1 20:12 COPYING
1990 -rw-rw-r-- 1 rse users 2042 Nov 3 12:36 Makefile.in
1991 -rw-rw-r-- 1 rse users 3967 Nov 1 19:48 README
1992 -rw-rw-r-- 1 rse users 340 Nov 3 12:36 README.1st
1993 -rw-rw-r-- 1 rse users 28719 Oct 31 17:06 config.guess
1994 -rw-rw-r-- 1 rse users 24274 Aug 18 13:31 config.sub
1995 -rwxrwxr-x 1 rse users 155141 Nov 3 12:36 configure
1996 -rw-rw-r-- 1 rse users 162021 Nov 3 12:36 pth.c
1997 -rw-rw-r-- 1 rse users 18687 Nov 2 15:19 pth.h.in
1998 -rw-rw-r-- 1 rse users 5251 Oct 31 12:46 pth_acdef.h.in
1999 -rw-rw-r-- 1 rse users 2120 Nov 1 11:27 pth_acmac.h.in
2000 -rw-rw-r-- 1 rse users 2323 Nov 1 11:27 pth_p.h.in
2001 -rw-rw-r-- 1 rse users 946 Nov 1 11:27 pth_vers.c
2002 -rw-rw-r-- 1 rse users 26848 Nov 1 11:27 pthread.c
2003 -rw-rw-r-- 1 rse users 18772 Nov 1 11:27 pthread.h.in
2004 -rwxrwxr-x 1 rse users 26188 Nov 3 12:36 shtool
2005
2006 Now when we configure and build the "foo" package it looks similar to
2007 this:
2008
2009 $ ./configure
2010 creating cache ./config.cache
2011 checking for gcc... gcc
2012 checking whether the C compiler (gcc ) works... yes
2013 checking whether the C compiler (gcc ) is a cross-compiler... no
2014 checking whether we are using GNU C... yes
2015 checking whether gcc accepts -g... yes
2016 checking how to run the C preprocessor... gcc -E
2017 checking for GNU Pth... version 1.3.0, local under pth
2018 updating cache ./config.cache
2019 creating ./config.status
2020 creating Makefile
2021 configuring in pth
2022 running /bin/sh ./configure --enable-subdir --enable-batch
2023 --disable-tests --cache-file=.././config.cache --srcdir=.
2024 loading cache .././config.cache
2025 checking for gcc... (cached) gcc
2026 checking whether the C compiler (gcc ) works... yes
2027 checking whether the C compiler (gcc ) is a cross-compiler... no
2028 [...]
2029 $ make
2030 ===> pth (all)
2031 ./shtool scpp -o pth_p.h -t pth_p.h.in -Dcpp -Cintern -M '==#==' pth.c
2032 pth_vers.c
2033 gcc -c -I. -O2 -pipe pth.c
2034 gcc -c -I. -O2 -pipe pth_vers.c
2035 ar rc libpth.a pth.o pth_vers.o
2036 ranlib libpth.a
2037 <=== pth
2038 gcc -g -O2 -Ipth -c foo.c
2039 gcc -Lpth -o foo foo.o -lpth
2040
2041 As you can see, autoconf now automatically configures the local
2042 (stripped down) copy of Pth in the subdirectory "pth/" and the "Make‐
2043 file" automatically builds the subdirectory, too.
2044
    Pth by default uses an explicit API, including the system calls. For
    instance you have to explicitly use pth_read(3) when you need a
    thread-aware read(3) and cannot expect that by just calling read(3)
    only the current thread is blocked. Instead, with the standard read(3)
    call the whole process will be blocked. But for some applications
    (mainly those consisting of lots of third-party code) this can be
    inconvenient. Here it is required that a call to read(3) `magically'
    means pth_read(3). The problem is that Pth cannot provide such magic
    by default, because it is not really portable. Nevertheless Pth
    provides a two-step approach to solve this problem:
2056
2057 Soft System Call Mapping
2058
2059 This variant is available on all platforms and can always be enabled by
2060 building Pth with "--enable-syscall-soft". This then triggers some
2061 "#define"'s in the "pth.h" header which map for instance read(3) to
2062 pth_read(3), etc. Currently the following functions are mapped:
2063 fork(2), nanosleep(3), usleep(3), sleep(3), sigwait(3), waitpid(2),
2064 system(3), select(2), poll(2), connect(2), accept(2), read(2),
2065 write(2), recv(2), send(2), recvfrom(2), sendto(2).
2066
    The drawback of this approach is that all source files of the
    application where these function calls occur have to include "pth.h",
    of course. And this also means that existing libraries,
2070 including the vendor's stdio, usually will still block the whole
2071 process if one of its I/O functions block.
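
    A sketch of what soft mapping means for application code, assuming the
    Pth installation was built with "--enable-syscall-soft":

          /* compiled against a Pth built with --enable-syscall-soft */
          #include <unistd.h>
          #include "pth.h"    /* must be included in every such source file */

          ssize_t read_chunk(int fd, void *buf, size_t n)
          {
              /* with soft mapping this read() is mapped to pth_read(),
                 so only the calling thread blocks, not the whole process */
              return read(fd, buf, n);
          }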
2072
2073 Hard System Call Mapping
2074
2075 This variant is available only on those platforms where the syscall(2)
2076 function exists and there it can be enabled by building Pth with
2077 "--enable-syscall-hard". This then builds wrapper functions (for
    instance read(3)) into the Pth library which internally call the real
2079 Pth replacement functions (pth_read(3)). Currently the following func‐
2080 tions are mapped: fork(2), nanosleep(3), usleep(3), sleep(3), wait‐
2081 pid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2),
2082 write(2).
2083
    The drawback of this approach is that it depends on the syscall(2)
    interface and prototype conflicts can occur while building the wrapper
    functions due to different function signatures in the vendor C header
    files. But the advantage of this mapping variant is that the source
    files of the application where these function calls occur do not have to
2089 include "pth.h" and that existing libraries, including the vendor's
2090 stdio, magically become thread-aware (and then block only the current
2091 thread).
2092
2094 Pth is very portable because it has only one part which perhaps has to
2095 be ported to new platforms (the machine context initialization). But it
2096 is written in a way which works on mostly all Unix platforms which sup‐
2097 port makecontext(2) or at least sigstack(2) or sigaltstack(2) [see
2098 "pth_mctx.c" for details]. Any other Pth code is POSIX and ANSI C based
2099 only.
2100
2101 The context switching is done via either SUSv2 makecontext(2) or POSIX
    [sig]setjmp(3) and [sig]longjmp(3). Here all CPU registers, the
2103 program counter and the stack pointer are switched. Additionally the
2104 Pth dispatcher switches also the global Unix "errno" variable [see
2105 "pth_mctx.c" for details] and the signal mask (either implicitly via
    sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls).
2107
2108 The Pth event manager is mainly select(2) and gettimeofday(2) based,
2109 i.e., the current time is fetched via gettimeofday(2) once per context
2110 switch for time calculations and all I/O events are implemented via a
2111 single central select(2) call [see "pth_sched.c" for details].
2112
2113 The thread control block management is done via virtual priority queues
2114 without any additional data structure overhead. For this, the queue
2115 linkage attributes are part of the thread control blocks and the queues
2116 are actually implemented as rings with a selected element as the entry
2117 point [see "pth_tcb.h" and "pth_pqueue.c" for details].
2118
2119 Most time critical code sections (especially the dispatcher and event
    manager) are sped up by inline functions (implemented as ANSI C pre-
2121 processor macros). Additionally any debugging code is completely
2122 removed from the source when not built with "-DPTH_DEBUG" (see Autoconf
2123 "--enable-debug" option), i.e., not only stub functions remain [see
2124 "pth_debug.c" for details].
2125
2127 Pth (intentionally) provides no replacements for non-thread-safe func‐
2128 tions (like strtok(3) which uses a static internal buffer) or synchro‐
2129 nous system functions (like gethostbyname(3) which doesn't provide an
2130 asynchronous mode where it doesn't block). When you want to use those
    functions in your server application together with threads, you have to
2132 either link the application against special third-party libraries (or
2133 for thread-safe/reentrant functions possibly against an existing
2134 "libc_r" of the platform vendor). For an asynchronous DNS resolver
2135 library use the GNU adns package from Ian Jackson ( see
2136 http://www.gnu.org/software/adns/adns.html ).
2137
2139 The Pth library was designed and implemented between February and July
2140 1999 by Ralf S. Engelschall after evaluating numerous (mostly preemp‐
2141 tive) thread libraries and after intensive discussions with Peter
2142 Simons, Martin Kraemer, Lars Eilebrecht and Ralph Babel related to an
2143 experimental (matrix based) non-preemptive C++ scheduler class written
2144 by Peter Simons.
2145
2146 Pth was then implemented in order to combine the non-preemptive
2147 approach of multithreading (which provides better portability and per‐
2148 formance) with an API similar to the popular one found in Pthread
2149 libraries (which provides easy programming).
2150
2151 So the essential idea of the non-preemptive approach was taken over
    from Peter Simons' scheduler. The priority-based scheduling algorithm
2153 was suggested by Martin Kraemer. Some code inspiration also came from
2154 an experimental threading library (rsthreads) written by Robert S. Thau
2155 for an ancient internal test version of the Apache webserver. The con‐
2156 cept and API of message ports was borrowed from AmigaOS' Exec subsys‐
2157 tem. The concept and idea for the flexible event mechanism came from
2158 Paul Vixie's eventlib (which can be found as a part of BIND v8).
2159
2161 If you think you have found a bug in Pth, you should send a report as
2162 complete as possible to bug-pth@gnu.org. If you can, please try to fix
2163 the problem and include a patch, made with '"diff -u3"', in your
2164 report. Always, at least, include a reasonable amount of description in
2165 your report to allow the author to deterministically reproduce the bug.
2166
2167 For further support you additionally can subscribe to the
2168 pth-users@gnu.org mailing list by sending an Email to
2169 pth-users-request@gnu.org with `"subscribe pth-users"' (or `"subscribe
2170 pth-users" address' if you want to subscribe from a particular Email
2171 address) in the body. Then you can discuss your issues with other Pth
2172 users by sending messages to pth-users@gnu.org. Currently (as of August
2173 2000) you can reach about 110 Pth users on this mailing list. Old post‐
2174 ings you can find at http://www.mail-archive.com/pth-users@gnu.org/.
2175
2177 Related Web Locations
2178
2179 `comp.programming.threads Newsgroup Archive', http://www.deja.com/top‐
    ics_if.xp?search=topic&group=comp.programming.threads
2181
2182 `comp.programming.threads Frequently Asked Questions (F.A.Q.)',
2183 http://www.lambdacs.com/newsgroup/FAQ.html
2184
2185 `Multithreading - Definitions and Guidelines', Numeric Quest Inc 1998;
2186 http://www.numeric-quest.com/lang/multi-frame.html
2187
2188 `The Single UNIX Specification, Version 2 - Threads', The Open Group
    1997; http://www.opengroup.org/onlinepubs/007908799/xsh/threads.html
2190
2191 SMI Thread Resources, Sun Microsystems Inc; http://www.sun.com/work‐
2192 shop/threads/
2193
2194 Bibliography on threads and multithreading, Torsten Amundsen;
2195 http://liinwww.ira.uka.de/bibliography/Os/threads.html
2196
2197 Related Books
2198
2199 B. Nichols, D. Buttlar, J.P. Farrel: `Pthreads Programming - A POSIX
2200 Standard for Better Multiprocessing', O'Reilly 1996; ISBN 1-56592-115-1
2201
2202 B. Lewis, D. J. Berg: `Multithreaded Programming with Pthreads', Sun
2203 Microsystems Press, Prentice Hall 1998; ISBN 0-13-680729-1
2204
2205 B. Lewis, D. J. Berg: `Threads Primer - A Guide To Multithreaded Pro‐
2206 gramming', Prentice Hall 1996; ISBN 0-13-443698-9
2207
2208 S. J. Norton, M. D. Dipasquale: `Thread Time - The Multithreaded Pro‐
2209 gramming Guide', Prentice Hall 1997; ISBN 0-13-190067-6
2210
2211 D. R. Butenhof: `Programming with POSIX Threads', Addison Wesley 1997;
2212 ISBN 0-201-63392-2
2213
2214 Related Manpages
2215
2216 pth-config(1), pthread(3).
2217
2218 getcontext(2), setcontext(2), makecontext(2), swapcontext(2),
2219 sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2),
2220 sigaddset(2), sigprocmask(2), sigsuspend(2), sigsetjmp(3), sig‐
2221 longjmp(3), setjmp(3), longjmp(3), select(2), gettimeofday(2).
2222
2224 Ralf S. Engelschall
2225 rse@engelschall.com
2226 www.engelschall.com
2227
2228
2229
08-Jun-2006                      GNU Pth 2.0.7                        pth(3)