pth(3)                     GNU Portable Threads                     pth(3)
2
3
4
NAME
       pth - GNU Portable Threads
7
VERSION
       GNU Pth 2.0.7 (08-Jun-2006)
10
SYNOPSIS
   Global Library Management
13 pth_init, pth_kill, pth_ctrl, pth_version.
14
15 Thread Attribute Handling
16 pth_attr_of, pth_attr_new, pth_attr_init, pth_attr_set,
17 pth_attr_get, pth_attr_destroy.
18
19 Thread Control
20 pth_spawn, pth_once, pth_self, pth_suspend, pth_resume, pth_yield,
21 pth_nap, pth_wait, pth_cancel, pth_abort, pth_raise, pth_join,
22 pth_exit.
23
24 Utilities
25 pth_fdmode, pth_time, pth_timeout, pth_sfiodisc.
26
27 Cancellation Management
28 pth_cancel_point, pth_cancel_state.
29
30 Event Handling
31 pth_event, pth_event_typeof, pth_event_extract, pth_event_concat,
32 pth_event_isolate, pth_event_walk, pth_event_status,
33 pth_event_free.
34
35 Key-Based Storage
36 pth_key_create, pth_key_delete, pth_key_setdata, pth_key_getdata.
37
38 Message Port Communication
39 pth_msgport_create, pth_msgport_destroy, pth_msgport_find, pth_msg‐
40 port_pending, pth_msgport_put, pth_msgport_get, pth_msgport_reply.
41
42 Thread Cleanups
43 pth_cleanup_push, pth_cleanup_pop.
44
45 Process Forking
46 pth_atfork_push, pth_atfork_pop, pth_fork.
47
48 Synchronization
49 pth_mutex_init, pth_mutex_acquire, pth_mutex_release,
50 pth_rwlock_init, pth_rwlock_acquire, pth_rwlock_release,
51 pth_cond_init, pth_cond_await, pth_cond_notify, pth_barrier_init,
52 pth_barrier_reach.
53
54 User-Space Context
55 pth_uctx_create, pth_uctx_make, pth_uctx_switch, pth_uctx_destroy.
56
57 Generalized POSIX Replacement API
58 pth_sigwait_ev, pth_accept_ev, pth_connect_ev, pth_select_ev,
59 pth_poll_ev, pth_read_ev, pth_readv_ev, pth_write_ev,
60 pth_writev_ev, pth_recv_ev, pth_recvfrom_ev, pth_send_ev,
61 pth_sendto_ev.
62
63 Standard POSIX Replacement API
64 pth_nanosleep, pth_usleep, pth_sleep, pth_waitpid, pth_system,
65 pth_sigmask, pth_sigwait, pth_accept, pth_connect, pth_select,
66 pth_pselect, pth_poll, pth_read, pth_readv, pth_write, pth_writev,
67 pth_pread, pth_pwrite, pth_recv, pth_recvfrom, pth_send,
68 pth_sendto.
69
DESCRIPTION
         ____  _   _
        |  _ \| |_| |__
        | |_) | __| '_ \         ``Only those who attempt
        |  __/| |_| | | |          the absurd can achieve
        |_|    \__|_| |_|          the impossible.''
76
77 Pth is a very portable POSIX/ANSI-C based library for Unix platforms
78 which provides non-preemptive priority-based scheduling for multiple
79 threads of execution (aka `multithreading') inside event-driven appli‐
80 cations. All threads run in the same address space of the application
81 process, but each thread has its own individual program counter, run-
82 time stack, signal mask and "errno" variable.
83
84 The thread scheduling itself is done in a cooperative way, i.e., the
85 threads are managed and dispatched by a priority- and event-driven non-
86 preemptive scheduler. The intention is that this way both better porta‐
87 bility and run-time performance is achieved than with preemptive sched‐
88 uling. The event facility allows threads to wait until various types of
89 internal and external events occur, including pending I/O on file
90 descriptors, asynchronous signals, elapsed timers, pending I/O on mes‐
91 sage ports, thread and process termination, and even results of custom‐
92 ized callback functions.
93
94 Pth also provides an optional emulation API for POSIX.1c threads
95 (`Pthreads') which can be used for backward compatibility to existing
96 multithreaded applications. See Pth's pthread(3) manual page for
97 details.
98
99 Threading Background
100
101 When programming event-driven applications, usually servers, lots of
102 regular jobs and one-shot requests have to be processed in parallel.
103 To efficiently simulate this parallel processing on uniprocessor
104 machines, we use `multitasking' -- that is, we have the application ask
105 the operating system to spawn multiple instances of itself. On Unix,
106 typically the kernel implements multitasking in a preemptive and prior‐
107 ity-based way through heavy-weight processes spawned with fork(2).
108 These processes usually do not share a common address space. Instead
       they are clearly separated from each other, and are created by directly
       cloning a process address space (although modern kernels use memory
111 segment mapping and copy-on-write semantics to avoid unnecessary copy‐
112 ing of physical memory).
113
114 The drawbacks are obvious: Sharing data between the processes is com‐
       plicated, and can usually only be done efficiently through shared mem-
       ory (which itself is not very portable). Synchronization is compli‐
117 cated because of the preemptive nature of the Unix scheduler (one has
118 to use atomic locks, etc). The machine's resources can be exhausted
119 very quickly when the server application has to serve too many long-
120 running requests (heavy-weight processes cost memory). And when each
       request spawns a sub-process to handle it, the server performance and
       responsiveness are horrible (heavy-weight processes cost time to spawn).
123 Finally, the server application doesn't scale very well with the load
124 because of these resource problems. In practice, lots of tricks are
125 usually used to overcome these problems - ranging from pre-forked sub-
126 process pools to semi-serialized processing, etc.
127
128 One of the most elegant ways to solve these resource- and data-sharing
129 problems is to have multiple light-weight threads of execution inside a
130 single (heavy-weight) process, i.e., to use multithreading. Those
131 threads usually improve responsiveness and performance of the applica‐
       tion, often improve and simplify the internal program structure, and,
       most importantly, require fewer system resources than heavy-weight pro‐
134 cesses. Threads are neither the optimal run-time facility for all types
135 of applications, nor can all applications benefit from them. But at
136 least event-driven server applications usually benefit greatly from
137 using threads.
138
139 The World of Threading
140
       Even though lots of documents exist which describe and define the
142 world of threading, to understand Pth, you need only basic knowledge
143 about threading. The following definitions of thread-related terms
144 should at least help you understand thread programming enough to allow
145 you to use Pth.
146
147 o process vs. thread
148 A process on Unix systems consists of at least the following funda‐
149 mental ingredients: virtual memory table, program code, program
150 counter, heap memory, stack memory, stack pointer, file descriptor
151 set, signal table. On every process switch, the kernel saves and
152 restores these ingredients for the individual processes. On the other
153 hand, a thread consists of only a private program counter, stack mem‐
         ory, stack pointer and signal table. All other ingredients, in
         particular the virtual memory, are shared with the other threads of
         the same process.
157
158 o kernel-space vs. user-space threading
159 Threads on a Unix platform traditionally can be implemented either
160 inside kernel-space or user-space. When threads are implemented by
161 the kernel, the thread context switches are performed by the kernel
162 without the application's knowledge. Similarly, when threads are
163 implemented in user-space, the thread context switches are performed
164 by an application library, without the kernel's knowledge. There also
165 are hybrid threading approaches where, typically, a user-space
166 library binds one or more user-space threads to one or more kernel-
         space threads (there usually called light-weight processes, or LWPs
         for short).
169
170 User-space threads are usually more portable and can perform faster
171 and cheaper context switches (for instance via swapcontext(2) or
172 setjmp(3)/longjmp(3)) than kernel based threads. On the other hand,
173 kernel-space threads can take advantage of multiprocessor machines
174 and don't have any inherent I/O blocking problems. Kernel-space
         threads are usually scheduled in a preemptive way side-by-side with the
176 underlying processes. User-space threads on the other hand use either
177 preemptive or non-preemptive scheduling.
178
179 o preemptive vs. non-preemptive thread scheduling
180 In preemptive scheduling, the scheduler lets a thread execute until a
181 blocking situation occurs (usually a function call which would block)
         or the assigned timeslice elapses. Then it withdraws control from the
         thread without giving the thread a chance to object. This is usually
184 realized by interrupting the thread through a hardware interrupt sig‐
185 nal (for kernel-space threads) or a software interrupt signal (for
186 user-space threads), like "SIGALRM" or "SIGVTALRM". In non-preemptive
         scheduling, once a thread has received control from the scheduler it
188 keeps it until either a blocking situation occurs (again a function
189 call which would block and instead switches back to the scheduler) or
190 the thread explicitly yields control back to the scheduler in a coop‐
191 erative way.
192
193 o concurrency vs. parallelism
194 Concurrency exists when at least two threads are in progress at the
195 same time. Parallelism arises when at least two threads are executing
         simultaneously. Real parallelism can only be achieved on multiproces‐
197 sor machines, of course. But one also usually speaks of parallelism
198 or high concurrency in the context of preemptive thread scheduling
199 and of low concurrency in the context of non-preemptive thread sched‐
200 uling.
201
202 o responsiveness
         The responsiveness of a system can be described by the user-visible
         delay until the system responds to an external request. When this
         delay is small enough and the user doesn't perceive a noticeable
         delay, the responsiveness of the system is considered good. When the
         user notices or is even annoyed by the delay, the responsiveness of
         the system is considered bad.
209
210 o reentrant, thread-safe and asynchronous-safe functions
         A reentrant function is one that behaves correctly when it is called
         simultaneously by several threads and those calls also execute
         simultaneously. Functions that access global state, such as memory
         or files, of course need to be carefully designed in order to be
         reentrant.
215 Two traditional approaches to solve these problems are caller-sup‐
216 plied states and thread-specific data.
217
         Thread-safety is the avoidance of data races, i.e., situations in
         which data is set to either a correct or an incorrect value depending
         upon the (unpredictable) order in which multiple threads access and
         modify the data. So a function is thread-safe when it still behaves
         semantically correctly when called simultaneously by several threads
         (it is not required that the functions also execute simultaneously).
         The traditional approach to achieving thread-safety is to wrap a
         function body with an internal mutual exclusion lock (aka `mutex').
         As you should recognize, reentrant is a stronger attribute than
         thread-safe, because it is harder to achieve and, in particular,
         results in no run-time contention between threads. So, a reentrant
         function is always thread-safe, but not vice versa.
230
231 Additionally there is a related attribute for functions named asyn‐
232 chronous-safe, which comes into play in conjunction with signal han‐
         dlers. This is closely related to the problem of reentrant functions.
         An asynchronous-safe function is one that can be called safely and
         without side effects from within a signal handler context. Usually
         very few functions are of this type, because an application is very
         restricted in what it may perform from within a signal handler
         (especially in which system functions it is allowed to call), mainly
         because only a few system functions are officially declared by POSIX
         as guaranteed to be asynchronous-safe. Asynchronous-safe functions
         usually also have to be reentrant.
242
243 User-Space Threads
244
       User-space threads can be implemented in various ways. The two tradi‐
246 tional approaches are:
247
248 1. Matrix-based explicit dispatching between small units of execution:
249
250 Here the global procedures of the application are split into small
251 execution units (each is required to not run for more than a few
252 milliseconds) and those units are implemented by separate functions.
253 Then a global matrix is defined which describes the execution (and
254 perhaps even dependency) order of these functions. The main server
255 procedure then just dispatches between these units by calling one
          function after the other, controlled by this matrix. Threads are
          then realized as multiple jump-trails through this matrix, with
          switching between these jump-trails driven by the corresponding
          events as they occur.
260
261 This approach gives the best possible performance, because one can
262 fine-tune the threads of execution by adjusting the matrix, and the
263 scheduling is done explicitly by the application itself. It is also
264 very portable, because the matrix is just an ordinary data struc‐
265 ture, and functions are a standard feature of ANSI C.
266
267 The disadvantage of this approach is that it is complicated to write
268 large applications with this approach, because in those applications
269 one quickly gets hundreds(!) of execution units and the control flow
270 inside such an application is very hard to understand (because it is
271 interrupted by function borders and one always has to remember the
272 global dispatching matrix to follow it). Additionally, all threads
273 operate on the same execution stack. Although this saves memory, it
274 is often nasty, because one cannot switch between threads in the
275 middle of a function. Thus the scheduling borders are the function
276 borders.
277
278 2. Context-based implicit scheduling between threads of execution:
279
          Here the idea is that one programs the application as with forked
          processes, i.e., one spawns a thread of execution and it runs from
          beginning to end without an interrupted control flow. Nevertheless,
          the control flow can still be interrupted - even in the middle of a
          function - and actually in a preemptive way, similar to what the
          kernel does for heavy-weight processes, i.e., every few milliseconds
          the user-space scheduler switches between the threads of execution.
          The thread itself doesn't notice this and usually (except for
          synchronization issues) doesn't have to care about it.
289
290 The advantage of this approach is that it's very easy to program,
291 because the control flow and context of a thread directly follows a
292 procedure without forced interrupts through function borders. Addi‐
293 tionally, the programming is very similar to a traditional and well
294 understood fork(2) based approach.
295
          The disadvantage is that although the general performance is
          increased compared to approaches based on heavy-weight processes,
          it is decreased compared to the matrix approach above, because
          implicit preemptive scheduling usually performs a lot more context
          switches (every user-space context switch costs some overhead, even
          though it is a lot cheaper than a kernel-level context switch) than
          explicit cooperative/non-preemptive scheduling.
303 Finally, there is no really portable POSIX/ANSI-C based way to
304 implement user-space preemptive threading. Either the platform
305 already has threads, or one has to hope that some semi-portable
306 package exists for it. And even those semi-portable packages usually
307 have to deal with assembler code and other nasty internals and are
308 not easy to port to forthcoming platforms.
309
310 So, in short: the matrix-dispatching approach is portable and fast, but
311 nasty to program. The thread scheduling approach is easy to program,
312 but suffers from synchronization and portability problems caused by its
313 preemptive nature.
314
315 The Compromise of Pth
316
317 But why not combine the good aspects of both approaches while avoiding
318 their bad aspects? That's the goal of Pth. Pth implements easy-to-pro‐
319 gram threads of execution, but avoids the problems of preemptive sched‐
320 uling by using non-preemptive scheduling instead.
321
322 This sounds like, and is, a useful approach. Nevertheless, one has to
323 keep the implications of non-preemptive thread scheduling in mind when
324 working with Pth. The following list summarizes a few essential points:
325
326 o Pth provides maximum portability, but NOT the fanciest features.
327
         This is because it uses a nifty and portable POSIX/ANSI-C approach
         for thread creation (and this way doesn't require any platform-
         dependent assembler hacks) and schedules the threads in a
         non-preemptive way (which doesn't require unportable facilities like
         "SIGVTALRM"). On
332 the other hand, this way not all fancy threading features can be
333 implemented. Nevertheless the available facilities are enough to
334 provide a robust and full-featured threading system.
335
336 o Pth increases the responsiveness and concurrency of an event-driven
337 application, but NOT the concurrency of number-crunching applica‐
338 tions.
339
340 The reason is the non-preemptive scheduling. Number-crunching appli‐
341 cations usually require preemptive scheduling to achieve concurrency
342 because of their long CPU bursts. For them, non-preemptive scheduling
343 (even together with explicit yielding) provides only the old concept
344 of `coroutines'. On the other hand, event driven applications benefit
345 greatly from non-preemptive scheduling. They have only short CPU
346 bursts and lots of events to wait on, and this way run faster under
347 non-preemptive scheduling because no unnecessary context switching
         occurs, as is the case with preemptive scheduling. That's why Pth
349 is mainly intended for server type applications, although there is no
350 technical restriction.
351
352 o Pth requires thread-safe functions, but NOT reentrant functions.
353
         This nice fact exists again because of the nature of non-preemptive
         scheduling, where a function isn't interrupted and this way cannot be
         reentered before it has returned. This is a great portability
         benefit, because thread-safety can be achieved more easily than
         reentrancy. In particular, this means that under Pth more existing
         third-party libraries can be used without side effects than is the
         case with other threading systems.
361
362 o Pth doesn't require any kernel support, but can NOT benefit from mul‐
363 tiprocessor machines.
364
365 This means that Pth runs on almost all Unix kernels, because the ker‐
366 nel does not need to be aware of the Pth threads (because they are
367 implemented entirely in user-space). On the other hand, it cannot
368 benefit from the existence of multiprocessors, because for this, ker‐
         nel support would be needed. In practice, this is no problem, because
         multiprocessor systems are rare, and portability is often more
         important than maximum concurrency.
372
373 The life cycle of a thread
374
375 To understand the Pth Application Programming Interface (API), it helps
376 to first understand the life cycle of a thread in the Pth threading
377 system. It can be illustrated with the following directed graph:
378
                      NEW
                       |
                       V
               +---> READY ---+
               |       ^      |
               |       |      V
            WAITING <--+-- RUNNING
                              |
               :              V
           SUSPENDED         DEAD
389
390 When a new thread is created, it is moved into the NEW queue of the
391 scheduler. On the next dispatching for this thread, the scheduler picks
392 it up from there and moves it to the READY queue. This is a queue con‐
393 taining all threads which want to perform a CPU burst. There they are
394 queued in priority order. On each dispatching step, the scheduler
395 always removes the thread with the highest priority only. It then
396 increases the priority of all remaining threads by 1, to prevent them
397 from `starving'.
398
399 The thread which was removed from the READY queue is the new RUNNING
400 thread (there is always just one RUNNING thread, of course). The RUN‐
401 NING thread is assigned execution control. After this thread yields
402 execution (either explicitly by yielding execution or implicitly by
       calling a function which would block) there are three possibilities:
       if it has terminated, it is moved to the DEAD queue; if it has events
       on which it wants to wait, it is moved into the WAITING queue;
       otherwise it is assumed to want to perform more CPU bursts and
       immediately enters the READY queue again.
408
409 Before the next thread is taken out of the READY queue, the WAITING
410 queue is checked for pending events. If one or more events occurred,
411 the threads that are waiting on them are immediately moved to the READY
412 queue.
413
414 The purpose of the NEW queue has to do with the fact that in Pth a
415 thread never directly switches to another thread. A thread always
416 yields execution to the scheduler and the scheduler dispatches to the
417 next thread. So a freshly spawned thread has to be kept somewhere until
418 the scheduler gets a chance to pick it up for scheduling. That is what
419 the NEW queue is for.
420
421 The purpose of the DEAD queue is to support thread joining. When a
422 thread is marked to be unjoinable, it is directly kicked out of the
423 system after it terminated. But when it is joinable, it enters the DEAD
424 queue. There it remains until another thread joins it.
425
426 Finally, there is a special separated queue named SUSPENDED, to where
427 threads can be manually moved from the NEW, READY or WAITING queues by
428 the application. The purpose of this special queue is to temporarily
429 absorb suspended threads until they are again resumed by the applica‐
430 tion. Suspended threads do not cost scheduling or event handling
431 resources, because they are temporarily completely out of the sched‐
432 uler's scope. If a thread is resumed, it is moved back to the queue
       from which it originally came and this way again enters the scheduler's
       scope.
435
APPLICATION PROGRAMMING INTERFACE
       In the following, the Pth Application Programming Interface (API) is
438 discussed in detail. With the knowledge given above, it should now be
439 easy to understand how to program threads with this API. In good Unix
440 tradition, Pth functions use special return values ("NULL" in pointer
441 context, "FALSE" in boolean context and "-1" in integer context) to
442 indicate an error condition and set (or pass through) the "errno" sys‐
443 tem variable to pass more details about the error to the caller.
444
445 Global Library Management
446
447 The following functions act on the library as a whole. They are used
448 to initialize and shutdown the scheduler and fetch information from it.
449
450 int pth_init(void);
451 This initializes the Pth library. It has to be the first Pth API
452 function call in an application, and is mandatory. It's usually
          done at the beginning of the main() function of the application. This
454 implicitly spawns the internal scheduler thread and transforms the
455 single execution unit of the current process into a thread (the
456 `main' thread). It returns "TRUE" on success and "FALSE" on error.
457
458 int pth_kill(void);
459 This kills the Pth library. It should be the last Pth API function
460 call in an application, but is not really required. It's usually
461 done at the end of the main function of the application. At least,
462 it has to be called from within the main thread. It implicitly
          kills all threads and transforms the calling thread back into the
464 single execution unit of the underlying process. The usual way to
465 terminate a Pth application is either a simple `"pth_exit(0);"' in
466 the main thread (which waits for all other threads to terminate,
467 kills the threading system and then terminates the process) or a
468 `"pth_kill(); exit(0)"' (which immediately kills the threading sys‐
          tem and terminates the process). pth_kill() returns immediately with
          a return code of "FALSE" if it is not called from within the main
          thread; else it kills the threading system and returns "TRUE".
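
          As an illustrative sketch (not part of the original manual; the
          thread function ticker is made up), a minimal Pth program combining
          pth_init(3), pth_spawn(3), pth_join(3) and pth_kill(3) could look
          like this:

              #include <pth.h>
              #include <stdio.h>

              static void *ticker(void *arg)
              {
                  int i;
                  for (i = 0; i < 3; i++) {
                      printf("tick %d\n", i);
                      pth_nap(pth_time(1, 0)); /* sleep one second; others keep running */
                  }
                  return NULL;
              }

              int main(void)
              {
                  pth_t tid;

                  if (!pth_init())             /* mandatory first Pth call */
                      return 1;
                  tid = pth_spawn(PTH_ATTR_DEFAULT, ticker, NULL);
                  pth_join(tid, NULL);         /* wait for the ticker thread */
                  pth_kill();                  /* shut down the threading system */
                  return 0;
              }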
472
473 long pth_ctrl(unsigned long query, ...);
474 This is a generalized query/control function for the Pth library.
475 The argument query is a bitmask formed out of one or more
476 "PTH_CTRL_"XXXX queries. Currently the following queries are sup‐
477 ported:
478
479 "PTH_CTRL_GETTHREADS"
480 This returns the total number of threads currently in exis‐
481 tence. This query actually is formed out of the combination of
482 queries for threads in a particular state, i.e., the
483 "PTH_CTRL_GETTHREADS" query is equal to the OR-combination of
484 all the following specialized queries:
485
486 "PTH_CTRL_GETTHREADS_NEW" for the number of threads in the new
487 queue (threads created via pth_spawn(3) but still not scheduled
488 once), "PTH_CTRL_GETTHREADS_READY" for the number of threads in
489 the ready queue (threads who want to do CPU bursts),
490 "PTH_CTRL_GETTHREADS_RUNNING" for the number of running threads
491 (always just one thread!), "PTH_CTRL_GETTHREADS_WAITING" for
492 the number of threads in the waiting queue (threads waiting for
493 events), "PTH_CTRL_GETTHREADS_SUSPENDED" for the number of
494 threads in the suspended queue (threads waiting to be resumed)
             and "PTH_CTRL_GETTHREADS_DEAD" for the number of threads in the
             dead queue (terminated threads waiting for a join).
497
498 "PTH_CTRL_GETAVLOAD"
499 This requires a second argument of type `"float *"' (pointer to
500 a floating point variable). It stores a floating point value
             describing the exponentially averaged load of the scheduler in
             this variable. The load is a function of the number of threads
             in the ready queue of the scheduler's dispatching unit. So a
             load around 1.0 means there is only one ready thread (the
             standard situation when the application is not under high load).
             A higher load value means there are more ready threads that want
             to do CPU bursts. The average load value is updated only once
             per second. The return value for this query is always 0.
509
510 "PTH_CTRL_GETPRIO"
511 This requires a second argument of type `"pth_t"' which identi‐
512 fies a thread. It returns the priority (ranging from
513 "PTH_PRIO_MIN" to "PTH_PRIO_MAX") of the given thread.
514
515 "PTH_CTRL_GETNAME"
516 This requires a second argument of type `"pth_t"' which identi‐
517 fies a thread. It returns the name of the given thread, i.e.,
             the return value of pth_ctrl(3) should be cast to a `"char *"'.
520
521 "PTH_CTRL_DUMPSTATE"
             This requires a second argument of type `"FILE *"' to which a
             summary of the internal Pth library state is written. The
524 main information which is currently written out is the current
525 state of the thread pool.
526
527 "PTH_CTRL_FAVOURNEW"
             This requires a second argument of type `"int"' which specifies
529 whether the GNU Pth scheduler favours new threads on startup,
530 i.e., whether they are moved from the new queue to the top
531 (argument is "TRUE") or middle (argument is "FALSE") of the
532 ready queue. The default is to favour new threads to make sure
533 they do not starve already at startup, although this slightly
534 violates the strict priority based scheduling.
535
536 The function returns "-1" on error.
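
          For illustration only (a sketch, not from the original manual;
          assuming <pth.h> and <stdio.h> are included), the queries above
          could be used like this to print some scheduler statistics:

              float load;
              long  nthreads;

              nthreads = pth_ctrl(PTH_CTRL_GETTHREADS);       /* total thread count */
              if (pth_ctrl(PTH_CTRL_GETAVLOAD, &load) != -1)  /* averaged load */
                  fprintf(stderr, "threads=%ld load=%.2f\n", nthreads, load);
              pth_ctrl(PTH_CTRL_DUMPSTATE, stderr);           /* dump thread pool state */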
537
538 long pth_version(void);
539 This function returns a hex-value `0xVRRTLL' which describes the
540 current Pth library version. V is the version, RR the revisions, LL
541 the level and T the type of the level (alphalevel=0, betalevel=1,
542 patchlevel=2, etc). For instance Pth version 1.0b1 is encoded as
543 0x100101. The reason for this unusual mapping is that this way the
          version number is steadily increasing. The same value is also
          available at compile time as "PTH_VERSION".
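
          As a small illustrative sketch (not from the original manual), the
          run-time library version can be compared against the compile-time
          header version this way:

              unsigned long v = (unsigned long)pth_version();

              printf("compiled against Pth 0x%lx, running with Pth 0x%lx\n",
                     (unsigned long)PTH_VERSION, v);
              if (v != (unsigned long)PTH_VERSION)
                  fprintf(stderr, "warning: pth.h and libpth versions differ\n");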
546
547 Thread Attribute Handling
548
       Attribute objects are used in Pth for two things: First, stand-alone/
       unbound attribute objects are used to store attributes for threads that
       are yet to be spawned. Second, bound attribute objects are used to
       modify attributes of already existing threads. The following attribute
       fields exist in attribute objects:
554
555 "PTH_ATTR_PRIO" (read-write) ["int"]
556 Thread Priority between "PTH_PRIO_MIN" and "PTH_PRIO_MAX". The
557 default is "PTH_PRIO_STD".
558
559 "PTH_ATTR_NAME" (read-write) ["char *"]
560 Name of thread (up to 40 characters are stored only), mainly for
561 debugging purposes.
562
563 "PTH_ATTR_DISPATCHES" (read-write) ["int"]
          In bound attribute objects, this field is incremented every time
565 the context is switched to the associated thread.
566
       "PTH_ATTR_JOINABLE" (read-write) ["int"]
568 The thread detachment type, "TRUE" indicates a joinable thread,
569 "FALSE" indicates a detached thread. When a thread is detached,
570 after termination it is immediately kicked out of the system
571 instead of inserted into the dead queue.
572
573 "PTH_ATTR_CANCEL_STATE" (read-write) ["unsigned int"]
574 The thread cancellation state, i.e., a combination of "PTH_CAN‐
575 CEL_ENABLE" or "PTH_CANCEL_DISABLE" and "PTH_CANCEL_DEFERRED" or
576 "PTH_CANCEL_ASYNCHRONOUS".
577
578 "PTH_ATTR_STACK_SIZE" (read-write) ["unsigned int"]
579 The thread stack size in bytes. Use lower values than 64 KB with
580 great care!
581
582 "PTH_ATTR_STACK_ADDR" (read-write) ["char *"]
583 A pointer to the lower address of a chunk of malloc(3)'ed memory
584 for the stack.
585
586 "PTH_ATTR_TIME_SPAWN" (read-only) ["pth_time_t"]
587 The time when the thread was spawned. This can be queried only
588 when the attribute object is bound to a thread.
589
590 "PTH_ATTR_TIME_LAST" (read-only) ["pth_time_t"]
591 The time when the thread was last dispatched. This can be queried
592 only when the attribute object is bound to a thread.
593
594 "PTH_ATTR_TIME_RAN" (read-only) ["pth_time_t"]
595 The total time the thread was running. This can be queried only
596 when the attribute object is bound to a thread.
597
598 "PTH_ATTR_START_FUNC" (read-only) ["void *(*)(void *)"]
599 The thread start function. This can be queried only when the
600 attribute object is bound to a thread.
601
602 "PTH_ATTR_START_ARG" (read-only) ["void *"]
603 The thread start argument. This can be queried only when the
604 attribute object is bound to a thread.
605
606 "PTH_ATTR_STATE" (read-only) ["pth_state_t"]
607 The scheduling state of the thread, i.e., either "PTH_STATE_NEW",
          "PTH_STATE_READY", "PTH_STATE_WAITING", or "PTH_STATE_DEAD". This
609 can be queried only when the attribute object is bound to a thread.
610
611 "PTH_ATTR_EVENTS" (read-only) ["pth_event_t"]
612 The event ring the thread is waiting for. This can be queried only
613 when the attribute object is bound to a thread.
614
615 "PTH_ATTR_BOUND" (read-only) ["int"]
616 Whether the attribute object is bound ("TRUE") to a thread or not
617 ("FALSE").
618
619 The following API functions can be used to handle the attribute
620 objects:
621
622 pth_attr_t pth_attr_of(pth_t tid);
623 This returns a new attribute object bound to thread tid. Any
624 queries on this object directly fetch attributes from tid. And
625 attribute modifications directly change tid. Use such attribute
626 objects to modify existing threads.
627
628 pth_attr_t pth_attr_new(void);
629 This returns a new unbound attribute object. An implicit
630 pth_attr_init() is done on it. Any queries on this object just
631 fetch stored attributes from it. And attribute modifications just
632 change the stored attributes. Use such attribute objects to pre-
633 configure attributes for to be spawned threads.
634
635 int pth_attr_init(pth_attr_t attr);
636 This initializes an attribute object attr to the default values:
637 "PTH_ATTR_PRIO" := "PTH_PRIO_STD", "PTH_ATTR_NAME" := `"unknown"',
638 "PTH_ATTR_DISPATCHES" := 0, "PTH_ATTR_JOINABLE" := "TRUE",
          "PTH_ATTR_CANCEL_STATE" := "PTH_CANCEL_DEFAULT",
640 "PTH_ATTR_STACK_SIZE" := 64*1024 and "PTH_ATTR_STACK_ADDR" :=
641 "NULL". All other "PTH_ATTR_*" attributes are read-only attributes
          and don't receive default values in attr, because they exist only
          for bound attribute objects.
644
645 int pth_attr_set(pth_attr_t attr, int field, ...);
646 This sets the attribute field field in attr to a value specified as
647 an additional argument on the variable argument list. The following
648 attribute fields and argument pairs can be used:
649
650 PTH_ATTR_PRIO int
651 PTH_ATTR_NAME char *
652 PTH_ATTR_DISPATCHES int
653 PTH_ATTR_JOINABLE int
654 PTH_ATTR_CANCEL_STATE unsigned int
655 PTH_ATTR_STACK_SIZE unsigned int
656 PTH_ATTR_STACK_ADDR char *
657
658 int pth_attr_get(pth_attr_t attr, int field, ...);
659 This retrieves the attribute field field in attr and stores its
660 value in the variable specified through a pointer in an additional
661 argument on the variable argument list. The following fields and
662 argument pairs can be used:
663
664 PTH_ATTR_PRIO int *
665 PTH_ATTR_NAME char **
666 PTH_ATTR_DISPATCHES int *
667 PTH_ATTR_JOINABLE int *
668 PTH_ATTR_CANCEL_STATE unsigned int *
669 PTH_ATTR_STACK_SIZE unsigned int *
670 PTH_ATTR_STACK_ADDR char **
671 PTH_ATTR_TIME_SPAWN pth_time_t *
672 PTH_ATTR_TIME_LAST pth_time_t *
673 PTH_ATTR_TIME_RAN pth_time_t *
674 PTH_ATTR_START_FUNC void *(**)(void *)
675 PTH_ATTR_START_ARG void **
676 PTH_ATTR_STATE pth_state_t *
677 PTH_ATTR_EVENTS pth_event_t *
678 PTH_ATTR_BOUND int *
679
680 int pth_attr_destroy(pth_attr_t attr);
          This destroys an attribute object attr. After this, attr is no
          longer a valid attribute object.
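
       As an illustrative sketch (not from the original manual; the function
       worker is made up), an unbound attribute object is typically used like
       this to pre-configure a thread before spawning it:

           pth_attr_t attr;
           pth_t      tid;

           attr = pth_attr_new();
           pth_attr_set(attr, PTH_ATTR_NAME,       "worker");
           pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 128*1024);
           pth_attr_set(attr, PTH_ATTR_JOINABLE,   TRUE);
           tid = pth_spawn(attr, worker, NULL); /* worker: void *worker(void *) */
           pth_attr_destroy(attr);              /* attr may be destroyed now;
                                                   the spawned thread is unaffected */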
683
684 Thread Control
685
686 The following functions control the threading itself and make up the
687 main API of the Pth library.
688
689 pth_t pth_spawn(pth_attr_t attr, void *(*entry)(void *), void *arg);
690 This spawns a new thread with the attributes given in attr (or
691 "PTH_ATTR_DEFAULT" for default attributes - which means that thread
692 priority, joinability and cancel state are inherited from the cur‐
693 rent thread) with the starting point at routine entry; the dispatch
694 count is not inherited from the current thread if attr is not spec‐
695 ified - rather, it is initialized to zero. This entry routine is
696 called as `pth_exit(entry(arg))' inside the new thread unit, i.e.,
697 entry's return value is fed to an implicit pth_exit(3). So the
698 thread can also exit by just returning. Nevertheless the thread can
699 also exit explicitly at any time by calling pth_exit(3). But keep
700 in mind that calling the POSIX function exit(3) still terminates
701 the complete process and not just the current thread.
702
703 There is no Pth-internal limit on the number of threads one can
704 spawn, except the limit implied by the available virtual memory.
          Pth internally keeps track of threads in dynamic data structures.
706 The function returns "NULL" on error.
707
708 int pth_once(pth_once_t *ctrlvar, void (*func)(void *), void *arg);
709 This is a convenience function which uses a control variable of
710 type "pth_once_t" to make sure a constructor function func is
711 called only once as `func(arg)' in the system. In other words: Only
712 the first call to pth_once(3) by any thread in the system succeeds.
713 The variable referenced via ctrlvar should be declared as
714 `"pth_once_t" variable-name = "PTH_ONCE_INIT";' before calling this
715 function.
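
          An illustrative sketch (not from the original manual; init_tables is
          a made-up one-time constructor):

              static pth_once_t init_once = PTH_ONCE_INIT;

              static void init_tables(void *arg)
              {
                  /* ... one-time initialization of shared lookup tables ... */
              }

              /* called by every thread; only the first call runs init_tables() */
              pth_once(&init_once, init_tables, NULL);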
716
717 pth_t pth_self(void);
718 This just returns the unique thread handle of the currently running
719 thread. This handle itself has to be treated as an opaque entity
720 by the application. It's usually used as an argument to other
          functions that require an argument of type "pth_t".
722
723 int pth_suspend(pth_t tid);
724 This suspends a thread tid until it is manually resumed again via
725 pth_resume(3). For this, the thread is moved to the SUSPENDED queue
726 and this way is completely out of the scheduler's event handling
727 and thread dispatching scope. Suspending the current thread is not
728 allowed. The function returns "TRUE" on success and "FALSE" on
729 errors.
730
731 int pth_resume(pth_t tid);
          This function resumes a previously suspended thread tid, i.e., tid
          has to be in the SUSPENDED queue. The thread is moved back to the
          NEW, READY or WAITING queue (depending on what its state was when
          the pth_suspend(3) call was made) and this way again enters the event
736 handling and thread dispatching scope of the scheduler. The func‐
737 tion returns "TRUE" on success and "FALSE" on errors.
738
739 int pth_raise(pth_t tid, int sig)
740 This function raises a signal for delivery to thread tid only.
          When one just raises a signal via raise(3) or kill(2), it is
          delivered to an arbitrary thread which does not have this signal
          blocked. With pth_raise(3) one can send a signal to a particular
          thread and it is guaranteed that only this thread gets the signal
          delivered. But keep in mind that the signal's action is nevertheless
          still configured process-wide. When sig is 0, plain thread checking
          is performed, i.e., `"pth_raise(tid, 0)"' returns "TRUE" when thread
          tid still exists in the Pth system but doesn't send any signal to it.
749
750 int pth_yield(pth_t tid);
751 This explicitly yields back the execution control to the scheduler
752 thread. Usually the execution is implicitly transferred back to
753 the scheduler when a thread waits for an event. But when a thread
754 has to do larger CPU bursts, it can be reasonable to interrupt it
755 explicitly by doing a few pth_yield(3) calls to give other threads
756 a chance to execute, too. This obviously is the cooperating part
          of Pth. A thread does not have to yield execution, of course. But
          when you want to program a server application with good response
          times, the threads should be cooperative, i.e., they should split
          their CPU bursts into smaller units with this call.
761
762 Usually one specifies tid as "NULL" to indicate to the scheduler
763 that it can freely decide which thread to dispatch next. But if
764 one wants to indicate to the scheduler that a particular thread
765 should be favored on the next dispatching step, one can specify
766 this thread explicitly. This allows the usage of the old concept of
767 coroutines where a thread/routine switches to a particular cooper‐
768 ating thread. If tid is not "NULL" and points to a new or ready
769 thread, it is guaranteed that this thread receives execution con‐
770 trol on the next dispatching step. If tid is in a different state
771 (that is, not in "PTH_STATE_NEW" or "PTH_STATE_READY") an error is
772 reported.
773
774 The function usually returns "TRUE" for success and only "FALSE"
775 (with "errno" set to "EINVAL") if tid specified an invalid or still
776 not new or ready thread.
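
          A sketch of the cooperative pattern described above (illustration
          only; the function cruncher is made up):

              static void *cruncher(void *arg)
              {
                  long i;
                  for (i = 0; i < 100000000L; i++) {
                      /* ... pure computation, no pth_xxx() calls ... */
                      if (i % 100000 == 0)
                          pth_yield(NULL); /* let the scheduler dispatch other threads */
                  }
                  return NULL;
              }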
777
778 int pth_nap(pth_time_t naptime);
          This function suspends the execution of the current thread until
          naptime has elapsed. naptime is of type "pth_time_t" and this way
          theoretically has a resolution of one microsecond. In practice you
          should neither rely on this nor on the thread being awakened exactly
          after naptime has elapsed. It is only guaranteed that the thread
          will sleep at least naptime. But because of the non-preemptive
          nature of Pth it can last longer (when another thread keeps the CPU
          for a long time). Additionally the resolution depends on the
          implementation of timers by the operating system, and these usually
          have a resolution of 10 milliseconds or coarser. But usually
789 this isn't important for an application unless it tries to use this
790 facility for real time tasks.
791
792 int pth_wait(pth_event_t ev);
793 This is the link between the scheduler and the event facility (see
794 below for the various pth_event_xxx() functions). It's modeled like
795 select(2), i.e., one gives this function one or more events (in the
796 event ring specified by ev) on which the current thread wants to
          wait. The scheduler awakens the thread when one or more of them have
          occurred or failed, after tagging them as such. The ev argument is a
799 pointer to an event ring which isn't changed except for the tag‐
800 ging. pth_wait(3) returns the number of occurred or failed events
801 and the application can use pth_event_status(3) to test which
802 events occurred or failed.
803
804 int pth_cancel(pth_t tid);
805 This cancels a thread tid. How the cancellation is done depends on
806 the cancellation state of tid which the thread can configure
          itself. When its state is "PTH_CANCEL_DISABLE", a cancellation
          request is just made pending. When it is "PTH_CANCEL_ENABLE", what
          happens depends on the cancellation type: when it is
          "PTH_CANCEL_DEFERRED", the cancellation request is again just made
          pending; but when it is "PTH_CANCEL_ASYNCHRONOUS", the thread is
          immediately canceled before pth_cancel(3) returns. The effect of a
          thread cancellation is equal to implicitly forcing the thread to
          call `"pth_exit(PTH_CANCELED)"' at one of its cancellation points.
          In Pth a thread enters a cancellation point either explicitly via
          pth_cancel_point(3) or implicitly by waiting for an event.
817
818 int pth_abort(pth_t tid);
819 This is the cruel way to cancel a thread tid. When it's already
820 dead and waits to be joined it just joins it (via `"pth_join("tid",
          NULL)"') and this way kicks it out of the system. Else it forces
          the thread to be non-joinable and to allow asynchronous
          cancellation, and then cancels it via `"pth_cancel("tid")"'.
824
825 int pth_join(pth_t tid, void **value);
826 This joins the current thread with the thread specified via tid.
827 It first suspends the current thread until the tid thread has ter‐
          minated. Then it is awakened and stores the value of tid's
          pth_exit(3) call into *value (if value is not "NULL") and returns
830 to the caller. A thread can be joined only when it has the
831 attribute "PTH_ATTR_JOINABLE" set to "TRUE" (the default). A thread
832 can only be joined once, i.e., after the pth_join(3) call the
833 thread tid is completely removed from the system.
834
835 void pth_exit(void *value);
836 This terminates the current thread. Whether it's immediately
837 removed from the system or inserted into the dead queue of the
838 scheduler depends on its join type which was specified at spawning
839 time. If it has the attribute "PTH_ATTR_JOINABLE" set to "FALSE",
840 it's immediately removed and value is ignored. Else the thread is
          inserted into the dead queue and value is remembered for a subsequent
842 pth_join(3) call by another thread.
843
844 Utilities
845
846 Utility functions.
847
848 int pth_fdmode(int fd, int mode);
849 This switches the non-blocking mode flag on file descriptor fd.
850 The argument mode can be "PTH_FDMODE_BLOCK" for switching fd into
851 blocking I/O mode, "PTH_FDMODE_NONBLOCK" for switching fd into non-
852 blocking I/O mode or "PTH_FDMODE_POLL" for just polling the current
853 mode. The current mode is returned (either "PTH_FDMODE_BLOCK" or
854 "PTH_FDMODE_NONBLOCK") or "PTH_FDMODE_ERROR" on error. Keep in mind
855 that since Pth 1.1 there is no longer a requirement to manually
856 switch a file descriptor into non-blocking mode in order to use it.
857 This is automatically done temporarily inside Pth. Instead when
858 you now switch a file descriptor explicitly into non-blocking mode,
859 pth_read(3) or pth_write(3) will never block the current thread.
860
861 pth_time_t pth_time(long sec, long usec);
          This is a constructor for a "pth_time_t" structure and a convenience
          function to avoid temporary structure values. It returns a
864 pth_time_t structure which holds the absolute time value specified
865 by sec and usec.
866
867 pth_time_t pth_timeout(long sec, long usec);
          This is a constructor for a "pth_time_t" structure and a convenience
          function to avoid temporary structure values. It returns a
870 pth_time_t structure which holds the absolute time value calculated
871 by adding sec and usec to the current time.
872
873 Sfdisc_t *pth_sfiodisc(void);
          This function is always available, but only reasonably usable when
875 Pth was built with Sfio support ("--with-sfio" option) and
876 "PTH_EXT_SFIO" is then defined by "pth.h". It is useful for appli‐
877 cations which want to use the comprehensive Sfio I/O library with
878 the Pth threading library. Then this function can be used to get an
879 Sfio discipline structure ("Sfdisc_t") which can be pushed onto
880 Sfio streams ("Sfio_t") in order to let this stream use
          pth_read(3)/pth_write(3) instead of read(2)/write(2). The benefit
          is that this way I/O on the Sfio stream blocks only the current
          thread instead of the whole process. The application has to free(3)
884 the "Sfdisc_t" structure when it is no longer needed. The Sfio
885 package can be found at http://www.research.att.com/sw/tools/sfio/.
886
887 Cancellation Management
888
889 Pth supports POSIX style thread cancellation via pth_cancel(3) and the
890 following two related functions:
891
892 void pth_cancel_state(int newstate, int *oldstate);
893 This manages the cancellation state of the current thread. When
894 oldstate is not "NULL" the function stores the old cancellation
895 state under the variable pointed to by oldstate. When newstate is
          not 0 it sets the new cancellation state. The old state is stored
          before the new state is set. A state is a combination of
          "PTH_CANCEL_ENABLE" or "PTH_CANCEL_DISABLE" and
          "PTH_CANCEL_DEFERRED" or "PTH_CANCEL_ASYNCHRONOUS".
          "PTH_CANCEL_ENABLE|PTH_CANCEL_DEFERRED" (or "PTH_CANCEL_DEFAULT")
          is the default state, where cancellation is possible but only at
          cancellation points. Use "PTH_CANCEL_DISABLE" to completely disable
          cancellation for a thread and "PTH_CANCEL_ASYNCHRONOUS" to allow
          asynchronous cancellations, i.e., cancellations which can happen at
          any time.
905
906 void pth_cancel_point(void);
          This explicitly enters a cancellation point. When the current can‐
908 cellation state is "PTH_CANCEL_DISABLE" or no cancellation request
909 is pending, this has no side-effect and returns immediately. Else
910 it calls `"pth_exit(PTH_CANCELED)"'.
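
       A sketch of how the two functions cooperate inside a thread
       (illustration only, not from the original manual; worker is made up):

           static void *worker(void *arg)
           {
               int oldstate;

               pth_cancel_state(PTH_CANCEL_DISABLE, &oldstate); /* protect critical phase */
               /* ... code which must not be canceled ... */
               pth_cancel_state(oldstate, NULL);                /* restore previous state */

               for (;;) {
                   /* ... longer computation without implicit cancellation points ... */
                   pth_cancel_point(); /* honor a pending deferred cancellation request */
               }
               /* not reached */
               return NULL;
           }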
911
912 Event Handling
913
914 Pth has a very flexible event facility which is linked into the sched‐
915 uler through the pth_wait(3) function. The following functions provide
916 the handling of event rings.
917
918 pth_event_t pth_event(unsigned long spec, ...);
919 This creates a new event ring consisting of a single initial event.
920 The type of the generated event is specified by spec. The following
921 types are available:
922
923 "PTH_EVENT_FD"
924 This is a file descriptor event. One or more of
925 "PTH_UNTIL_FD_READABLE", "PTH_UNTIL_FD_WRITEABLE" or
926 "PTH_UNTIL_FD_EXCEPTION" have to be OR-ed into spec to specify
927 on which state of the file descriptor you want to wait. The
928 file descriptor itself has to be given as an additional argu‐
             ment. Example: `"pth_event(PTH_EVENT_FD|PTH_UNTIL_FD_READABLE,
930 fd)"'.
931
932 "PTH_EVENT_SELECT"
933 This is a multiple file descriptor event modeled directly after
934 the select(2) call (actually it is also used to implement
935 pth_select(3) internally). It's a convenient way to wait for a
936 large set of file descriptors at once and at each file descrip‐
             tor for a different type of state. Additionally, as a nice
             side-effect one receives the number of file descriptors which
             caused the event to occur (using BSD semantics, i.e., when a
             file descriptor occurs in two sets it is counted twice). The
941 arguments correspond directly to the select(2) function argu‐
942 ments except that there is no timeout argument (because time‐
943 outs already can be handled via "PTH_EVENT_TIME" events).
944
945 Example: `"pth_event(PTH_EVENT_SELECT, &rc, nfd, rfds, wfds,
946 efds)"' where "rc" has to be of type `"int *"', "nfd" has to be
947 of type `"int"' and "rfds", "wfds" and "efds" have to be of
             type `"fd_set *"' (see select(2)). The number of occurred file
             descriptors is stored in "rc".
950
951 "PTH_EVENT_SIGS"
952 This is a signal set event. The two additional arguments have
953 to be a pointer to a signal set (type `"sigset_t *"') and a
954 pointer to a signal number variable (type `"int *"'). This
             event waits until one of the signals in the signal set occurs.
             As a result the occurred signal number is stored in the second
             additional argument. Keep in mind that the Pth scheduler doesn't
             block signals automatically. So when you want to wait for a
             signal with this event you have to block it via sigprocmask(2)
             or it will be delivered without your notice. Example:
             `"sigemptyset(&set); sigaddset(&set, SIGINT);
             pth_event(PTH_EVENT_SIGS, &set, &sig);"'.
963
964 "PTH_EVENT_TIME"
965 This is a time point event. The additional argument has to be
966 of type "pth_time_t" (usually on-the-fly generated via
             pth_time(3)). This event waits until the specified time point
             has elapsed. Keep in mind that the value is an absolute time
             point and not an offset. When you want to wait for a specified
             amount of time, you have to add the current time to the offset
             (usually done on the fly via pth_timeout(3)). Example:
972 `"pth_event(PTH_EVENT_TIME, pth_timeout(2,0))"'.
973
974 "PTH_EVENT_MSG"
975 This is a message port event. The additional argument has to be
             of type "pth_msgport_t". This event waits until one or more
             messages have been received on the specified message port. Example:
978 `"pth_event(PTH_EVENT_MSG, mp)"'.
979
980 "PTH_EVENT_TID"
981 This is a thread event. The additional argument has to be of
982 type "pth_t". One of "PTH_UNTIL_TID_NEW",
983 "PTH_UNTIL_TID_READY", "PTH_UNTIL_TID_WAITING" or
984 "PTH_UNTIL_TID_DEAD" has to be OR-ed into spec to specify on
985 which state of the thread you want to wait. Example:
             `"pth_event(PTH_EVENT_TID|PTH_UNTIL_TID_DEAD, tid)"'.
987
988 "PTH_EVENT_FUNC"
989 This is a custom callback function event. Three additional
990 arguments have to be given with the following types: `"int
991 (*)(void *)"', `"void *"' and `"pth_time_t"'. The first is a
992 function pointer to a check function and the second argument is
993 a user-supplied context value which is passed to this function.
             The scheduler calls this function on a regular basis (on its
             own scheduler stack, so be very careful!) and the thread is
             kept sleeping as long as the function returns "FALSE". Once it
             returns "TRUE" the thread is awakened. The check interval is
             defined by the third argument, i.e., the check function is not
             polled again until this amount of time has elapsed. Example:
1000 `"pth_event(PTH_EVENT_FUNC, func, arg, pth_time(0,500000))"'.
1001
1002 unsigned long pth_event_typeof(pth_event_t ev);
1003 This returns the type of event ev. It's a combination of the
1004 describing "PTH_EVENT_XX" and "PTH_UNTIL_XX" value. This is espe‐
1005 cially useful to know which arguments have to be supplied to the
1006 pth_event_extract(3) function.
1007
1008 int pth_event_extract(pth_event_t ev, ...);
1009 When pth_event(3) is treated like sprintf(3), then this function is
1010 sscanf(3), i.e., it is the inverse operation of pth_event(3). This
1011 means that it can be used to extract the ingredients of an event.
1012 The ingredients are stored into variables which are given as point‐
1013 ers on the variable argument list. Which pointers have to be
1014 present depends on the event type and has to be determined by the
1015 caller before via pth_event_typeof(3).
1016
1017 To make it clear, when you constructed ev via `"ev =
1018 pth_event(PTH_EVENT_FD, fd);"' you have to extract it via
1019 `"pth_event_extract(ev, &fd)"', etc. For multiple arguments of an
1020 event the order of the pointer arguments is the same as for
          pth_event(3). But always keep in mind that you have to supply
          pointers to variables and that these variables have to be of the
          same type as the corresponding argument of pth_event(3).
1024
1025 pth_event_t pth_event_concat(pth_event_t ev, ...);
1026 This concatenates one or more additional event rings to the event
1027 ring ev and returns ev. The end of the argument list has to be
          marked with a "NULL" argument. Use this function to create real
          event rings out of the single-event rings created by pth_event(3).
1030
1031 pth_event_t pth_event_isolate(pth_event_t ev);
1032 This isolates the event ev from possibly appended events in the
          event ring. When only one event exists in ev, this returns "NULL".
          When remaining events exist, they form a new event ring which is
          returned.
1036
1037 pth_event_t pth_event_walk(pth_event_t ev, int direction);
          This walks to the next (when direction is "PTH_WALK_NEXT") or
          previous (when direction is "PTH_WALK_PREV") event in the event ring
          ev and returns this newly reached event. Additionally
1041 "PTH_UNTIL_OCCURRED" can be OR-ed into direction to walk to the
1042 next/previous occurred event in the ring ev.
1043
1044 pth_status_t pth_event_status(pth_event_t ev);
1045 This returns the status of event ev. This is a fast operation
1046 because only a tag on ev is checked which was either set or still
1047 not set by the scheduler. In other words: This doesn't check the
1048 event itself, it just checks the last knowledge of the scheduler.
1049 The possible returned status codes are: "PTH_STATUS_PENDING" (event
1050 is still pending), "PTH_STATUS_OCCURRED" (event successfully
1051 occurred), "PTH_STATUS_FAILED" (event failed).
1052
1053 int pth_event_free(pth_event_t ev, int mode);
1054 This deallocates the event ev (when mode is "PTH_FREE_THIS") or all
1055 events appended to the event ring under ev (when mode is
1056 "PTH_FREE_ALL").
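
       Putting the pieces together, a sketch (illustration only; the helper
       read_with_timeout is made up, and <pth.h> is assumed to be included)
       which waits for a file descriptor to become readable but gives up
       after 30 seconds:

           ssize_t read_with_timeout(int fd, void *buf, size_t len)
           {
               pth_event_t ev_fd, ev_to;
               ssize_t n = -1;

               ev_fd = pth_event(PTH_EVENT_FD|PTH_UNTIL_FD_READABLE, fd);
               ev_to = pth_event(PTH_EVENT_TIME, pth_timeout(30, 0));
               pth_event_concat(ev_fd, ev_to, NULL);   /* build a two-event ring */

               pth_wait(ev_fd);                        /* blocks this thread only */

               if (pth_event_status(ev_fd) == PTH_STATUS_OCCURRED)
                   n = pth_read(fd, buf, len);         /* data available, will not block */
               /* else: the timeout event occurred first, n stays -1 */

               pth_event_free(ev_fd, PTH_FREE_ALL);    /* free the whole ring */
               return n;
           }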
1057
1058 Key-Based Storage
1059
1060 The following functions provide thread-local storage through unique
1061 keys similar to the POSIX Pthread API. Use this for thread specific
1062 global data.
1063
1064 int pth_key_create(pth_key_t *key, void (*func)(void *));
          This creates a new unique key and stores it in key. Additionally,
          func can specify a destructor function which is called with the
          key's value on the current thread's termination.
1068
1069 int pth_key_delete(pth_key_t key);
1070 This explicitly destroys a key key.
1071
1072 int pth_key_setdata(pth_key_t key, const void *value);
1073 This stores value under key.
1074
1075 void *pth_key_getdata(pth_key_t key);
1076 This retrieves the value under key.
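
       A sketch of typical usage (illustration only; conn_key and the stored
       string are made up, and <stdlib.h>/<string.h> are assumed):

           static pth_key_t conn_key;

           /* once, e.g. in main() right after pth_init() */
           pth_key_create(&conn_key, free); /* free(3) runs per thread on termination */

           /* inside each thread */
           pth_key_setdata(conn_key, strdup("per-thread state"));
           /* ... later in the same thread ... */
           char *state = (char *)pth_key_getdata(conn_key);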
1077
1078 Message Port Communication
1079
1080 The following functions provide message ports which can be used for
1081 efficient and flexible inter-thread communication.
1082
1083 pth_msgport_t pth_msgport_create(const char *name);
          This returns a pointer to a new message port. If name is not
          "NULL", the name can be used by other threads via
          pth_msgport_find(3) to find the message port in case they do not
          directly know the pointer to it.
1088
1089 void pth_msgport_destroy(pth_msgport_t mp);
          This destroys a message port mp. Beforehand, all pending messages
          on it are replied to their origin message ports.
1092
1093 pth_msgport_t pth_msgport_find(const char *name);
1094 This finds a message port in the system by name and returns the
1095 pointer to it.
1096
1097 int pth_msgport_pending(pth_msgport_t mp);
1098 This returns the number of pending messages on message port mp.
1099
1100 int pth_msgport_put(pth_msgport_t mp, pth_message_t *m);
1101 This puts (or sends) a message m to message port mp.
1102
1103 pth_message_t *pth_msgport_get(pth_msgport_t mp);
1104 This gets (or receives) the top message from message port mp.
1105 Incoming messages are always kept in a queue, so there can be more
1106 pending messages, of course.
1107
1108 int pth_msgport_reply(pth_message_t *m);
1109 This replies a message m to the message port of the sender.
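
       A sketch of a request/reply exchange (illustration only; the request_t
       structure and its job_id field are made up, and the m_replyport member
       name is an assumption taken from the pth.h message structure). One
       common approach is to embed a "pth_message_t" as the first member of
       the application's own message structure so its pointer can be passed
       to the message port functions:

           typedef struct {
               pth_message_t head;   /* Pth message header (must come first) */
               int           job_id; /* application payload (made up) */
           } request_t;

           /* consumer side */
           pth_msgport_t port = pth_msgport_create("worker");

           /* producer side */
           pth_msgport_t reply_port = pth_msgport_create(NULL);
           request_t req;
           req.head.m_replyport = reply_port;  /* where pth_msgport_reply(3) sends it */
           req.job_id = 42;
           pth_msgport_put(pth_msgport_find("worker"), (pth_message_t *)&req);

           /* consumer side again: wait for and answer one request */
           pth_event_t ev = pth_event(PTH_EVENT_MSG, port);
           pth_wait(ev);
           request_t *r = (request_t *)pth_msgport_get(port);
           /* ... process r->job_id ... */
           pth_msgport_reply((pth_message_t *)r);
           pth_event_free(ev, PTH_FREE_THIS);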
1110
1111 Thread Cleanups
1112
1113 Per-thread cleanup functions.
1114
1115 int pth_cleanup_push(void (*handler)(void *), void *arg);
1116 This pushes the routine handler onto the stack of cleanup routines
1117 for the current thread. These routines are called in LIFO order
1118 when the thread terminates.
1119
1120 int pth_cleanup_pop(int execute);
1121 This pops the top-most routine from the stack of cleanup routines
1122 for the current thread. When execute is "TRUE" the routine is addi‐
1123 tionally called.
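
       A sketch (illustration only; worker is made up) of the classic use
       case, releasing a heap buffer even if the thread is canceled or calls
       pth_exit(3):

           static void *worker(void *arg)
           {
               char *buf = malloc(4096);

               pth_cleanup_push(free, buf); /* run free(buf) if the thread terminates */
               /* ... work that may call pth_exit(3) or hit a cancellation point ... */
               pth_cleanup_pop(TRUE);       /* remove the handler and execute it now */
               return NULL;
           }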
1124
1125 Process Forking
1126
1127 The following functions provide some special support for process fork‐
1128 ing situations inside the threading environment.
1129
       int pth_atfork_push(void (*prepare)(void *), void (*parent)(void *),
       void (*child)(void *), void *arg);
1132 This function declares forking handlers to be called before and
1133 after pth_fork(3), in the context of the thread that called
1134 pth_fork(3). The prepare handler is called before fork(2) process‐
1135 ing commences. The parent handler is called after fork(2) pro‐
1136 cessing completes in the parent process. The child handler is
          called after fork(2) processing completes in the child process. If
1138 no handling is desired at one or more of these three points, the
1139 corresponding handler can be given as "NULL". Each handler is
1140 called with arg as the argument.
1141
1142 The order of calls to pth_atfork_push(3) is significant. The parent
1143 and child handlers are called in the order in which they were
1144 established by calls to pth_atfork_push(3), i.e., FIFO. The prepare
1145 fork handlers are called in the opposite order, i.e., LIFO.
1146
1147 int pth_atfork_pop(void);
          This removes the top-most handlers on the forking handler stack
          which were established with the last pth_atfork_push(3) call. It
          returns "FALSE" when no more handlers could be removed from the
          stack.
1152
1153 pid_t pth_fork(void);
          This is a variant of fork(2) with the difference that only the
          current thread is forked into a separate process, i.e., in the
          parent process nothing changes, while in the child process all
          threads are gone except for the scheduler and the calling thread.
          When you really want to duplicate all threads in the current
          process you should use fork(2) directly, but this is usually not
          reasonable. Additionally this function runs the forking handlers
          established by pth_atfork_push(3).
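
          A minimal sketch of how the pieces fit together (the handler
          names and messages are hypothetical):

              #include <stdio.h>
              #include <sys/types.h>
              #include <unistd.h>
              #include "pth.h"

              static void prepare(void *arg) { printf("before fork: %s\n", (char *)arg); }
              static void parent (void *arg) { printf("parent:      %s\n", (char *)arg); }
              static void child  (void *arg) { printf("child:       %s\n", (char *)arg); }

              static void spawn_child_process(void)
              {
                  pid_t pid;

                  pth_atfork_push(prepare, parent, child, "demo");
                  pid = pth_fork();
                  if (pid == 0) {
                      /* child: only the scheduler and this thread exist here */
                      _exit(0);
                  }
                  pth_atfork_pop();    /* drop the handlers installed above */
              }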
1162
1163 Synchronization
1164
1165 The following functions provide synchronization support via mutual
1166 exclusion locks (mutex), read-write locks (rwlock), condition variables
       (cond) and barriers (barrier). Keep in mind that in a non-preemptive
       threading system like Pth this might sound unnecessary at first
       glance, because a thread isn't interrupted by the system. Actually when
1170 you have a critical code section which doesn't contain any pth_xxx()
1171 functions, you don't need any mutex to protect it, of course.
1172
       But when your critical code section contains any pth_xxx() function,
       the chance is high that it temporarily switches to the scheduler, and
       this way other threads can make progress and enter your critical code
       section, too. This is especially true for critical code sections which
       implicitly or explicitly use the event mechanism.
1178
1179 int pth_mutex_init(pth_mutex_t *mutex);
1180 This dynamically initializes a mutex variable of type
1181 `"pth_mutex_t"'. Alternatively one can also use static initializa‐
1182 tion via `"pth_mutex_t mutex = PTH_MUTEX_INIT"'.
1183
1184 int pth_mutex_acquire(pth_mutex_t *mutex, int try, pth_event_t ev);
          This acquires a mutex mutex. If the mutex is already locked by
          another thread, the current thread's execution is suspended until
          the mutex is unlocked again or, additionally, the extra events in
          ev occurred (when ev is not "NULL"). Recursive locking is
          explicitly supported, i.e., a thread is allowed to acquire a mutex
          more than once before it is released. But it then also has to be
          released the same number of times before the mutex can be locked
          again by others. When try is "TRUE" this function never suspends
          execution. Instead it returns "FALSE" with "errno" set to "EBUSY".
1194
1195 int pth_mutex_release(pth_mutex_t *mutex);
1196 This decrements the recursion locking count on mutex and when it is
1197 zero it releases the mutex mutex.
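
          A minimal sketch of the typical use case: serializing a critical
          section that itself calls a pth_xxx() function. The descriptor
          log_fd and the helper name are hypothetical:

              #include <stddef.h>
              #include "pth.h"

              static pth_mutex_t log_mutex = PTH_MUTEX_INIT;
              static int         log_fd    = 2;   /* e.g. stderr */

              /* pth_write(3) may switch threads, so the mutex keeps log lines
                 from different threads from being interleaved */
              static void log_line(const char *line, size_t len)
              {
                  pth_mutex_acquire(&log_mutex, FALSE, NULL);
                  pth_write(log_fd, line, len);
                  pth_mutex_release(&log_mutex);
              }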
1198
1199 int pth_rwlock_init(pth_rwlock_t *rwlock);
1200 This dynamically initializes a read-write lock variable of type
1201 `"pth_rwlock_t"'. Alternatively one can also use static initial‐
1202 ization via `"pth_rwlock_t rwlock = PTH_RWLOCK_INIT"'.
1203
1204 int pth_rwlock_acquire(pth_rwlock_t *rwlock, int op, int try,
1205 pth_event_t ev);
1206 This acquires a read-only (when op is "PTH_RWLOCK_RD") or a read-
1207 write (when op is "PTH_RWLOCK_RW") lock rwlock. When the lock is
1208 only locked by other threads in read-only mode, the lock succeeds.
          But when one thread holds a read-write lock, all locking attempts
          suspend the current thread until this lock is released again.
          Additionally, events can be given in ev to let the locking time
          out, etc.
1212 When try is "TRUE" this function never suspends execution. Instead
1213 it returns "FALSE" with "errno" set to "EBUSY".
1214
1215 int pth_rwlock_release(pth_rwlock_t *rwlock);
1216 This releases a previously acquired (read-only or read-write) lock.
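
          A minimal sketch of the usual reader/writer split (the table and
          helper names are hypothetical):

              #include "pth.h"

              static pth_rwlock_t tbl_lock = PTH_RWLOCK_INIT;
              static int          table[64];           /* shared data */

              static int table_lookup(int i)
              {
                  int v;
                  pth_rwlock_acquire(&tbl_lock, PTH_RWLOCK_RD, FALSE, NULL);
                  v = table[i];            /* many readers may be here at once */
                  pth_rwlock_release(&tbl_lock);
                  return v;
              }

              static void table_store(int i, int v)
              {
                  pth_rwlock_acquire(&tbl_lock, PTH_RWLOCK_RW, FALSE, NULL);
                  table[i] = v;            /* writers get exclusive access */
                  pth_rwlock_release(&tbl_lock);
              }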
1217
1218 int pth_cond_init(pth_cond_t *cond);
          This dynamically initializes a condition variable of type
1220 `"pth_cond_t"'. Alternatively one can also use static initializa‐
1221 tion via `"pth_cond_t cond = PTH_COND_INIT"'.
1222
1223 int pth_cond_await(pth_cond_t *cond, pth_mutex_t *mutex, pth_event_t
1224 ev);
1225 This awaits a condition situation. The caller has to follow the
1226 semantics of the POSIX condition variables: mutex has to be
1227 acquired before this function is called. The execution of the cur‐
1228 rent thread is then suspended either until the events in ev
1229 occurred (when ev is not "NULL") or cond was notified by another
1230 thread via pth_cond_notify(3). While the thread is waiting, mutex
1231 is released. Before it returns mutex is reacquired.
1232
1233 int pth_cond_notify(pth_cond_t *cond, int broadcast);
          This notifies one or all threads which are waiting on cond. When
          broadcast is "TRUE" all threads are notified, else only a single
          (unspecified) one.
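
          A minimal sketch of the classic producer/consumer hand-off (the
          shared counter and helper names are hypothetical):

              #include "pth.h"

              static pth_mutex_t mtx   = PTH_MUTEX_INIT;
              static pth_cond_t  ready = PTH_COND_INIT;
              static int         queue_len = 0;      /* shared state */

              /* consumer: wait until the producer announced new work */
              static void wait_for_work(void)
              {
                  pth_mutex_acquire(&mtx, FALSE, NULL);
                  while (queue_len == 0)
                      pth_cond_await(&ready, &mtx, NULL);
                  queue_len--;
                  pth_mutex_release(&mtx);
              }

              /* producer: publish one unit of work and wake a single waiter */
              static void add_work(void)
              {
                  pth_mutex_acquire(&mtx, FALSE, NULL);
                  queue_len++;
                  pth_cond_notify(&ready, FALSE);
                  pth_mutex_release(&mtx);
              }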
1237
1238 int pth_barrier_init(pth_barrier_t *barrier, int threshold);
1239 This dynamically initializes a barrier variable of type `"pth_bar‐
1240 rier_t"'. Alternatively one can also use static initialization via
1241 `"pth_barrier_t barrier = PTH_BARRIER_INIT("threadhold")"'.
1242
1243 int pth_barrier_reach(pth_barrier_t *barrier);
1244 This function reaches a barrier barrier. If this is the last thread
1245 (as specified by threshold on init of barrier) all threads are
          awakened. Else the current thread is suspended until the last
          thread reaches the barrier and thereby awakens all threads. The
          function returns (besides "FALSE" on error) the value "TRUE" for any
1249 thread which neither reached the barrier as the first nor the last
1250 thread; "PTH_BARRIER_HEADLIGHT" for the thread which reached the
1251 barrier as the first thread and "PTH_BARRIER_TAILLIGHT" for the
1252 thread which reached the barrier as the last thread.
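
          A minimal sketch of a start-up rendezvous for a fixed group of
          worker threads (NWORKERS and worker are hypothetical names):

              #include "pth.h"

              #define NWORKERS 3

              static pth_barrier_t bar = PTH_BARRIER_INIT(NWORKERS);

              static void *worker(void *arg)
              {
                  /* ... per-thread setup phase ... */
                  if (pth_barrier_reach(&bar) == PTH_BARRIER_TAILLIGHT) {
                      /* last thread to arrive; the others are awakened now */
                  }
                  /* ... all NWORKERS threads proceed together from here ... */
                  return NULL;
              }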
1253
1254 User-Space Context
1255
       The following functions provide a stand-alone sub-API for user-space
       context switching. It is internally based on the same underlying
       machine context switching mechanism the threads in GNU Pth are based
       on. Hence you can use these functions to implement your own simple
       user-space threads. The "pth_uctx_t" context is somewhat modeled
       after POSIX ucontext(3).
1262
1263 The time required to create (via pth_uctx_make(3)) a user-space context
       can range from just a few microseconds up to considerably longer
       (depending on the machine context switching method which is available
       on the platform). On the other hand, the raw performance in switching
       the user-space contexts is always very good (nearly independent of the
       machine context switching method used). For instance, on an 800 MHz
       Intel Pentium III CPU running under FreeBSD 4 one usually achieves
1270 about 260,000 user-space context switches (via pth_uctx_switch(3)) per
1271 second.
1272
1273 int pth_uctx_create(pth_uctx_t *uctx);
1274 This function creates a user-space context and stores it into uctx.
          No underlying user-space context is configured yet; you still
          have to do this with pth_uctx_make(3). On success, this function
          returns "TRUE", else "FALSE".
1278
1279 int pth_uctx_make(pth_uctx_t uctx, char *sk_addr, size_t sk_size, const
1280 sigset_t *sigmask, void (*start_func)(void *), void *start_arg,
1281 pth_uctx_t uctx_after);
1282 This function makes a new user-space context in uctx which will
1283 operate on the run-time stack sk_addr (which is of maximum size
1284 sk_size), with the signals in sigmask blocked (if sigmask is not
1285 "NULL") and starting to execute with the call
1286 start_func(start_arg). If sk_addr is "NULL", a stack is dynamically
1287 allocated. The stack size sk_size has to be at least 16384 (16KB).
1288 If the start function start_func returns and uctx_after is not
1289 "NULL", an implicit user-space context switch to this context is
1290 performed. Else (if uctx_after is "NULL") the process is terminated
1291 with exit(3). This function is somewhat modeled after POSIX make‐
1292 context(3). On success, this function returns "TRUE", else "FALSE".
1293
1294 int pth_uctx_switch(pth_uctx_t uctx_from, pth_uctx_t uctx_to);
1295 This function saves the current user-space context in uctx_from for
1296 later restoring by another call to pth_uctx_switch(3) and restores
1297 the new user-space context from uctx_to, which previously had to be
1298 set with either a previous call to pth_uctx_switch(3) or initially
1299 by pth_uctx_make(3). This function is somewhat modeled after POSIX
1300 swapcontext(3). If uctx_from or uctx_to are "NULL" or if uctx_to
1301 contains no valid user-space context, "FALSE" is returned instead
1302 of "TRUE". These are the only errors possible.
1303
1304 int pth_uctx_destroy(pth_uctx_t uctx);
1305 This function destroys the user-space context in uctx. The run-time
1306 stack associated with the user-space context is deallocated only if
          it was not given by the application (see sk_addr of
          pth_uctx_make(3)). If uctx is "NULL", "FALSE" is returned instead of "TRUE".
1309 This is the only error possible.
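
          A minimal, self-contained sketch of the typical call sequence
          (the function and variable names are hypothetical):

              #include <stdio.h>
              #include "pth.h"

              static pth_uctx_t uctx_main, uctx_work;

              static void work(void *arg)
              {
                  printf("hello from %s\n", (char *)arg);
                  /* returning switches back to uctx_main (uctx_after below) */
              }

              int main(void)
              {
                  pth_uctx_create(&uctx_main);   /* will receive the caller's context */
                  pth_uctx_create(&uctx_work);
                  pth_uctx_make(uctx_work, NULL, 32*1024, NULL, work, "uctx", uctx_main);
                  pth_uctx_switch(uctx_main, uctx_work); /* run work(), then return here */
                  pth_uctx_destroy(uctx_work);
                  pth_uctx_destroy(uctx_main);
                  return 0;
              }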
1310
1311 Generalized POSIX Replacement API
1312
       The following functions are generalized replacement functions for the
       POSIX API, i.e., they are similar to the functions under `Standard
       POSIX Replacement API' but all have an additional event argument which
       can be used for timeouts, etc. (see the sketch below).
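
       A minimal sketch of the typical timeout pattern, here with
       pth_read_ev(3); it assumes the "PTH_EVENT_TIME"/pth_timeout(3)
       combination from the event handling facility described earlier and
       maps a fired timer to "ETIMEDOUT" (the helper name is hypothetical):

           #include <sys/types.h>
           #include <errno.h>
           #include "pth.h"

           /* read with a 30 second timeout */
           static ssize_t read_timeout(int fd, void *buf, size_t n)
           {
               pth_event_t ev = pth_event(PTH_EVENT_TIME, pth_timeout(30, 0));
               ssize_t     rc = pth_read_ev(fd, buf, n, ev);

               if (pth_event_status(ev) == PTH_STATUS_OCCURRED) {
                   rc = -1;              /* the timer fired before any data */
                   errno = ETIMEDOUT;
               }
               pth_event_free(ev, PTH_FREE_THIS);
               return rc;
           }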
1317
1318 int pth_sigwait_ev(const sigset_t *set, int *sig, pth_event_t ev);
1319 This is equal to pth_sigwait(3) (see below), but has an additional
          event argument ev. When pth_sigwait(3) suspends the current thread's
1321 execution it usually only uses the signal event on set to awake.
1322 With this function any number of extra events can be used to awake
1323 the current thread (remember that ev actually is an event ring).
1324
1325 int pth_connect_ev(int s, const struct sockaddr *addr, socklen_t
1326 addrlen, pth_event_t ev);
1327 This is equal to pth_connect(3) (see below), but has an additional
          event argument ev. When pth_connect(3) suspends the current thread's
1329 execution it usually only uses the I/O event on s to awake. With
1330 this function any number of extra events can be used to awake the
1331 current thread (remember that ev actually is an event ring).
1332
1333 int pth_accept_ev(int s, struct sockaddr *addr, socklen_t *addrlen,
1334 pth_event_t ev);
1335 This is equal to pth_accept(3) (see below), but has an additional
          event argument ev. When pth_accept(3) suspends the current thread's
1337 execution it usually only uses the I/O event on s to awake. With
1338 this function any number of extra events can be used to awake the
1339 current thread (remember that ev actually is an event ring).
1340
1341 int pth_select_ev(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1342 struct timeval *timeout, pth_event_t ev);
1343 This is equal to pth_select(3) (see below), but has an additional
          event argument ev. When pth_select(3) suspends the current thread's
1345 execution it usually only uses the I/O event on rfds, wfds and efds
1346 to awake. With this function any number of extra events can be used
1347 to awake the current thread (remember that ev actually is an event
1348 ring).
1349
1350 int pth_poll_ev(struct pollfd *fds, unsigned int nfd, int timeout,
1351 pth_event_t ev);
1352 This is equal to pth_poll(3) (see below), but has an additional
          event argument ev. When pth_poll(3) suspends the current thread's
1354 execution it usually only uses the I/O event on fds to awake. With
1355 this function any number of extra events can be used to awake the
1356 current thread (remember that ev actually is an event ring).
1357
1358 ssize_t pth_read_ev(int fd, void *buf, size_t nbytes, pth_event_t ev);
1359 This is equal to pth_read(3) (see below), but has an additional
          event argument ev. When pth_read(3) suspends the current thread's
1361 execution it usually only uses the I/O event on fd to awake. With
1362 this function any number of extra events can be used to awake the
1363 current thread (remember that ev actually is an event ring).
1364
1365 ssize_t pth_readv_ev(int fd, const struct iovec *iovec, int iovcnt,
1366 pth_event_t ev);
1367 This is equal to pth_readv(3) (see below), but has an additional
          event argument ev. When pth_readv(3) suspends the current thread's
1369 execution it usually only uses the I/O event on fd to awake. With
1370 this function any number of extra events can be used to awake the
1371 current thread (remember that ev actually is an event ring).
1372
1373 ssize_t pth_write_ev(int fd, const void *buf, size_t nbytes,
1374 pth_event_t ev);
1375 This is equal to pth_write(3) (see below), but has an additional
          event argument ev. When pth_write(3) suspends the current thread's
1377 execution it usually only uses the I/O event on fd to awake. With
1378 this function any number of extra events can be used to awake the
1379 current thread (remember that ev actually is an event ring).
1380
1381 ssize_t pth_writev_ev(int fd, const struct iovec *iovec, int iovcnt,
1382 pth_event_t ev);
1383 This is equal to pth_writev(3) (see below), but has an additional
          event argument ev. When pth_writev(3) suspends the current thread's
1385 execution it usually only uses the I/O event on fd to awake. With
1386 this function any number of extra events can be used to awake the
1387 current thread (remember that ev actually is an event ring).
1388
1389 ssize_t pth_recv_ev(int fd, void *buf, size_t nbytes, int flags,
1390 pth_event_t ev);
1391 This is equal to pth_recv(3) (see below), but has an additional
          event argument ev. When pth_recv(3) suspends the current thread's
1393 execution it usually only uses the I/O event on fd to awake. With
1394 this function any number of extra events can be used to awake the
1395 current thread (remember that ev actually is an event ring).
1396
1397 ssize_t pth_recvfrom_ev(int fd, void *buf, size_t nbytes, int flags,
1398 struct sockaddr *from, socklen_t *fromlen, pth_event_t ev);
1399 This is equal to pth_recvfrom(3) (see below), but has an additional
1400 event argument ev. When pth_recvfrom(3) suspends the current
          thread's execution it usually only uses the I/O event on fd to
1402 awake. With this function any number of extra events can be used to
1403 awake the current thread (remember that ev actually is an event
1404 ring).
1405
1406 ssize_t pth_send_ev(int fd, const void *buf, size_t nbytes, int flags,
1407 pth_event_t ev);
1408 This is equal to pth_send(3) (see below), but has an additional
          event argument ev. When pth_send(3) suspends the current thread's
1410 execution it usually only uses the I/O event on fd to awake. With
1411 this function any number of extra events can be used to awake the
1412 current thread (remember that ev actually is an event ring).
1413
1414 ssize_t pth_sendto_ev(int fd, const void *buf, size_t nbytes, int
1415 flags, const struct sockaddr *to, socklen_t tolen, pth_event_t ev);
1416 This is equal to pth_sendto(3) (see below), but has an additional
          event argument ev. When pth_sendto(3) suspends the current thread's
1418 execution it usually only uses the I/O event on fd to awake. With
1419 this function any number of extra events can be used to awake the
1420 current thread (remember that ev actually is an event ring).
1421
1422 Standard POSIX Replacement API
1423
       The following functions are standard replacement functions for the
       POSIX API. The difference is mainly that they suspend only the
       current thread instead of the whole process in case a file
       descriptor would block.
1428
1429 int pth_nanosleep(const struct timespec *rqtp, struct timespec *rmtp);
          This is a variant of the POSIX nanosleep(3) function. It suspends
          the current thread's execution until the amount of time in rqtp
          has elapsed. The thread is guaranteed to not wake up before this
          time, but because of the non-preemptive scheduling nature of Pth,
          it can be awakened later, of course. If rmtp is not "NULL", the
          "timespec" structure it references is updated to contain the
          unslept amount (the requested time minus the time actually slept).
          The difference between nanosleep(3) and pth_nanosleep(3) is that
          pth_nanosleep(3) suspends only the execution of the current thread
          and not the whole process.
1440
1441 int pth_usleep(unsigned int usec);
          This is a variant of the 4.3BSD usleep(3) function. It suspends
          the current thread's execution until usec microseconds (=
          usec*1/1000000 sec) have elapsed. The thread is guaranteed to not
          wake up before this time, but because of the non-preemptive
          scheduling nature of Pth, it can be awakened later, of course.
          The difference between usleep(3) and pth_usleep(3) is that pth_usleep(3) suspends
1448 only the execution of the current thread and not the whole process.
1449
1450 unsigned int pth_sleep(unsigned int sec);
1451 This is a variant of the POSIX sleep(3) function. It suspends the
          current thread's execution until sec seconds have elapsed. The thread is
1453 guaranteed to not wake up before this time, but because of the non-
1454 preemptive scheduling nature of Pth, it can be awakened later, of
1455 course. The difference between sleep(3) and pth_sleep(3) is that
1456 pth_sleep(3) suspends only the execution of the current thread and
1457 not the whole process.
1458
1459 pid_t pth_waitpid(pid_t pid, int *status, int options);
1460 This is a variant of the POSIX waitpid(2) function. It suspends the
          current thread's execution until status information is available for
1462 a terminated child process pid. The difference between waitpid(2)
1463 and pth_waitpid(3) is that pth_waitpid(3) suspends only the execu‐
1464 tion of the current thread and not the whole process. For more
1465 details about the arguments and return code semantics see wait‐
1466 pid(2).
1467
1468 int pth_system(const char *cmd);
1469 This is a variant of the POSIX system(3) function. It executes the
1470 shell command cmd with Bourne Shell ("sh") and suspends the current
          thread's execution until this command terminates. The difference
1472 between system(3) and pth_system(3) is that pth_system(3) suspends
1473 only the execution of the current thread and not the whole process.
1474 For more details about the arguments and return code semantics see
1475 system(3).
1476
1477 int pth_sigmask(int how, const sigset_t *set, sigset_t *oset)
          This is the Pth thread-related equivalent of POSIX sigprocmask(2)
          or pthread_sigmask(3). The arguments how, set and oset directly
          relate to sigprocmask(2), because Pth internally just uses
          sigprocmask(2) here. So alternatively you can also call
          sigprocmask(2) directly, but for consistency you should use this
          pth_sigmask(3) function.
1484
1485 int pth_sigwait(const sigset_t *set, int *sig);
1486 This is a variant of the POSIX.1c sigwait(3) function. It suspends
          the current thread's execution until a signal in set occurs and
          stores the signal number in sig. The important point is that the
          signal is not delivered to a signal handler. Instead it's caught
          by the scheduler only in order to awake the pth_sigwait() call.
          The noticeable point here is that this way you get an
          asynchronous-aware application that is written completely
          synchronously. When you think about the problem of
          asynchronous-safe functions you should recognize that this is a
          great benefit.
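
          A minimal sketch of a dedicated signal-handling thread (the
          thread name is hypothetical; typically the handled signals are
          also blocked in all other threads, e.g. via pth_sigmask(3)):

              #include <signal.h>
              #include <stdio.h>
              #include "pth.h"

              static void *sig_thread(void *dummy)
              {
                  sigset_t set;
                  int      sig;

                  sigemptyset(&set);
                  sigaddset(&set, SIGINT);
                  sigaddset(&set, SIGTERM);
                  for (;;) {
                      pth_sigwait(&set, &sig);
                      printf("got signal %d, shutting down\n", sig);
                      /* ... trigger an orderly shutdown ... */
                  }
                  return NULL;
              }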
1495
1496 int pth_connect(int s, const struct sockaddr *addr, socklen_t addrlen);
1497 This is a variant of the 4.2BSD connect(2) function. It establishes
          a connection on a socket s to the target specified by addr and addrlen.
1499 The difference between connect(2) and pth_connect(3) is that
1500 pth_connect(3) suspends only the execution of the current thread
1501 and not the whole process. For more details about the arguments
1502 and return code semantics see connect(2).
1503
1504 int pth_accept(int s, struct sockaddr *addr, socklen_t *addrlen);
1505 This is a variant of the 4.2BSD accept(2) function. It accepts a
1506 connection on a socket by extracting the first connection request
1507 on the queue of pending connections, creating a new socket with the
          same properties as s and allocating a new file descriptor for the
1509 socket (which is returned). The difference between accept(2) and
1510 pth_accept(3) is that pth_accept(3) suspends only the execution of
1511 the current thread and not the whole process. For more details
1512 about the arguments and return code semantics see accept(2).
1513
1514 int pth_select(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1515 struct timeval *timeout);
1516 This is a variant of the 4.2BSD select(2) function. It examines
1517 the I/O descriptor sets whose addresses are passed in rfds, wfds,
1518 and efds to see if some of their descriptors are ready for reading,
1519 are ready for writing, or have an exceptional condition pending,
1520 respectively. For more details about the arguments and return code
1521 semantics see select(2).
1522
1523 int pth_pselect(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds,
1524 const struct timespec *timeout, const sigset_t *sigmask);
1525 This is a variant of the POSIX pselect(2) function, which in turn
1526 is a stronger variant of 4.2BSD select(2). The difference is that
1527 the higher-resolution "struct timespec" is passed instead of the
1528 lower-resolution "struct timeval" and that a signal mask is speci‐
1529 fied which is temporarily set while waiting for input. For more
1530 details about the arguments and return code semantics see pse‐
1531 lect(2) and select(2).
1532
1533 int pth_poll(struct pollfd *fds, unsigned int nfd, int timeout);
1534 This is a variant of the SysV poll(2) function. It examines the I/O
1535 descriptors which are passed in the array fds to see if some of
1536 them are ready for reading, are ready for writing, or have an
1537 exceptional condition pending, respectively. For more details about
1538 the arguments and return code semantics see poll(2).
1539
1540 ssize_t pth_read(int fd, void *buf, size_t nbytes);
1541 This is a variant of the POSIX read(2) function. It reads up to
1542 nbytes bytes into buf from file descriptor fd. The difference
1543 between read(2) and pth_read(2) is that pth_read(2) suspends execu‐
1544 tion of the current thread until the file descriptor is ready for
1545 reading. For more details about the arguments and return code
1546 semantics see read(2).
1547
1548 ssize_t pth_readv(int fd, const struct iovec *iovec, int iovcnt);
1549 This is a variant of the POSIX readv(2) function. It reads data
1550 from file descriptor fd into the first iovcnt rows of the iov vec‐
1551 tor. The difference between readv(2) and pth_readv(2) is that
1552 pth_readv(2) suspends execution of the current thread until the
1553 file descriptor is ready for reading. For more details about the
1554 arguments and return code semantics see readv(2).
1555
1556 ssize_t pth_write(int fd, const void *buf, size_t nbytes);
1557 This is a variant of the POSIX write(2) function. It writes nbytes
1558 bytes from buf to file descriptor fd. The difference between
1559 write(2) and pth_write(2) is that pth_write(2) suspends execution
1560 of the current thread until the file descriptor is ready for writ‐
1561 ing. For more details about the arguments and return code seman‐
1562 tics see write(2).
1563
1564 ssize_t pth_writev(int fd, const struct iovec *iovec, int iovcnt);
1565 This is a variant of the POSIX writev(2) function. It writes data
1566 to file descriptor fd from the first iovcnt rows of the iov vector.
1567 The difference between writev(2) and pth_writev(2) is that
1568 pth_writev(2) suspends execution of the current thread until the
          file descriptor is ready for writing. For more details about the
1570 arguments and return code semantics see writev(2).
1571
1572 ssize_t pth_pread(int fd, void *buf, size_t nbytes, off_t offset);
1573 This is a variant of the POSIX pread(3) function. It performs the
1574 same action as a regular read(2), except that it reads from a given
1575 position in the file without changing the file pointer. The first
1576 three arguments are the same as for pth_read(3) with the addition
1577 of a fourth argument offset for the desired position inside the
1578 file.
1579
1580 ssize_t pth_pwrite(int fd, const void *buf, size_t nbytes, off_t off‐
1581 set);
1582 This is a variant of the POSIX pwrite(3) function. It performs the
1583 same action as a regular write(2), except that it writes to a given
1584 position in the file without changing the file pointer. The first
1585 three arguments are the same as for pth_write(3) with the addition
1586 of a fourth argument offset for the desired position inside the
1587 file.
1588
1589 ssize_t pth_recv(int fd, void *buf, size_t nbytes, int flags);
1590 This is a variant of the SUSv2 recv(2) function and equal to
1591 ``pth_recvfrom(fd, buf, nbytes, flags, NULL, 0)''.
1592
1593 ssize_t pth_recvfrom(int fd, void *buf, size_t nbytes, int flags,
1594 struct sockaddr *from, socklen_t *fromlen);
1595 This is a variant of the SUSv2 recvfrom(2) function. It reads up to
1596 nbytes bytes into buf from file descriptor fd while using flags and
1597 from/fromlen. The difference between recvfrom(2) and
1598 pth_recvfrom(2) is that pth_recvfrom(2) suspends execution of the
1599 current thread until the file descriptor is ready for reading. For
1600 more details about the arguments and return code semantics see
1601 recvfrom(2).
1602
1603 ssize_t pth_send(int fd, const void *buf, size_t nbytes, int flags);
1604 This is a variant of the SUSv2 send(2) function and equal to
1605 ``pth_sendto(fd, buf, nbytes, flags, NULL, 0)''.
1606
1607 ssize_t pth_sendto(int fd, const void *buf, size_t nbytes, int flags,
1608 const struct sockaddr *to, socklen_t tolen);
1609 This is a variant of the SUSv2 sendto(2) function. It writes nbytes
1610 bytes from buf to file descriptor fd while using flags and
1611 to/tolen. The difference between sendto(2) and pth_sendto(2) is
1612 that pth_sendto(2) suspends execution of the current thread until
1613 the file descriptor is ready for writing. For more details about
1614 the arguments and return code semantics see sendto(2).

EXAMPLE
       The following example is a useless server which does nothing more than
       listening on TCP port 12345 and writing the current time to the
       socket when a connection is established. For each incoming connection
1620 a thread is spawned. Additionally, to see more multithreading, a use‐
1621 less ticker thread runs simultaneously which outputs the current time
1622 to "stderr" every 5 seconds. The example contains no error checking and
1623 is only intended to show you the look and feel of Pth.
1624
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <signal.h>
        #include <netdb.h>
        #include <unistd.h>
        #include "pth.h"
1636
1637 #define PORT 12345
1638
1639 /* the socket connection handler thread */
1640 static void *handler(void *_arg)
1641 {
1642 int fd = (int)_arg;
1643 time_t now;
1644 char *ct;
1645
1646 now = time(NULL);
1647 ct = ctime(&now);
1648 pth_write(fd, ct, strlen(ct));
1649 close(fd);
1650 return NULL;
1651 }
1652
1653 /* the stderr time ticker thread */
1654 static void *ticker(void *_arg)
1655 {
1656 time_t now;
1657 char *ct;
1658 float load;
1659
1660 for (;;) {
1661 pth_sleep(5);
1662 now = time(NULL);
1663 ct = ctime(&now);
1664 ct[strlen(ct)-1] = '\0';
1665 pth_ctrl(PTH_CTRL_GETAVLOAD, &load);
                fprintf(stderr, "ticker: time: %s, average load: %.2f\n", ct, load);
1667 }
1668 }
1669
1670 /* the main thread/procedure */
1671 int main(int argc, char *argv[])
1672 {
1673 pth_attr_t attr;
1674 struct sockaddr_in sar;
1675 struct protoent *pe;
            struct sockaddr_in peer_addr;
            socklen_t peer_len;
1678 int sa, sw;
1679 int port;
1680
1681 pth_init();
1682 signal(SIGPIPE, SIG_IGN);
1683
1684 attr = pth_attr_new();
1685 pth_attr_set(attr, PTH_ATTR_NAME, "ticker");
1686 pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);
1687 pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);
1688 pth_spawn(attr, ticker, NULL);
1689
1690 pe = getprotobyname("tcp");
1691 sa = socket(AF_INET, SOCK_STREAM, pe->p_proto);
1692 sar.sin_family = AF_INET;
1693 sar.sin_addr.s_addr = INADDR_ANY;
1694 sar.sin_port = htons(PORT);
1695 bind(sa, (struct sockaddr *)&sar, sizeof(struct sockaddr_in));
1696 listen(sa, 10);
1697
1698 pth_attr_set(attr, PTH_ATTR_NAME, "handler");
1699 for (;;) {
1700 peer_len = sizeof(peer_addr);
1701 sw = pth_accept(sa, (struct sockaddr *)&peer_addr, &peer_len);
1702 pth_spawn(attr, handler, (void *)sw);
1703 }
1704 }

BUILD ENVIRONMENTS
1707 In this section we will discuss the canonical ways to establish the
1708 build environment for a Pth based program. The possibilities supported
1709 by Pth range from very simple environments to rather complex ones.
1710
1711 Manual Build Environment (Novice)
1712
       As a first example, assume we have the above test program stored in
       the source file "foo.c". Then we can create a very simple build envi‐
1715 ronment by just adding the following "Makefile":
1716
1717 $ vi Makefile
1718 ⎪ CC = cc
1719 ⎪ CFLAGS = `pth-config --cflags`
1720 ⎪ LDFLAGS = `pth-config --ldflags`
1721 ⎪ LIBS = `pth-config --libs`
1722 ⎪
1723 ⎪ all: foo
1724 ⎪ foo: foo.o
1725 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1726 ⎪ foo.o: foo.c
1727 ⎪ $(CC) $(CFLAGS) -c foo.c
1728 ⎪ clean:
1729 ⎪ rm -f foo foo.o
1730
1731 This imports the necessary compiler and linker flags on-the-fly from
1732 the Pth installation via its "pth-config" program. This approach is
1733 straight-forward and works fine for small projects.
1734
1735 Autoconf Build Environment (Advanced)
1736
1737 The previous approach is simple but inflexible. First, to speed up
1738 building, it would be nice to not expand the compiler and linker flags
1739 every time the compiler is started. Second, it would be useful to also
1740 be able to build against uninstalled Pth, that is, against a Pth source
1741 tree which was just configured and built, but not installed. Third, it
       would also be useful to allow checking of the Pth version to make sure
       it is at least a minimum required version. And finally, it would
       also be great to make sure Pth works correctly by first performing some
1745 sanity compile and run-time checks. All this can be done if we use GNU
1746 autoconf and the "AC_CHECK_PTH" macro provided by Pth. For this, we
1747 establish the following three files:
1748
1749 First we again need the "Makefile", but this time it contains autoconf
1750 placeholders and additional cleanup targets. And we create it under the
1751 name "Makefile.in", because it is now an input file for autoconf:
1752
1753 $ vi Makefile.in
1754 ⎪ CC = @CC@
1755 ⎪ CFLAGS = @CFLAGS@
1756 ⎪ LDFLAGS = @LDFLAGS@
1757 ⎪ LIBS = @LIBS@
1758 ⎪
1759 ⎪ all: foo
1760 ⎪ foo: foo.o
1761 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1762 ⎪ foo.o: foo.c
1763 ⎪ $(CC) $(CFLAGS) -c foo.c
1764 ⎪ clean:
1765 ⎪ rm -f foo foo.o
1766 ⎪ distclean:
1767 ⎪ rm -f foo foo.o
1768 ⎪ rm -f config.log config.status config.cache
1769 ⎪ rm -f Makefile
1770
1771 Because autoconf generates additional files, we added a canonical
1772 "distclean" target which cleans this up. Secondly, we wrote "config‐
1773 ure.ac", a (minimal) autoconf script specification:
1774
1775 $ vi configure.ac
1776 ⎪ AC_INIT(Makefile.in)
1777 ⎪ AC_CHECK_PTH(1.3.0)
1778 ⎪ AC_OUTPUT(Makefile)
1779
1780 Then we let autoconf's "aclocal" program generate for us an "aclo‐
1781 cal.m4" file containing Pth's "AC_CHECK_PTH" macro. Then we generate
1782 the final "configure" script out of this "aclocal.m4" file and the
1783 "configure.ac" file:
1784
1785 $ aclocal --acdir=`pth-config --acdir`
1786 $ autoconf
1787
1788 After these steps, the working directory should look similar to this:
1789
1790 $ ls -l
1791 -rw-r--r-- 1 rse users 176 Nov 3 11:11 Makefile.in
1792 -rw-r--r-- 1 rse users 15314 Nov 3 11:16 aclocal.m4
1793 -rwxr-xr-x 1 rse users 52045 Nov 3 11:16 configure
1794 -rw-r--r-- 1 rse users 63 Nov 3 11:11 configure.ac
1795 -rw-r--r-- 1 rse users 4227 Nov 3 11:11 foo.c
1796
1797 If we now run "configure" we get a correct "Makefile" which immediately
1798 can be used to build "foo" (assuming that Pth is already installed
1799 somewhere, so that "pth-config" is in $PATH):
1800
1801 $ ./configure
1802 creating cache ./config.cache
1803 checking for gcc... gcc
1804 checking whether the C compiler (gcc ) works... yes
1805 checking whether the C compiler (gcc ) is a cross-compiler... no
1806 checking whether we are using GNU C... yes
1807 checking whether gcc accepts -g... yes
1808 checking how to run the C preprocessor... gcc -E
1809 checking for GNU Pth... version 1.3.0, installed under /usr/local
1810 updating cache ./config.cache
1811 creating ./config.status
1812 creating Makefile
1813 rse@en1:/e/gnu/pth/ac
1814 $ make
1815 gcc -g -O2 -I/usr/local/include -c foo.c
1816 gcc -L/usr/local/lib -o foo foo.o -lpth
1817
1818 If Pth is installed in non-standard locations or "pth-config" is not in
1819 $PATH, one just has to drop the "configure" script a note about the
       location by running "configure" with the option "--with-pth=dir" (where
1821 dir is the argument which was used with the "--prefix" option when Pth
1822 was installed).
1823
1824 Autoconf Build Environment with Local Copy of Pth (Expert)
1825
       Finally, let us assume the "foo" program is distributed under either
       the GPL or LGPL license and we want to make it a stand-alone package
       for easier distribution and installation. That is, we don't want to
1829 oblige the end-user to install Pth just to allow our "foo" package to
1830 compile. For this, it is a convenient practice to include the required
1831 libraries (here Pth) into the source tree of the package (here "foo").
1832 Pth ships with all necessary support to allow us to easily achieve this
1833 approach. Say, we want Pth in a subdirectory named "pth/" and this
1834 directory should be seamlessly integrated into the configuration and
1835 build process of "foo".
1836
1837 First we again start with the "Makefile.in", but this time it is a more
1838 advanced version which supports subdirectory movement:
1839
1840 $ vi Makefile.in
1841 ⎪ CC = @CC@
1842 ⎪ CFLAGS = @CFLAGS@
1843 ⎪ LDFLAGS = @LDFLAGS@
1844 ⎪ LIBS = @LIBS@
1845 ⎪
1846 ⎪ SUBDIRS = pth
1847 ⎪
1848 ⎪ all: subdirs_all foo
1849 ⎪
1850 ⎪ subdirs_all:
1851 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=all
1852 ⎪ subdirs_clean:
1853 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=clean
1854 ⎪ subdirs_distclean:
1855 ⎪ @$(MAKE) $(MFLAGS) subdirs TARGET=distclean
1856 ⎪ subdirs:
1857 ⎪ @for subdir in $(SUBDIRS); do \
1858 ⎪ echo "===> $$subdir ($(TARGET))"; \
1859 ⎪ (cd $$subdir; $(MAKE) $(MFLAGS) $(TARGET) ⎪⎪ exit 1) ⎪⎪ exit 1; \
1860 ⎪ echo "<=== $$subdir"; \
1861 ⎪ done
1862 ⎪
1863 ⎪ foo: foo.o
1864 ⎪ $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
1865 ⎪ foo.o: foo.c
1866 ⎪ $(CC) $(CFLAGS) -c foo.c
1867 ⎪
1868 ⎪ clean: subdirs_clean
1869 ⎪ rm -f foo foo.o
1870 ⎪ distclean: subdirs_distclean
1871 ⎪ rm -f foo foo.o
1872 ⎪ rm -f config.log config.status config.cache
1873 ⎪ rm -f Makefile
1874
1875 Then we create a slightly different autoconf script "configure.ac":
1876
1877 $ vi configure.ac
1878 ⎪ AC_INIT(Makefile.in)
1879 ⎪ AC_CONFIG_AUX_DIR(pth)
1880 ⎪ AC_CHECK_PTH(1.3.0, subdir:pth --disable-tests)
1881 ⎪ AC_CONFIG_SUBDIRS(pth)
1882 ⎪ AC_OUTPUT(Makefile)
1883
1884 Here we provided a default value for "foo"'s "--with-pth" option as the
1885 second argument to "AC_CHECK_PTH" which indicates that Pth can be found
1886 in the subdirectory named "pth/". Additionally we specified that the
1887 "--disable-tests" option of Pth should be passed to the "pth/" subdi‐
1888 rectory, because we need only to build the Pth library itself. And we
       added an "AC_CONFIG_SUBDIRS" call which indicates to autoconf that it
       should configure the "pth/" subdirectory, too. The "AC_CONFIG_AUX_DIR"
       directive was added just to make autoconf happy, because it wants to
       find an "install.sh" or "shtool" script if "AC_CONFIG_SUBDIRS" is used.
1893
1894 Now we let autoconf's "aclocal" program again generate for us an "aclo‐
1895 cal.m4" file with the contents of Pth's "AC_CHECK_PTH" macro. Finally
1896 we generate the "configure" script out of this "aclocal.m4" file and
1897 the "configure.ac" file.
1898
1899 $ aclocal --acdir=`pth-config --acdir`
1900 $ autoconf
1901
1902 Now we have to create the "pth/" subdirectory itself. For this, we
1903 extract the Pth distribution to the "foo" source tree and just rename
1904 it to "pth/":
1905
1906 $ gunzip <pth-X.Y.Z.tar.gz ⎪ tar xvf -
1907 $ mv pth-X.Y.Z pth
1908
       Optionally, to reduce the size of the "pth/" subdirectory, we can strip
1910 down the Pth sources to a minimum with the striptease feature:
1911
1912 $ cd pth
1913 $ ./configure
1914 $ make striptease
1915 $ cd ..
1916
1917 After this the source tree of "foo" should look similar to this:
1918
1919 $ ls -l
1920 -rw-r--r-- 1 rse users 709 Nov 3 11:51 Makefile.in
1921 -rw-r--r-- 1 rse users 16431 Nov 3 12:20 aclocal.m4
1922 -rwxr-xr-x 1 rse users 57403 Nov 3 12:21 configure
1923 -rw-r--r-- 1 rse users 129 Nov 3 12:21 configure.ac
1924 -rw-r--r-- 1 rse users 4227 Nov 3 11:11 foo.c
1925 drwxr-xr-x 2 rse users 3584 Nov 3 12:36 pth
1926 $ ls -l pth/
1927 -rw-rw-r-- 1 rse users 26344 Nov 1 20:12 COPYING
1928 -rw-rw-r-- 1 rse users 2042 Nov 3 12:36 Makefile.in
1929 -rw-rw-r-- 1 rse users 3967 Nov 1 19:48 README
1930 -rw-rw-r-- 1 rse users 340 Nov 3 12:36 README.1st
1931 -rw-rw-r-- 1 rse users 28719 Oct 31 17:06 config.guess
1932 -rw-rw-r-- 1 rse users 24274 Aug 18 13:31 config.sub
1933 -rwxrwxr-x 1 rse users 155141 Nov 3 12:36 configure
1934 -rw-rw-r-- 1 rse users 162021 Nov 3 12:36 pth.c
1935 -rw-rw-r-- 1 rse users 18687 Nov 2 15:19 pth.h.in
1936 -rw-rw-r-- 1 rse users 5251 Oct 31 12:46 pth_acdef.h.in
1937 -rw-rw-r-- 1 rse users 2120 Nov 1 11:27 pth_acmac.h.in
1938 -rw-rw-r-- 1 rse users 2323 Nov 1 11:27 pth_p.h.in
1939 -rw-rw-r-- 1 rse users 946 Nov 1 11:27 pth_vers.c
1940 -rw-rw-r-- 1 rse users 26848 Nov 1 11:27 pthread.c
1941 -rw-rw-r-- 1 rse users 18772 Nov 1 11:27 pthread.h.in
1942 -rwxrwxr-x 1 rse users 26188 Nov 3 12:36 shtool
1943
1944 Now when we configure and build the "foo" package it looks similar to
1945 this:
1946
1947 $ ./configure
1948 creating cache ./config.cache
1949 checking for gcc... gcc
1950 checking whether the C compiler (gcc ) works... yes
1951 checking whether the C compiler (gcc ) is a cross-compiler... no
1952 checking whether we are using GNU C... yes
1953 checking whether gcc accepts -g... yes
1954 checking how to run the C preprocessor... gcc -E
1955 checking for GNU Pth... version 1.3.0, local under pth
1956 updating cache ./config.cache
1957 creating ./config.status
1958 creating Makefile
1959 configuring in pth
1960 running /bin/sh ./configure --enable-subdir --enable-batch
1961 --disable-tests --cache-file=.././config.cache --srcdir=.
1962 loading cache .././config.cache
1963 checking for gcc... (cached) gcc
1964 checking whether the C compiler (gcc ) works... yes
1965 checking whether the C compiler (gcc ) is a cross-compiler... no
1966 [...]
1967 $ make
1968 ===> pth (all)
1969 ./shtool scpp -o pth_p.h -t pth_p.h.in -Dcpp -Cintern -M '==#==' pth.c
1970 pth_vers.c
1971 gcc -c -I. -O2 -pipe pth.c
1972 gcc -c -I. -O2 -pipe pth_vers.c
1973 ar rc libpth.a pth.o pth_vers.o
1974 ranlib libpth.a
1975 <=== pth
1976 gcc -g -O2 -Ipth -c foo.c
1977 gcc -Lpth -o foo foo.o -lpth
1978
1979 As you can see, autoconf now automatically configures the local
1980 (stripped down) copy of Pth in the subdirectory "pth/" and the "Make‐
1981 file" automatically builds the subdirectory, too.

SYSTEM CALL MAPPING
       By default, Pth uses an explicit API, including the system calls. For
       instance, you have to explicitly use pth_read(3) when you need a
       thread-aware read(3), and you cannot expect that by just calling
       read(3) only the current thread is blocked. Instead, with the
       standard read(3) call the whole process will be blocked. For some
       applications (mainly those consisting of lots of third-party code),
       however, this is inconvenient. Here it is required that a call to
       read(3) `magically' means pth_read(3). The problem is that Pth
       cannot provide such magic by default because it is not really
       portable. Nevertheless Pth provides a two-step approach to solve
       this problem:
1994
1995 Soft System Call Mapping
1996
1997 This variant is available on all platforms and can always be enabled by
1998 building Pth with "--enable-syscall-soft". This then triggers some
1999 "#define"'s in the "pth.h" header which map for instance read(3) to
2000 pth_read(3), etc. Currently the following functions are mapped:
2001 fork(2), nanosleep(3), usleep(3), sleep(3), sigwait(3), waitpid(2),
2002 system(3), select(2), poll(2), connect(2), accept(2), read(2),
2003 write(2), recv(2), send(2), recvfrom(2), sendto(2).
2004
       The drawback of this approach is that all source files of the
       application in which these function calls occur have to include
       "pth.h", of course. This also means that existing libraries,
       including the vendor's stdio, usually will still block the whole
       process if one of their I/O functions blocks.
2010
2011 Hard System Call Mapping
2012
2013 This variant is available only on those platforms where the syscall(2)
2014 function exists and there it can be enabled by building Pth with
2015 "--enable-syscall-hard". This then builds wrapper functions (for
2016 instances read(3)) into the Pth library which internally call the real
2017 Pth replacement functions (pth_read(3)). Currently the following func‐
2018 tions are mapped: fork(2), nanosleep(3), usleep(3), sleep(3), wait‐
2019 pid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2),
2020 write(2).
2021
       The drawback of this approach is that it depends on the syscall(2)
       interface, and prototype conflicts can occur while building the
       wrapper functions due to different function signatures in the vendor
       C header files. But the advantage of this mapping variant is that
       the source files of the application where these function calls occur
       do not have to include "pth.h", and that existing libraries,
       including the vendor's stdio, magically become thread-aware (and
       then block only the current thread).

IMPLEMENTATION NOTES
       Pth is very portable because it has only one part which perhaps has to
       be ported to new platforms (the machine context initialization). But
       even this part is written in a way which works on almost all Unix
       platforms which support makecontext(2) or at least sigstack(2) or
       sigaltstack(2) [see "pth_mctx.c" for details]. All other Pth code is
       based on POSIX and ANSI C only.
2038
       The context switching is done via either SUSv2 makecontext(2) or POSIX
       [sig]setjmp(3) and [sig]longjmp(3). Here all CPU registers, the
       program counter and the stack pointer are switched. Additionally the
       Pth dispatcher also switches the global Unix "errno" variable [see
       "pth_mctx.c" for details] and the signal mask (either implicitly via
       sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls).
2045
2046 The Pth event manager is mainly select(2) and gettimeofday(2) based,
2047 i.e., the current time is fetched via gettimeofday(2) once per context
2048 switch for time calculations and all I/O events are implemented via a
2049 single central select(2) call [see "pth_sched.c" for details].
2050
2051 The thread control block management is done via virtual priority queues
2052 without any additional data structure overhead. For this, the queue
2053 linkage attributes are part of the thread control blocks and the queues
2054 are actually implemented as rings with a selected element as the entry
2055 point [see "pth_tcb.h" and "pth_pqueue.c" for details].
2056
       Most time-critical code sections (especially the dispatcher and event
       manager) are sped up by inline functions (implemented as ANSI C pre-
2059 processor macros). Additionally any debugging code is completely
2060 removed from the source when not built with "-DPTH_DEBUG" (see Autoconf
2061 "--enable-debug" option), i.e., not only stub functions remain [see
2062 "pth_debug.c" for details].

RESTRICTIONS
       Pth (intentionally) provides no replacements for non-thread-safe
       functions (like strtok(3), which uses a static internal buffer) or
       synchronous system functions (like gethostbyname(3), which doesn't
       provide an asynchronous mode where it doesn't block). When you want
       to use those functions in your server application together with
       threads, you have to link the application against special
       third-party libraries (or, for thread-safe/reentrant functions,
       possibly against an existing "libc_r" of the platform vendor). For
       an asynchronous DNS resolver library use the GNU adns package from
       Ian Jackson (see http://www.gnu.org/software/adns/adns.html).

HISTORY
2077 The Pth library was designed and implemented between February and July
2078 1999 by Ralf S. Engelschall after evaluating numerous (mostly preemp‐
2079 tive) thread libraries and after intensive discussions with Peter
2080 Simons, Martin Kraemer, Lars Eilebrecht and Ralph Babel related to an
2081 experimental (matrix based) non-preemptive C++ scheduler class written
2082 by Peter Simons.
2083
2084 Pth was then implemented in order to combine the non-preemptive
2085 approach of multithreading (which provides better portability and per‐
2086 formance) with an API similar to the popular one found in Pthread
2087 libraries (which provides easy programming).
2088
2089 So the essential idea of the non-preemptive approach was taken over
       from Peter Simons' scheduler. The priority-based scheduling algorithm
2091 was suggested by Martin Kraemer. Some code inspiration also came from
2092 an experimental threading library (rsthreads) written by Robert S. Thau
2093 for an ancient internal test version of the Apache webserver. The con‐
2094 cept and API of message ports was borrowed from AmigaOS' Exec subsys‐
2095 tem. The concept and idea for the flexible event mechanism came from
2096 Paul Vixie's eventlib (which can be found as a part of BIND v8).

BUG REPORTS AND SUPPORT
2099 If you think you have found a bug in Pth, you should send a report as
2100 complete as possible to bug-pth@gnu.org. If you can, please try to fix
2101 the problem and include a patch, made with '"diff -u3"', in your
2102 report. Always, at least, include a reasonable amount of description in
2103 your report to allow the author to deterministically reproduce the bug.
2104
2105 For further support you additionally can subscribe to the
2106 pth-users@gnu.org mailing list by sending an Email to
2107 pth-users-request@gnu.org with `"subscribe pth-users"' (or `"subscribe
2108 pth-users" address' if you want to subscribe from a particular Email
2109 address) in the body. Then you can discuss your issues with other Pth
2110 users by sending messages to pth-users@gnu.org. Currently (as of August
       2000) you can reach about 110 Pth users on this mailing list. Old
       postings can be found at http://www.mail-archive.com/pth-users@gnu.org/.

SEE ALSO
2115 Related Web Locations
2116
       `comp.programming.threads Newsgroup Archive',
       http://www.deja.com/topics_if.xp?search=topic&group=comp.programming.threads
2119
2120 `comp.programming.threads Frequently Asked Questions (F.A.Q.)',
2121 http://www.lambdacs.com/newsgroup/FAQ.html
2122
2123 `Multithreading - Definitions and Guidelines', Numeric Quest Inc 1998;
2124 http://www.numeric-quest.com/lang/multi-frame.html
2125
       `The Single UNIX Specification, Version 2 - Threads', The Open Group
       1997; http://www.opengroup.org/onlinepubs/007908799/xsh/threads.html
2128
       SMI Thread Resources, Sun Microsystems Inc;
       http://www.sun.com/workshop/threads/
2131
2132 Bibliography on threads and multithreading, Torsten Amundsen;
2133 http://liinwww.ira.uka.de/bibliography/Os/threads.html
2134
2135 Related Books
2136
2137 B. Nichols, D. Buttlar, J.P. Farrel: `Pthreads Programming - A POSIX
2138 Standard for Better Multiprocessing', O'Reilly 1996; ISBN 1-56592-115-1
2139
2140 B. Lewis, D. J. Berg: `Multithreaded Programming with Pthreads', Sun
2141 Microsystems Press, Prentice Hall 1998; ISBN 0-13-680729-1
2142
2143 B. Lewis, D. J. Berg: `Threads Primer - A Guide To Multithreaded Pro‐
2144 gramming', Prentice Hall 1996; ISBN 0-13-443698-9
2145
2146 S. J. Norton, M. D. Dipasquale: `Thread Time - The Multithreaded Pro‐
2147 gramming Guide', Prentice Hall 1997; ISBN 0-13-190067-6
2148
2149 D. R. Butenhof: `Programming with POSIX Threads', Addison Wesley 1997;
2150 ISBN 0-201-63392-2
2151
2152 Related Manpages
2153
2154 pth-config(1), pthread(3).
2155
2156 getcontext(2), setcontext(2), makecontext(2), swapcontext(2),
2157 sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2),
2158 sigaddset(2), sigprocmask(2), sigsuspend(2), sigsetjmp(3), sig‐
2159 longjmp(3), setjmp(3), longjmp(3), select(2), gettimeofday(2).

AUTHOR
2162 Ralf S. Engelschall
2163 rse@engelschall.com
2164 www.engelschall.com
2165
2166
2167
08-Jun-2006                      GNU Pth 2.0.7                          pth(3)