xparace(3)                    SAORD Documentation                   xparace(3)

NAME

       XPA Race Conditions

SYNOPSIS

       Potential XPA race conditions and how to avoid them.

DESCRIPTION

       Currently, there is only one known circumstance in which XPA can
       become (temporarily) deadlocked in a race condition: if two or more
       XPA servers send messages to one another using an XPA client routine
       such as XPASet(), they can deadlock while each waits for the other
       server to respond.  (This can happen if the servers call XPAPoll()
       with a time limit and send messages between polling calls.)  The
       deadlock arises because each client routine sends a string to the
       other server to establish the handshake and then waits for that
       server's response.  Since each client is waiting for a response,
       neither can enter its event-handling loop and respond to the other's
       request.  The deadlock persists until one of the timeout periods
       expires, at which point an error condition is triggered and the
       timed-out server returns to its event loop.

       Starting with version 2.1.6, this rare race condition can be avoided
       by setting the XPA_IOCALLSXPA environment variable for servers that
       will make client calls.  Setting this variable causes all XPA socket
       IO calls to process outstanding XPA requests whenever the primary
       socket is not ready for IO.  This means that a server making a
       client call will (recursively) process incoming server requests
       while waiting for client completion.  It also means that a server
       callback routine can handle incoming XPA messages if it makes its
       own XPA call.  The semi-public routine oldvalue=XPAIOCallsXPA(newvalue)
       can be used to turn this behavior off and on temporarily: passing 0
       turns off IO processing, passing 1 turns it back on, and the old
       value is returned by the call.

       By default, the XPA_IOCALLSXPA option is turned off, because we
       judge that the added code complication and overhead involved will
       not be justified by the amount of its use.  Moreover, processing XPA
       requests within socket IO can lead to non-intuitive results, since
       incoming server requests will not necessarily be processed to
       completion in the order in which they are received.

       Aside from setting XPA_IOCALLSXPA, the simplest way to avoid this
       race condition is to multi-process: when you want to send a client
       message, simply start a separate process to call the client routine,
       so that the server is not stopped.  It is probably fastest and
       easiest to use fork() and then have the child call the client
       routine and exit.  You also can use either the system() or popen()
       routine to start one of the command-line programs and do the same
       thing.  Alternatively, you can use XPA's internal launch() routine
       instead of system().  Based on fork() and exec(), this routine is
       more secure than system() because it does not call /bin/sh.

       Starting with version 2.1.5, you also can send an XPAInfo() message
       with the mode string "ack=false".  This will cause the client to
       send a message to the server and then exit without waiting for any
       return message from the server.  This UDP-like behavior will avoid
       the server deadlock when sending short XPAInfo messages.


SEE ALSO

       See xpa(n) for a list of XPA help pages

version 2.1.8                  November 1, 2007                     xparace(3)