dispatch_async(3)        BSD Library Functions Manual        dispatch_async(3)

NAME
dispatch_async, dispatch_sync — schedule blocks for execution

SYNOPSIS
     #include <dispatch/dispatch.h>

     void
     dispatch_async(dispatch_queue_t queue, void (^block)(void));

     void
     dispatch_sync(dispatch_queue_t queue, void (^block)(void));

     void
     dispatch_async_f(dispatch_queue_t queue, void *context,
         void (*function)(void *));

     void
     dispatch_sync_f(dispatch_queue_t queue, void *context,
         void (*function)(void *));

DESCRIPTION
The dispatch_async() and dispatch_sync() functions schedule blocks for
concurrent execution within the dispatch(3) framework. Blocks are
submitted to a queue which dictates the policy for their execution. See
dispatch_queue_create(3) for more information about creating dispatch
queues.

These functions support efficient temporal synchronization, background
concurrency and data-level concurrency. These same functions can also be
used for efficient notification of the completion of asynchronous blocks
(a.k.a. callbacks).

TEMPORAL SYNCHRONIZATION
Synchronization is often required when multiple threads of execution
access shared data concurrently. The simplest form of synchronization is
mutual-exclusion (a lock), whereby different subsystems execute
concurrently until a shared critical section is entered. In the
pthread(3) family of procedures, temporal synchronization is accomplished
like so:

     int r = pthread_mutex_lock(&my_lock);
     assert(r == 0);

     // critical section

     r = pthread_mutex_unlock(&my_lock);
     assert(r == 0);

The dispatch_sync() function may be used with a serial queue to
accomplish the same style of synchronization. For example:

     dispatch_sync(my_queue, ^{
         // critical section
     });

In addition to providing a more concise expression of synchronization,
this approach is less error prone, as the critical section cannot be
accidentally left without restoring the queue to a reentrant state.

The dispatch_async() function may be used to implement deferred critical
sections when the result of the block is not needed locally. Deferred
critical sections have the same synchronization properties as the above
code, but are non-blocking and therefore more efficient to perform. For
example:

     dispatch_async(my_queue, ^{
         // critical section
     });

BACKGROUND CONCURRENCY
The dispatch_async() function may be used to execute trivial background
tasks on a global concurrent queue. For example:

     dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
         // background operation
     });

This approach is an efficient replacement for pthread_create(3).

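For comparison, a sketch of the same background task written against
pthread(3) directly (the names here are illustrative, not taken from this
manual); dispatch_async() avoids creating and tearing down a dedicated
thread for each such task:

     static void *
     background_thread(void *context)
     {
         // background operation
         return NULL;
     }

     pthread_t tid;
     int r = pthread_create(&tid, NULL, background_thread, NULL);
     assert(r == 0);
     r = pthread_detach(tid);
     assert(r == 0);
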
COMPLETION CALLBACKS
Completion callbacks can be accomplished via nested calls to the
dispatch_async() function. It is important to remember to retain the
destination queue before the first call to dispatch_async(), and to
release that queue at the end of the completion callback, to ensure the
destination queue is not deallocated while the completion callback is
pending. For example:

     void
     async_read(object_t obj,
             void *where, size_t bytes,
             dispatch_queue_t destination_queue,
             void (^reply_block)(ssize_t r, int err))
     {
         // There are better ways of doing async I/O.
         // This is just an example of nested blocks.

         dispatch_retain(destination_queue);

         dispatch_async(obj->queue, ^{
             ssize_t r = read(obj->fd, where, bytes);
             int err = errno;   // capture errno immediately after read(2)

             dispatch_async(destination_queue, ^{
                 reply_block(r, err);
             });
             dispatch_release(destination_queue);
         });
     }

RECURSIVE LOCKS
While dispatch_sync() can replace a lock, it cannot replace a recursive
lock. Unlike locks, queues support both asynchronous and synchronous
operations, and those operations are ordered by definition. A recursive
call to dispatch_sync() causes a simple deadlock as the currently
executing block waits for the next block to complete, but the next block
will not start until the currently running block completes.

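For illustration (a sketch, not taken from this manual), submitting
synchronously to a serial queue from a block already running on that
queue produces exactly this deadlock:

     dispatch_sync(my_queue, ^{
         // This block now occupies my_queue.  The inner dispatch_sync()
         // waits for a block that cannot start until this one returns,
         // so execution never proceeds past this point.
         dispatch_sync(my_queue, ^{
             // never reached
         });
     });
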
As the dispatch framework was designed, we studied recursive locks. We
found that the vast majority of recursive locks are deployed
retroactively when ill-defined lock hierarchies are discovered. As a
consequence, the adoption of recursive locks often mutates obvious bugs
into obscure ones. This study also revealed an insight: if reentrancy is
unavoidable, then reader/writer locks are preferable to recursive locks.
Disciplined use of reader/writer locks enables reentrancy only when
reentrancy is safe (the "read" side of the lock).

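For example, a sketch of that discipline using pthread(3) reader/writer
locks (the shared state and function names here are hypothetical): POSIX
permits a thread to hold multiple read locks, so re-entering the read
side is safe, while the write side remains exclusive:

     static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
     static size_t table_size;                // hypothetical shared state

     // Read side: may be re-entered by a thread already holding the
     // read lock, so reentrancy is confined to the safe case.
     size_t
     current_size(void)
     {
         pthread_rwlock_rdlock(&table_lock);
         size_t n = table_size;
         pthread_rwlock_unlock(&table_lock);
         return n;
     }

     // Write side: exclusive; must not re-enter the lock.
     void
     grow_table(size_t delta)
     {
         pthread_rwlock_wrlock(&table_lock);
         table_size += delta;
         pthread_rwlock_unlock(&table_lock);
     }
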
Nevertheless, if it is absolutely necessary, what follows is an imperfect
way of implementing recursive locks using the dispatch framework:

     void
     sloppy_lock(object_t object, void (^block)(void))
     {
         if (object->owner == pthread_self()) {
             // Already "own" the queue; run the block directly.
             return block();
         }
         dispatch_sync(object->queue, ^{
             object->owner = pthread_self();
             block();
             object->owner = NULL;
         });
     }

The above example does not solve the case where queue A runs on thread X,
which calls dispatch_sync() against queue B, which runs on thread Y,
which recursively calls dispatch_sync() against queue A; this deadlocks
both examples. This is bug-for-bug compatible with nontrivial pthread
usage. In fact, nontrivial reentrancy is impossible to support in
recursive locks once the ultimate level of reentrancy is deployed (IPC or
RPC).

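A sketch of the waiting chain described above (queue names are
hypothetical); queue_a cannot start the innermost block because it is
still occupied by the outermost one, so the chain never completes:

     dispatch_sync(queue_a, ^{          // occupies queue_a
         dispatch_sync(queue_b, ^{      // occupies queue_b
             dispatch_sync(queue_a, ^{  // waits for queue_a forever
                 // never reached
             });
         });
     });
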
IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied
reference on the target queue. In other words, the synchronous function
borrows the reference of the calling function (this is valid because the
calling function is blocked waiting for the result of the synchronous
function, and therefore cannot modify the reference count of the target
queue until after the synchronous function has returned). For example:

     dispatch_queue_t queue = dispatch_queue_create("com.example.queue", NULL);
     assert(queue);
     dispatch_sync(queue, ^{
         do_something();
         //dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
     });
     dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()

This is in contrast to asynchronous functions, which must retain both the
block and the target queue for the duration of the asynchronous operation
(as the calling function may immediately release its interest in these
objects).

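For example, the following pattern (a sketch, not from this manual; the
do_something() call stands in for arbitrary work) is safe because
dispatch_async() retains both the queue and the block until the block has
finished executing:

     dispatch_queue_t q = dispatch_queue_create("com.example.worker", NULL);
     assert(q);
     dispatch_async(q, ^{
         do_something();
     });
     dispatch_release(q);  // safe: dispatch_async() holds its own reference
                           // until the block has run
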
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around
dispatch_async() with the addition of a semaphore to wait for completion
of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch
semaphores. The actual implementation of the dispatch_sync() function may
be optimized and differ from the above description.

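As a rough sketch of that idea (illustrative only; the example_sync()
name is not part of the API, and dispatch_sync() itself is more
efficient), a synchronous submission could be built from dispatch_async()
and a dispatch semaphore:

     void
     example_sync(dispatch_queue_t queue, void (^block)(void))
     {
         dispatch_semaphore_t sema = dispatch_semaphore_create(0);
         assert(sema);
         dispatch_async(queue, ^{
             block();
             dispatch_semaphore_signal(sema);  // mark the block complete
         });
         dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
         dispatch_release(sema);
     }
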
The dispatch_async() function is a wrapper around dispatch_async_f().
The application-defined context parameter is passed to the function when
it is invoked on the target queue.

The dispatch_sync() function is a wrapper around dispatch_sync_f(). The
application-defined context parameter is passed to the function when it
is invoked on the target queue.

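For example, a sketch of the function-based variant (the work_ctx
structure, do_work() function, and queue variable are hypothetical); the
context pointer passed to dispatch_async_f() is handed unchanged to the
function when it runs on the target queue:

     struct work_ctx {
         int value;
     };

     static void
     do_work(void *context)
     {
         struct work_ctx *ctx = context;
         // ... use ctx->value ...
         free(ctx);   // the function owns the context once invoked
     }

     struct work_ctx *ctx = malloc(sizeof(*ctx));
     assert(ctx);
     ctx->value = 42;
     dispatch_async_f(queue, ctx, do_work);
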
SEE ALSO
dispatch(3), dispatch_apply(3), dispatch_once(3),
dispatch_queue_create(3), dispatch_semaphore_create(3)

Darwin                            May 1, 2009                           Darwin