dispatch_apply(3)        BSD Library Functions Manual        dispatch_apply(3)

NAME
     dispatch_apply — schedule blocks for iterative execution

SYNOPSIS
     #include <dispatch/dispatch.h>

     void
     dispatch_apply(size_t iterations, dispatch_queue_t queue,
         void (^block)(size_t));

     void
     dispatch_apply_f(size_t iterations, dispatch_queue_t queue,
         void *context, void (*function)(void *, size_t));

DESCRIPTION
     The dispatch_apply() function provides data-level concurrency through a
     "for (;;)" loop-like primitive:

           size_t iterations = 10;

           // 'idx' is zero-indexed, just like:
           // for (idx = 0; idx < iterations; idx++)

           dispatch_apply(iterations, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                printf("%zu\n", idx);
           });

     Although any queue can be used, it is strongly recommended to use
     DISPATCH_APPLY_AUTO as the queue argument to both dispatch_apply() and
     dispatch_apply_f(), as shown in the example above, since this allows the
     system to automatically use worker threads that match the configuration
     of the current thread as closely as possible. No assumptions should be
     made about which global concurrent queue will be used.

     Like a "for (;;)" loop, the dispatch_apply() function is synchronous. If
     asynchronous behavior is desired, wrap the call to dispatch_apply() with
     a call to dispatch_async() against another queue.

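     For example, one possible sketch of this pattern (process() here stands
     in for the caller's real per-iteration work, and the choice of global
     queue is illustrative only):

           // Perform the iterations without blocking the caller.
           dispatch_async(dispatch_get_global_queue(
                     DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                dispatch_apply(iterations, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                     process(idx);    // placeholder for real work
                });
                // Reached only after every iteration has completed.
           });
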
     Sometimes, when the block passed to dispatch_apply() is simple, the use
     of striding can tune performance. Calculating the optimal stride is best
     left to experimentation. Start with a stride of one and work upwards
     until the desired performance is achieved (perhaps using a power-of-two
     search):

           #define STRIDE 3

           // Each invocation handles STRIDE consecutive indices.
           dispatch_apply(count / STRIDE, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                size_t j = idx * STRIDE;
                size_t j_stop = j + STRIDE;
                do {
                     printf("%zu\n", j++);
                } while (j < j_stop);
           });

           // Handle the remaining (count % STRIDE) indices serially.
           size_t i;
           for (i = count - (count % STRIDE); i < count; i++) {
                printf("%zu\n", i);
           }

IMPLIED REFERENCES
     Synchronous functions within the dispatch framework hold an implied
     reference on the target queue. In other words, the synchronous function
     borrows the reference of the calling function (this is valid because the
     calling function is blocked waiting for the result of the synchronous
     function, and therefore cannot modify the reference count of the target
     queue until after the synchronous function has returned).

     This is in contrast to asynchronous functions, which must retain both
     the block and the target queue for the duration of the asynchronous
     operation (as the calling function may immediately release its interest
     in these objects).

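     As a brief illustration of these rules (the queue name here is purely
     illustrative):

           dispatch_queue_t queue = dispatch_queue_create("com.example.q", NULL);

           // Synchronous: dispatch_apply() borrows the caller's reference,
           // which remains valid for the duration of the call.
           dispatch_apply(4, queue, ^(size_t idx) {
                printf("%zu\n", idx);
           });

           // Asynchronous: dispatch_async() retains the queue itself, so the
           // caller may release its own reference immediately.
           dispatch_async(queue, ^{
                printf("done\n");
           });
           dispatch_release(queue);  // unnecessary when ARC manages dispatch objects
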
FUNDAMENTALS
     dispatch_apply() and dispatch_apply_f() attempt to quickly create enough
     worker threads to efficiently iterate work in parallel. By contrast, a
     loop that passes work items individually to dispatch_async() or
     dispatch_async_f() incurs more overhead and does not express the desired
     parallel execution semantics to the system, so it may not create an
     optimal number of worker threads for a parallel workload. For this
     reason, prefer dispatch_apply() or dispatch_apply_f() when parallel
     execution is important.

     The dispatch_apply() function is a wrapper around dispatch_apply_f().

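     As an illustration of the function-pointer variant, a minimal sketch
     (square_index() and the results array are illustrative names, not part
     of the API):

           static void
           square_index(void *context, size_t idx)
           {
                int *results = context;            // the context pointer passed below
                results[idx] = (int)(idx * idx);   // each invocation writes its own slot
           }

           int results[10];
           dispatch_apply_f(10, DISPATCH_APPLY_AUTO, results, square_index);
           // All ten invocations of square_index() have returned at this point.
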
CAVEATS
     Unlike dispatch_async(), a block submitted to dispatch_apply() is
     expected to be either independent or dependent only on work already
     performed in lower-indexed invocations of the block. If the block's
     index dependency is non-linear, it is recommended to use a for-loop
     around invocations of dispatch_async(), as sketched below.

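     For example, one possible sketch of that approach (work_queue, group,
     and process() are illustrative names; the serial queue preserves
     submission order, so each block may depend on any previously submitted
     block):

           dispatch_queue_t work_queue =
                dispatch_queue_create("com.example.apply", NULL);
           dispatch_group_t group = dispatch_group_create();

           for (size_t idx = 0; idx < iterations; idx++) {
                dispatch_group_async(group, work_queue, ^{
                     process(idx);    // placeholder for real work
                });
           }

           // Unlike dispatch_apply(), this loop returns immediately, so wait
           // explicitly before using the results.
           dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
           dispatch_release(group);       // unnecessary when ARC manages dispatch objects
           dispatch_release(work_queue);  // unnecessary when ARC manages dispatch objects
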
SEE ALSO
     dispatch(3), dispatch_async(3), dispatch_queue_create(3)

Darwin                           May 1, 2009                            Darwin