Parallel::Iterator(3)    User Contributed Perl Documentation    Parallel::Iterator(3)

NAME
    Parallel::Iterator - Simple parallel execution

SYNOPSIS
        use Parallel::Iterator qw( iterate );

        # A very expensive way to double 100 numbers...

        my @nums = ( 1 .. 100 );

        my $iter = iterate( sub {
            my ( $id, $job ) = @_;
            return $job * 2;
        }, \@nums );

        my @out = ();
        while ( my ( $index, $value ) = $iter->() ) {
            $out[$index] = $value;
        }

    The "map" function applies a user-supplied transformation function to
    each element in a list, returning a new list containing the transformed
    elements.

DESCRIPTION
    This module provides a 'parallel map'. Multiple worker processes are
    forked so that many instances of the transformation function may be
    executed simultaneously.
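
    For example, the serial and parallel forms of a map look like this (a
    sketch; "transform" stands in for any function you supply):

        # Serial map:
        my @out = map { transform($_) } @in;

        # Parallel map with this module:
        use Parallel::Iterator qw( iterate_as_array );
        my @out2 = iterate_as_array(
            sub { my ( $id, $job ) = @_; return transform($job) },
            \@in
        );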

    For time-consuming operations, particularly operations that spend most
    of their time waiting for I/O, this is a big performance win. It also
    provides a simple idiom for making effective use of multi-CPU systems.

    There is, however, a considerable overhead associated with forking, so
    the example in the synopsis (doubling a list of numbers) is not a
    sensible use of this module.

  Basic Usage
    Imagine you have an array of URLs to fetch:

        my @urls = qw(
            http://google.com/
            http://hexten.net/
            http://search.cpan.org/
            ... and lots more ...
        );

    Write a function that retrieves a URL and returns its contents, or undef
    if it can't be fetched:

        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new;    # user agent assumed by the example

        sub fetch {
            my ( $id, $url ) = @_;
            my $resp = $ua->get($url);
            return unless $resp->is_success;
            return $resp->content;
        }

    Now write a function to synthesize a special kind of iterator:

        sub list_iter {
            my @ar  = @_;
            my $pos = 0;
            return sub {
                return if $pos >= @ar;
                my @r = ( $pos, $ar[$pos] );    # Note: returns ( index, value )
                $pos++;
                return @r;
            };
        }

    The returned iterator will return each element of the array in turn and
    then an empty list. In fact it returns both the index and the value of
    each element in the array. Because multiple instances of the
    transformation function execute in parallel, the results won't
    necessarily come back in order. The array index will later allow us to
    put completed items in the correct place in an output array.

    Get an iterator for the list of URLs:

        my $url_iter = list_iter( @urls );

    Then wrap it in another iterator which will return the transformed
    results:

        my $page_iter = iterate( \&fetch, $url_iter );

    Finally loop over the returned iterator storing results:

        my @out = ();
        while ( my ( $index, $value ) = $page_iter->() ) {
            $out[$index] = $value;
        }

    Behind the scenes your program was forked into ten (by default)
    instances of itself which executed the page requests in parallel.

  Simpler interfaces
    Having to construct an iterator is a pain, so "iterate" is smart enough
    to do that for you. Instead of passing an iterator, just pass a
    reference to the array:

        my $page_iter = iterate( \&fetch, \@urls );

    If you pass a hash reference, the iterator you get back will return
    (key, value) pairs:

        my $some_iter = iterate( \&fetch, \%some_hash );
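
    Collecting the results back into a hash might look like this (a minimal
    sketch; &fetch and %some_hash are from the example above):

        my %results = ();
        while ( my ( $key, $value ) = $some_iter->() ) {
            $results{$key} = $value;
        }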

    If the returned iterator is inconvenient you can get back a hash or
    array instead:

        my @done = iterate_as_array( \&fetch, \@urls );

        my %done = iterate_as_hash( \&worker, \%jobs );

  How It Works
    The current process is forked once for each worker. Each forked child
    is connected to the parent by a pair of pipes. The child's STDIN,
    STDOUT and STDERR are unaffected.

    Input values are serialised (using Storable) and passed to the workers.
    Completed work items are serialised and returned.
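
    The following is a minimal sketch of the idea for a single worker; it is
    not the module's actual wire protocol. A forked child reads a frozen job
    from one pipe, does the work, and writes a frozen result back on
    another:

        use strict;
        use warnings;
        use Storable qw( freeze thaw );

        pipe( my $job_r, my $job_w ) or die "pipe: $!";
        pipe( my $res_r, my $res_w ) or die "pipe: $!";

        my $pid = fork // die "fork: $!";
        if ( $pid == 0 ) {    # child: thaw the job, work, freeze the result
            close $job_w;
            close $res_r;
            my ( $id, $value ) = @{ thaw do { local $/; <$job_r> } };
            print {$res_w} freeze [ $id, $value * 2 ];
            exit 0;
        }

        close $job_r;         # parent: send one job, then read the result
        close $res_w;
        print {$job_w} freeze [ 0, 21 ];
        close $job_w;         # EOF tells the child there is no more input
        my ( $id, $result ) = @{ thaw do { local $/; <$res_r> } };
        waitpid $pid, 0;      # here $id == 0 and $result == 42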

  Caveats
    Parallel::Iterator is designed to be simple to use - but the underlying
    forking of the main process can cause mystifying problems unless you
    have an understanding of what is going on behind the scenes.

   Worker execution environment

    All code apart from the worker subroutine executes in the parent
    process as normal. The worker executes in a forked instance of the
    parent process. That means that things like this won't work as
    expected:

        my %tally = ();
        my @r = iterate_as_array( sub {
            my ( $id, $name ) = @_;
            $tally{$name}++;               # might not do what you think it does
            return scalar reverse $name;   # reverse the string, not the list
        }, \@names );

        # Now print out the tally...
        while ( my ( $name, $count ) = each %tally ) {
            printf( "%5d : %s\n", $count, $name );
        }

    Because the worker is a closure it can see the %tally hash from its
    enclosing scope; but because it's running in a forked clone of the
    parent process it modifies its own copy of %tally rather than the
    parent's copy.

    That means that after the job terminates, the %tally in the parent
    process will be empty.

    In general you should avoid side effects in your worker subroutines.
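
    A side-effect-free rewrite of the example above does the counting in the
    parent, where the results arrive (a sketch; @names as before):

        my %tally = ();
        my @r     = ();
        my $iter  = iterate( sub {
            my ( $id, $name ) = @_;
            return scalar reverse $name;    # no side effects in the worker
        }, \@names );

        while ( my ( $id, $reversed ) = $iter->() ) {
            $r[$id] = $reversed;
            $tally{ $names[$id] }++;        # side effects in the parent
        }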

   Serialization

    Values are serialised using Storable to pass to the worker subroutine,
    and results from the worker are again serialised before being passed
    back. Be careful what your values refer to: everything has to be
    serialised. If there's an indirect way to reach a large object graph,
    Storable will find it and performance will suffer.
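
    For example, a small job record that holds a reference to a large shared
    structure will drag the whole structure through Storable on every call
    (a sketch; the names are illustrative):

        my $config = { map { $_ => 'x' x 1024 } 1 .. 10_000 };    # ~10MB
        my @jobs   = map { { id => $_, config => $config } } 1 .. 100;
        # Each job now serialises the whole of $config along with it.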

    To find out how large your serialised values are, serialise one of them
    and check its size:

        use Storable qw( freeze );
        my $serialized = freeze $some_obj;
        print length($serialized), " bytes\n";

    In your tests you may wish to guard against the possibility of a change
    to the structure of your values resulting in a sudden increase in
    serialised size:

        ok length(freeze $some_obj) < 1000, "Object too bulky?";

    See the documentation for Storable for other caveats.

   Performance

    Process forking is expensive. Only use Parallel::Iterator in cases
    where:

    the worker waits for I/O
        The case of fetching web pages is a good example of this. Fetching
        a page with LWP::UserAgent may take as long as a few seconds but
        probably consumes only a few milliseconds of processor time.
        Running many requests in parallel is a huge win - but be kind to
        the server you're talking to: don't launch a lot of parallel
        requests unless it's your server or you know it can handle the
        load.

    the worker is CPU intensive and you have multiple cores / CPUs
        If the worker is doing an expensive calculation you can parallelise
        that across multiple CPU cores. Benchmark first, though; a rough
        timing sketch follows this list. There's a considerable overhead
        associated with Parallel::Iterator; unless your calculations are
        time-consuming, that overhead will dwarf whatever time they take.
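
    A rough way to benchmark is to time the serial and parallel versions
    side by side (a sketch; expensive() and @jobs stand in for your own
    worker and data):

        use Time::HiRes qw( time );
        use Parallel::Iterator qw( iterate_as_array );

        my $t0     = time;
        my @serial = map { expensive($_) } @jobs;
        printf "serial:   %.2fs\n", time - $t0;

        $t0 = time;
        my @parallel = iterate_as_array(
            { workers => 4 },
            sub { my ( $id, $job ) = @_; return expensive($job) },
            \@jobs
        );
        printf "parallel: %.2fs\n", time - $t0;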

INTERFACE
  "iterate( [ $options ], $worker, $iterator )"
    Get an iterator that applies the supplied transformation function to
    each value returned by the input iterator.

    Instead of an iterator you may pass an array or hash reference and
    "iterate" will convert it internally into a suitable iterator.

    If you are doing this you may wish to investigate "iterate_as_hash" and
    "iterate_as_array".
   Options

    A reference to a hash of options may be supplied as the first argument.
    The following options are supported:

    "workers"
        The number of concurrent processes to launch. Set this to 0 to
        disable forking. Defaults to 10 on systems that support fork, and
        to 0 (forking disabled) on those that do not.
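
        For example (a sketch):

            my $iter = iterate( { workers => 4 }, $worker, \@jobs );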

    "nowarn"
        Normally "iterate" will issue a warning and fall back to single
        process mode on systems on which fork is not available. This option
        suppresses that warning.

    "batch"
        Ordinarily items are passed to the worker one at a time. If you are
        processing a large number of items it may be more efficient to
        process them in batches. Specify the batch size using this option.

        Batching is transparent from the caller's perspective. Internally
        it modifies the iterators and worker (by wrapping them in
        additional closures) so that they pack, process and unpack chunks
        of work.
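
        For example, to hand each worker fifty items at a time (a sketch):

            my $iter = iterate( { batch => 50 }, $worker, \@jobs );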

    "adaptive"
        Extending the idea of batching a number of work items to amortize
        the overhead of passing work to and from parallel workers, you may
        also ask "iterate" to heuristically determine the batch size by
        setting the "adaptive" option to a numeric value.

        The batch size will be computed as

            <number of items seen> / <number of workers> / <adaptive>
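
        For example, with 1000 items seen so far, 10 workers and an
        "adaptive" value of 2, the batch size would be 1000 / 10 / 2 = 50.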

        A larger value for "adaptive" will reduce the rate at which the
        batch size increases. Good values tend to be in the range 1 to 2.

        You can also specify lower and, optionally, upper bounds on the
        batch size by passing a reference to an array containing ( lower
        bound, growth ratio, upper bound ). The upper bound may be omitted.

            my $iter = iterate(
                { adaptive => [ 5, 2, 100 ] },
                $worker, \@stuff );

    "onerror"
        The action to take when an error is thrown in the iterator.
        Possible values are 'die', 'warn' or a reference to a subroutine
        that will be called with the index of the job that threw the
        exception and the value of $@ thrown.

            iterate(
                {
                    onerror => sub {
                        my ( $id, $err ) = @_;
                        $self->log( "Error for index $id: $err" );
                    }
                },
                $worker,
                \@jobs
            );

        The default is 'die'.

  "iterate_as_array"
    As "iterate" but instead of returning an iterator returns an array
    containing the collected output from the iterator. In a scalar context
    returns a reference to the same array.

    For this to work properly the input iterator must return (index, value)
    pairs. This allows the results to be placed in the correct slots in the
    output array. The simplest way to do this is to pass an array reference
    as the input iterator:

        my @output = iterate_as_array( \&some_handler, \@input );

  "iterate_as_hash"
    As "iterate" but instead of returning an iterator returns a hash
    containing the collected output from the iterator. In a scalar context
    returns a reference to the same hash.

    For this to work properly the input iterator must return (key, value)
    pairs. This allows the results to be placed in the correct slots in the
    output hash. The simplest way to do this is to pass a hash reference as
    the input iterator:

        my %output = iterate_as_hash( \&some_handler, \%input );

BUGS AND LIMITATIONS
    No bugs have been reported.

    Please report any bugs or feature requests to
    "bug-parallel-iterator@rt.cpan.org", or through the web interface at
    <http://rt.cpan.org>.

THANKS
    Aristotle Pagaltzis for the END handling suggestion and patch.

AUTHOR
    Andy Armstrong <andy@hexten.net>

COPYRIGHT AND LICENSE
    This software is copyright (c) 2007 by Andy Armstrong.

    This is free software; you can redistribute it and/or modify it under
    the same terms as the Perl 5 programming language system itself.
perl v5.36.0                     2023-01-20              Parallel::Iterator(3)