PARALLEL_ALTERNATIVES(7)           parallel           PARALLEL_ALTERNATIVES(7)

NAME

       parallel_alternatives - Alternatives to GNU parallel

DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES

       There are a lot of programs with some of the functionality of GNU
       parallel. GNU parallel strives to include the best of the functionality
       without sacrificing ease of use.

       parallel has existed since 2002 and as GNU parallel since 2010. Many of
       the alternatives have not had the vitality to survive that long, but
       have come and gone during that time.

       GNU parallel is actively maintained with a new release every month
       since 2010. Most other alternatives are fleeting interests of their
       developers, with irregular releases, and are only maintained for a few
       years.

   SUMMARY TABLE
       The following features are in some of the comparable tools:

       Inputs
        I1. Arguments can be read from stdin
        I2. Arguments can be read from a file
        I3. Arguments can be read from multiple files
        I4. Arguments can be read from the command line
        I5. Arguments can be read from a table
        I6. Arguments can be read from the same file using #! (shebang)
        I7. Line-oriented input as default (quoting of special chars not
       needed)

       Manipulation of input
        M1. Composed command
        M2. Multiple arguments can fill up an execution line
        M3. Arguments can be put anywhere in the execution line
        M4. Multiple arguments can be put anywhere in the execution line
        M5. Arguments can be replaced with context
        M6. Input can be treated as the complete command line

       Outputs
        O1. Grouping output so output from different jobs do not mix
        O2. Send stderr (standard error) to stderr (standard error)
        O3. Send stdout (standard output) to stdout (standard output)
        O4. Order of output can be same as order of input
        O5. Stdout only contains stdout (standard output) from the command
        O6. Stderr only contains stderr (standard error) from the command
        O7. Buffering on disk
        O8. Cleanup of files if killed
        O9. Test if disk runs full during run
       Execution
        E1. Running jobs in parallel
        E2. List running jobs
        E3. Finish running jobs, but do not start new jobs
        E4. Number of running jobs can depend on number of cpus
        E5. Finish running jobs, but do not start new jobs after first failure
        E6. Number of running jobs can be adjusted while running
        E7. Only spawn new jobs if load is less than a limit

       Remote execution
        R1. Jobs can be run on remote computers
        R2. Basefiles can be transferred
        R3. Argument files can be transferred
        R4. Result files can be transferred
        R5. Cleanup of transferred files
        R6. No config files needed
        R7. Do not run more than SSHD's MaxStartups can handle
        R8. Configurable SSH command
        R9. Retry if connection breaks occasionally

       Semaphore
        S1. Possibility to work as a mutex
        S2. Possibility to work as a counting semaphore

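The two semaphore features (S1/S2) are what GNU parallel's sem command provides; a minimal sketch, assuming sem is installed (the ids "mylock" and "pool" are arbitrary example names):

```shell
# S1, mutex: at most one job holding the id "mylock" runs at a time
sem --id mylock -j1 'echo critical section'
sem --id mylock --wait          # block until the job has finished

# S2, counting semaphore: at most 3 jobs under the id "pool" at once
for i in 1 2 3 4 5; do
  sem --id pool -j3 "echo job $i"
done
sem --id pool --wait            # wait for all 5 jobs
```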
       Legend
        - = no
        x = not applicable
        ID = yes

       As not every new version of the programs is tested, the table may be
       outdated. Please file a bug-report if you find errors (See REPORTING
       BUGS).

       parallel: I1 I2 I3 I4 I5 I6 I7 M1 M2 M3 M4 M5 M6 O1 O2 O3 O4 O5 O6 O7
       O8 O9 E1 E2 E3 E4 E5 E6 E7 R1 R2 R3 R4 R5 R6 R7 R8 R9 S1 S2

       find -exec: -  -  -  x  -  x  - -  M2 M3 -  -  -  - -  O2 O3 O4 O5 O6 -
       -  -  -  -  -  - -  -  -  -  -  -  -  -  - x  x

       make -j: -  -  -  -  -  -  - -  -  -  -  -  - O1 O2 O3 -  x  O6 E1 -  -
       -  E5 - -  -  -  -  -  -  -  -  - -  -

       xjobs, prll, dxargs, mdm/middleman, xapply, paexec, ladon, jobflow,
       ClusterSSH: TODO - Please file a bug-report if you know what features
       they support (See REPORTING BUGS).

   DIFFERENCES BETWEEN xargs AND GNU Parallel
       Summary table (see legend above): I1 I2 - - - - - - M2 M3 - - - - O2 O3
       - O5 O6 E1 - - - - - - - - - - - x - - - - -

       xargs offers some of the same possibilities as GNU parallel.

       xargs deals badly with special characters (such as space, \, ' and ").
       To see the problem try this:

         touch important_file
         touch 'not important_file'
         ls not* | xargs rm
         mkdir -p "My brother's 12\" records"
         ls | xargs rmdir
         touch 'c:\windows\system32\clfs.sys'
         echo 'c:\windows\system32\clfs.sys' | xargs ls -l

       You can specify -0, but many input generators are not optimized for
       using NUL as separator but are optimized for newline as separator,
       e.g. awk, ls, echo, tar -v, head (requires using -z), tail (requires
       using -z), sed (requires using -z), perl (-0 and \0 instead of \n),
       locate (requires using -0), find (requires using -print0), grep
       (requires using -z or -Z), and sort (requires using -z).

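The NUL-aware options above can be chained into one fully NUL-safe pipeline; a small sketch using only find, sort and xargs (the temporary directory is illustrative):

```shell
# Filenames with spaces or quotes survive a NUL pipeline intact
dir=$(mktemp -d)
touch "$dir/a file with spaces" "$dir/plain"

# find emits NUL-terminated names (-print0); sort and xargs consume
# them (-z / -0), so no filename can be split or misquoted
find "$dir" -type f -print0 | sort -z | xargs -0 -n1 echo got:
```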
       GNU parallel's newline separation can be emulated with:

         cat | xargs -d "\n" -n1 command

       xargs can run a given number of jobs in parallel, but has no support
       for running number-of-cpu-cores jobs in parallel.

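With xargs the core count has to be supplied explicitly, e.g. via nproc; GNU parallel runs one job per CPU core by default (the same as -j100%). A sketch:

```shell
# xargs: tell it the number of cores by hand
seq 8 | xargs -P "$(nproc)" -n1 echo

# GNU parallel: one job per core is the default, no option needed
# seq 8 | parallel echo
```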
       xargs has no support for grouping the output, therefore output may run
       together, e.g. the first half of a line is from one process and the
       last half of the line is from another process. The example Parallel
       grep cannot be done reliably with xargs because of this. To see this in
       action try:

         parallel perl -e '\$a=\"1\".\"{}\"x10000000\;print\ \$a,\"\\n\"' \
           '>' {} ::: a b c d e f g h
         # Serial = no mixing = the wanted result
         # 'tr -s a-z' squeezes repeating letters into a single letter
         echo a b c d e f g h | xargs -P1 -n1 grep 1 | tr -s a-z
         # Compare to 8 jobs in parallel
         parallel -kP8 -n1 grep 1 ::: a b c d e f g h | tr -s a-z
         echo a b c d e f g h | xargs -P8 -n1 grep 1 | tr -s a-z
         echo a b c d e f g h | xargs -P8 -n1 grep --line-buffered 1 | \
           tr -s a-z

       Or try this:

         slow_seq() {
           echo Count to "$@"
           seq "$@" |
             perl -ne '$|=1; for(split//){ print; select($a,$a,$a,0.100);}'
         }
         export -f slow_seq
         # Serial = no mixing = the wanted result
         seq 8 | xargs -n1 -P1 -I {} bash -c 'slow_seq {}'
         # Compare to 8 jobs in parallel
         seq 8 | parallel -P8 slow_seq {}
         seq 8 | xargs -n1 -P8 -I {} bash -c 'slow_seq {}'

       xargs has no support for keeping the order of the output, therefore if
       running jobs in parallel using xargs the output of the second job
       cannot be postponed until the first job is done.

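GNU parallel keeps the order with -k (--keep-order): jobs still run in parallel, but their output is buffered and released in input order. A minimal sketch, assuming GNU parallel is installed:

```shell
# Jobs finish in arbitrary order, but -k prints them in input order
parallel -k 'sleep 0.$(( RANDOM % 3 )); echo {}' ::: 1 2 3 4
# prints 1 2 3 4 in that order, every time
```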
       xargs has no support for running jobs on remote computers.

       xargs has no support for context replace, so you will have to create
       the arguments yourself.

       If you use a replace string in xargs (-I) you cannot force xargs to
       use more than one argument.

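Context replace in GNU parallel means -X copies the text around {} onto every argument while still packing many arguments into one command line, something xargs -I cannot do. A sketch:

```shell
# -X: one echo invocation; the "pre-" / ".log" context is copied
# around each of the three arguments
parallel -X echo pre-{}.log ::: a b c
# prints: pre-a.log pre-b.log pre-c.log

# xargs -I forces exactly one argument per invocation instead:
printf 'a\nb\nc\n' | xargs -I {} echo pre-{}.log
```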
       Quoting in xargs works like -q in GNU parallel. This means composed
       commands and redirection require using bash -c.

         ls | parallel "wc {} >{}.wc"
         ls | parallel "echo {}; ls {}|wc"

       becomes (assuming you have 8 cores and that none of the filenames
       contain space, " or '):

         ls | xargs -d "\n" -P8 -I {} bash -c "wc {} >{}.wc"
         ls | xargs -d "\n" -P8 -I {} bash -c "echo {}; ls {}|wc"

       https://www.gnu.org/software/findutils/

   DIFFERENCES BETWEEN find -exec AND GNU Parallel
       find -exec offers some of the same possibilities as GNU parallel.

       find -exec only works on files. Processing other input (such as hosts
       or URLs) will require creating these inputs as files. find -exec has no
       support for running commands in parallel.

       https://www.gnu.org/software/findutils/ (Last checked: 2019-01)

   DIFFERENCES BETWEEN make -j AND GNU Parallel
       make -j can run jobs in parallel, but requires a crafted Makefile to do
       this. That results in extra quoting to get filenames containing
       newlines to work correctly.

       make -j computes a dependency graph before running jobs. Jobs run by
       GNU parallel do not depend on each other.

       (Very early versions of GNU parallel were coincidentally implemented
       using make -j.)

       https://www.gnu.org/software/make/ (Last checked: 2019-01)

   DIFFERENCES BETWEEN ppss AND GNU Parallel
       Summary table (see legend above): I1 I2 - - - - I7 M1 - M3 - - M6 O1 -
       - x - - E1 E2 ?E3 E4 - - - R1 R2 R3 R4 - - ?R7 ? ?  - -

       ppss is also a tool for running jobs in parallel.

       The output of ppss is status information and thus not useful as input
       for another command. The output from the jobs is put into files.

       The argument replace string ($ITEM) cannot be changed. Arguments must
       be quoted - thus arguments containing special characters (space '"&!*)
       may cause problems. More than one argument is not supported. Filenames
       containing newlines are not processed correctly. When reading input
       from a file NUL cannot be used as a terminator. ppss needs to read the
       whole input file before starting any jobs.

       Output and status information is stored in ppss_dir and thus requires
       cleanup when completed. If the dir is not removed before running ppss
       again it may cause nothing to happen, as ppss thinks the task is
       already done. GNU parallel will normally not need cleaning up if
       running locally and will only need cleaning up if stopped abnormally
       and running remote (--cleanup may not complete if stopped abnormally).
       The example Parallel grep would require extra postprocessing if written
       using ppss.

       For remote systems ppss requires 3 steps: config, deploy, and start.
       GNU parallel only requires one step.

       EXAMPLES FROM ppss MANUAL

       Here are the examples from ppss's manual page with the equivalent using
       GNU parallel:

       1 ./ppss.sh standalone -d /path/to/files -c 'gzip '

       1 find /path/to/files -type f | parallel gzip

       2 ./ppss.sh standalone -d /path/to/files -c 'cp "$ITEM"
       /destination/dir '

       2 find /path/to/files -type f | parallel cp {} /destination/dir

       3 ./ppss.sh standalone -f list-of-urls.txt -c 'wget -q '

       3 parallel -a list-of-urls.txt wget -q

       4 ./ppss.sh standalone -f list-of-urls.txt -c 'wget -q "$ITEM"'

       4 parallel -a list-of-urls.txt wget -q {}

       5 ./ppss config -C config.cfg -c 'encode.sh ' -d /source/dir -m
       192.168.1.100 -u ppss -k ppss-key.key -S ./encode.sh -n nodes.txt -o
       /some/output/dir --upload --download ; ./ppss deploy -C config.cfg ;
       ./ppss start -C config

       5 # parallel does not use configs. If you want a different username put
       it in nodes.txt: user@hostname

       5 find source/dir -type f | parallel --sshloginfile nodes.txt --trc
       {.}.mp3 lame -a {} -o {.}.mp3 --preset standard --quiet

       6 ./ppss stop -C config.cfg

       6 killall -TERM parallel

       7 ./ppss pause -C config.cfg

       7 Press: CTRL-Z or killall -SIGTSTP parallel

       8 ./ppss continue -C config.cfg

       8 Enter: fg or killall -SIGCONT parallel

       9 ./ppss.sh status -C config.cfg

       9 killall -SIGUSR2 parallel

       https://github.com/louwrentius/PPSS

   DIFFERENCES BETWEEN pexec AND GNU Parallel
       Summary table (see legend above): I1 I2 - I4 I5 - - M1 - M3 - - M6 O1
       O2 O3 - O5 O6 E1 - - E4 - E6 - R1 - - - - R6 - - - S1 -

       pexec is also a tool for running jobs in parallel.

       EXAMPLES FROM pexec MANUAL

       Here are the examples from pexec's info page with the equivalent using
       GNU parallel:

       1 pexec -o sqrt-%s.dat -p "$(seq 10)" -e NUM -n 4 -c -- \
         'echo "scale=10000;sqrt($NUM)" | bc'

       1 seq 10 | parallel -j4 'echo "scale=10000;sqrt({})" | bc >
       sqrt-{}.dat'

       2 pexec -p "$(ls myfiles*.ext)" -i %s -o %s.sort -- sort

       2 ls myfiles*.ext | parallel sort {} ">{}.sort"

       3 pexec -f image.list -n auto -e B -u star.log -c -- \
         'fistar $B.fits -f 100 -F id,x,y,flux -o $B.star'

       3 parallel -a image.list \
         'fistar {}.fits -f 100 -F id,x,y,flux -o {}.star' 2>star.log

       4 pexec -r *.png -e IMG -c -o - -- \
         'convert $IMG ${IMG%.png}.jpeg ; "echo $IMG: done"'

       4 ls *.png | parallel 'convert {} {.}.jpeg; echo {}: done'

       5 pexec -r *.png -i %s -o %s.jpg -c 'pngtopnm | pnmtojpeg'

       5 ls *.png | parallel 'pngtopnm < {} | pnmtojpeg > {}.jpg'

       6 for p in *.png ; do echo ${p%.png} ; done | \
         pexec -f - -i %s.png -o %s.jpg -c 'pngtopnm | pnmtojpeg'

       6 ls *.png | parallel 'pngtopnm < {} | pnmtojpeg > {.}.jpg'

       7 LIST=$(for p in *.png ; do echo ${p%.png} ; done)
         pexec -r $LIST -i %s.png -o %s.jpg -c 'pngtopnm | pnmtojpeg'

       7 ls *.png | parallel 'pngtopnm < {} | pnmtojpeg > {.}.jpg'

       8 pexec -n 8 -r *.jpg -y unix -e IMG -c \
         'pexec -j -m blockread -d $IMG | \
         jpegtopnm | pnmscale 0.5 | pnmtojpeg | \
         pexec -j -m blockwrite -s th_$IMG'

       8 Combining GNU parallel and GNU sem:

       8 ls *jpg | parallel -j8 'sem --id blockread cat {} | jpegtopnm |' \
         'pnmscale 0.5 | pnmtojpeg | sem --id blockwrite cat > th_{}'

       8 If reading and writing is done to the same disk, this may be faster,
       as only one process will be either reading or writing:

       8 ls *jpg | parallel -j8 'sem --id diskio cat {} | jpegtopnm |' \
         'pnmscale 0.5 | pnmtojpeg | sem --id diskio cat > th_{}'

       https://www.gnu.org/software/pexec/

   DIFFERENCES BETWEEN xjobs AND GNU Parallel
       xjobs is also a tool for running jobs in parallel. It only supports
       running jobs on your local computer.

       xjobs deals badly with special characters just like xargs. See the
       section DIFFERENCES BETWEEN xargs AND GNU Parallel.

       Here are the examples from xjobs's man page with the equivalent using
       GNU parallel:

       1 ls -1 *.zip | xjobs unzip

       1 ls *.zip | parallel unzip

       2 ls -1 *.zip | xjobs -n unzip

       2 ls *.zip | parallel unzip >/dev/null

       3 find . -name '*.bak' | xjobs gzip

       3 find . -name '*.bak' | parallel gzip

       4 ls -1 *.jar | sed 's/\(.*\)/\1 > \1.idx/' | xjobs jar tf

       4 ls *.jar | parallel jar tf {} '>' {}.idx

       5 xjobs -s script

       5 cat script | parallel

       6 mkfifo /var/run/my_named_pipe; xjobs -s /var/run/my_named_pipe & echo
       unzip 1.zip >> /var/run/my_named_pipe; echo tar cf /backup/myhome.tar
       /home/me >> /var/run/my_named_pipe

       6 mkfifo /var/run/my_named_pipe; cat /var/run/my_named_pipe | parallel
       & echo unzip 1.zip >> /var/run/my_named_pipe; echo tar cf
       /backup/myhome.tar /home/me >> /var/run/my_named_pipe

       http://www.maier-komor.de/xjobs.html (Last checked: 2019-01)

   DIFFERENCES BETWEEN prll AND GNU Parallel
       prll is also a tool for running jobs in parallel. It does not support
       running jobs on remote computers.

       prll encourages using BASH aliases and BASH functions instead of
       scripts. GNU parallel supports scripts directly, functions if they are
       exported using export -f, and aliases if using env_parallel.

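A sketch of the export -f route (bash-specific; the function name slow_echo is an arbitrary example):

```shell
# Define a bash function and export it so the shells started by
# GNU parallel can see it
slow_echo() { sleep 0.1; echo "got $1"; }
export -f slow_echo

parallel -k slow_echo ::: a b c
# with -k this prints: got a, got b, got c, one per line

# Aliases cannot be exported this way; use env_parallel for those
```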
       prll generates a lot of status information on stderr (standard error),
       which makes it harder to use the stderr (standard error) output of the
       job directly as input for another program.

       Here is the example from prll's man page with the equivalent using GNU
       parallel:

         prll -s 'mogrify -flip $1' *.jpg
         parallel mogrify -flip ::: *.jpg

       https://github.com/exzombie/prll (Last checked: 2019-01)

   DIFFERENCES BETWEEN dxargs AND GNU Parallel
       dxargs is also a tool for running jobs in parallel.

       dxargs does not deal well with more simultaneous jobs than SSHD's
       MaxStartups. dxargs is built only for running jobs remotely, but does
       not support transferring of files.

       https://web.archive.org/web/20120518070250/http://www.semicomplete.com/blog/geekery/distributed-xargs.html
       (Last checked: 2019-01)

   DIFFERENCES BETWEEN mdm/middleman AND GNU Parallel
       middleman (mdm) is also a tool for running jobs in parallel.

       Here are the shellscripts of
       https://web.archive.org/web/20110728064735/http://mdm.berlios.de/usage.html
       ported to GNU parallel:

         seq 19 | parallel buffon -o - | sort -n > result
         cat files | parallel cmd
         find dir -execdir sem cmd {} \;

       https://github.com/cklin/mdm (Last checked: 2019-01)

   DIFFERENCES BETWEEN xapply AND GNU Parallel
       xapply can run jobs in parallel on the local computer.

       Here are the examples from xapply's man page with the equivalent using
       GNU parallel:

       1 xapply '(cd %1 && make all)' */

       1 parallel 'cd {} && make all' ::: */

       2 xapply -f 'diff %1 ../version5/%1' manifest | more

       2 parallel diff {} ../version5/{} < manifest | more

       3 xapply -p/dev/null -f 'diff %1 %2' manifest1 checklist1

       3 parallel --link diff {1} {2} :::: manifest1 checklist1

       4 xapply 'indent' *.c

       4 parallel indent ::: *.c

       5 find ~ksb/bin -type f ! -perm -111 -print | xapply -f -v 'chmod a+x' -

       5 find ~ksb/bin -type f ! -perm -111 -print | parallel -v chmod a+x

       6 find */ -... | fmt 960 1024 | xapply -f -i /dev/tty 'vi' -

       6 sh <(find */ -... | parallel -s 1024 echo vi)

       6 find */ -... | parallel -s 1024 -Xuj1 vi

       7 find ... | xapply -f -5 -i /dev/tty 'vi' - - - - -

       7 sh <(find ... | parallel -n5 echo vi)

       7 find ... | parallel -n5 -uj1 vi

       8 xapply -fn "" /etc/passwd

       8 parallel -k echo < /etc/passwd

       9 tr ':' '\012' < /etc/passwd | xapply -7 -nf 'chown %1 %6' - - - - - - -

       9 tr ':' '\012' < /etc/passwd | parallel -N7 chown {1} {6}

       10 xapply '[ -d %1/RCS ] || echo %1' */

       10 parallel '[ -d {}/RCS ] || echo {}' ::: */

       11 xapply -f '[ -f %1 ] && echo %1' List | ...

       11 parallel '[ -f {} ] && echo {}' < List | ...

       https://web.archive.org/web/20160702211113/http://carrera.databits.net/~ksb/msrc/local/bin/xapply/xapply.html

   DIFFERENCES BETWEEN AIX apply AND GNU Parallel
       apply can build command lines based on a template and arguments - very
       much like GNU parallel. apply does not run jobs in parallel. apply does
       not use an argument separator (like :::); instead the template must be
       the first argument.

       Here are the examples from IBM's Knowledge Center and the corresponding
       commands using GNU parallel:

       1. To obtain results similar to those of the ls command, enter:

         apply echo *
         parallel echo ::: *

       2. To compare the file named a1 to the file named b1, and the file
       named a2 to the file named b2, enter:

         apply -2 cmp a1 b1 a2 b2
         parallel -N2 cmp ::: a1 b1 a2 b2

       3. To run the who command five times, enter:

         apply -0 who 1 2 3 4 5
         parallel -N0 who ::: 1 2 3 4 5

       4. To link all files in the current directory to the directory
       /usr/joe, enter:

         apply 'ln %1 /usr/joe' *
         parallel ln {} /usr/joe ::: *

       https://www-01.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.cmds1/apply.htm
       (Last checked: 2019-01)

   DIFFERENCES BETWEEN paexec AND GNU Parallel
       paexec can run jobs in parallel on both the local and remote computers.

       paexec requires commands to print a blank line as the last output. This
       means you will have to write a wrapper for most programs.

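Such a wrapper only has to run the real command and then print the empty line that marks end-of-task. A hypothetical sketch (the name paexec_wrap is made up, and the exact protocol details are an assumption based on the description above):

```shell
#!/bin/sh
# paexec_wrap (hypothetical): read one task per line from stdin, run
# the wrapped command on it, then print the blank line that signals
# "task done" to paexec
while IFS= read -r task; do
  "$@" "$task"
  echo                # blank line = end-of-result marker
done
```

Saved as an executable file, something like this could be what is passed to paexec with -c.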
       paexec has a job dependency facility, so a job can depend on another
       job being executed successfully. Sort of a poor-man's make.

       Here are the examples from paexec's example catalog with the equivalent
       using GNU parallel:

       1_div_X_run:
          ../../paexec -s -l -c "`pwd`/1_div_X_cmd" -n +1 <<EOF [...]
          parallel echo {} '|' `pwd`/1_div_X_cmd <<EOF [...]

       all_substr_run:
          ../../paexec -lp -c "`pwd`/all_substr_cmd" -n +3 <<EOF [...]
          parallel echo {} '|' `pwd`/all_substr_cmd <<EOF [...]

       cc_wrapper_run:
          ../../paexec -c "env CC=gcc CFLAGS=-O2 `pwd`/cc_wrapper_cmd" \
                     -n 'host1 host2' \
                     -t '/usr/bin/ssh -x' <<EOF [...]
          parallel echo {} '|' "env CC=gcc CFLAGS=-O2 `pwd`/cc_wrapper_cmd" \
                     -S host1,host2 <<EOF [...]
          # This is not exactly the same, but avoids the wrapper
          parallel gcc -O2 -c -o {.}.o {} \
                     -S host1,host2 <<EOF [...]

       toupper_run:
          ../../paexec -lp -c "`pwd`/toupper_cmd" -n +10 <<EOF [...]
          parallel echo {} '|' ./toupper_cmd <<EOF [...]
          # Without the wrapper:
          parallel echo {} '| awk {print\ toupper\(\$0\)}' <<EOF [...]

       https://github.com/cheusov/paexec

   DIFFERENCES BETWEEN map(sitaramc) AND GNU Parallel
       map sees it as a feature to have fewer features, and in doing so it
       also handles corner cases incorrectly. A lot of GNU parallel's code is
       there to handle corner cases correctly on every platform, so you will
       not get a nasty surprise if a user, for example, saves a file called:
       My brother's 12" records.txt

       map's example showing how to deal with special characters fails on
       special characters:

         echo "The Cure" > My\ brother\'s\ 12\"\ records

         ls | \
           map 'echo -n `gzip < "%" | wc -c`; echo -n '*100/'; wc -c < "%"' |
           bc

       It works with GNU parallel:

         ls | \
           parallel \
             'echo -n `gzip < {} | wc -c`; echo -n '*100/'; wc -c < {}' | bc

       And you can even get the file name prepended:

         ls | \
           parallel --tag \
             '(echo -n `gzip < {} | wc -c`'*100/'; wc -c < {}) | bc'

       map has no support for grouping. So this gives the wrong results
       without any warnings:

         parallel perl -e '\$a=\"1{}\"x10000000\;print\ \$a,\"\\n\"' '>' {} \
           ::: a b c d e f
         ls -l a b c d e f
         parallel -kP4 -n1 grep 1 > out.par ::: a b c d e f
         map -p 4 'grep 1' a b c d e f > out.map-unbuf
         map -p 4 'grep --line-buffered 1' a b c d e f > out.map-linebuf
         map -p 1 'grep --line-buffered 1' a b c d e f > out.map-serial
         ls -l out*
         md5sum out*

       The documentation shows a workaround, but not only does that mix stdout
       (standard output) with stderr (standard error), it also fails
       completely for certain jobs (and may even be considered less readable):

         parallel echo -n {} ::: 1 2 3

         map -p 4 'echo -n % 2>&1 | sed -e "s/^/$$:/"' 1 2 3 | \
           sort | cut -f2- -d:

       map's replacement strings (% %D %B %E) can be simulated in GNU parallel
       by putting this in ~/.parallel/config:

         --rpl '%'
         --rpl '%D $_=Q(::dirname($_));'
         --rpl '%B s:.*/::;s:\.[^/.]+$::;'
         --rpl '%E s:.*\.::'

       map does not have an argument separator on the command line, but uses
       the first argument as command. This makes quoting harder, which again
       may affect readability. Compare:

         map -p 2 'perl -ne '"'"'/^\S+\s+\S+$/ and print $ARGV,"\n"'"'" *

         parallel -q perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' ::: *

       map can do multiple arguments with context replace, but not without
       context replace:

         parallel --xargs echo 'BEGIN{'{}'}END' ::: 1 2 3

         map "echo 'BEGIN{'%'}END'" 1 2 3

       map requires Perl v5.10.0, making it harder to use on old systems.

       map has no way of using % in the command (GNU parallel has -I to
       specify another replacement string than {}).

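With -I the replacement string can be renamed, leaving {} (or any other token) free to be used literally in the command. A sketch:

```shell
# Use // as the replacement string so {} can appear literally
parallel -k -I // echo 'literal {} and replaced //' ::: a b
# prints:
#   literal {} and replaced a
#   literal {} and replaced b
```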
       By design map is option-incompatible with xargs. It does not have
       remote job execution, a structured way of saving results, multiple
       input sources, a progress indicator, a configurable record delimiter
       (only a field delimiter), logging of jobs run with the possibility to
       resume, keeping the output in the same order as the input, --pipe
       processing, or dynamic timeouts.

       https://github.com/sitaramc/map

   DIFFERENCES BETWEEN ladon AND GNU Parallel
       ladon can run multiple jobs on files in parallel.

       ladon only works on files and the only way to specify files is using a
       quoted glob string (such as \*.jpg). It is not possible to list the
       files manually.

       As replacement strings it uses FULLPATH DIRNAME BASENAME EXT RELDIR
       RELPATH.

       These can be simulated using GNU parallel by putting this in
       ~/.parallel/config:

           --rpl 'FULLPATH $_=Q($_);chomp($_=qx{readlink -f $_});'
           --rpl 'DIRNAME $_=Q(::dirname($_));chomp($_=qx{readlink -f $_});'
           --rpl 'BASENAME s:.*/::;s:\.[^/.]+$::;'
           --rpl 'EXT s:.*\.::'
           --rpl 'RELDIR $_=Q($_);chomp(($_,$c)=qx{readlink -f $_;pwd});
                  s:\Q$c/\E::;$_=::dirname($_);'
           --rpl 'RELPATH $_=Q($_);chomp(($_,$c)=qx{readlink -f $_;pwd});
                  s:\Q$c/\E::;'

       ladon deals badly with filenames containing " and newline, and it fails
       for output larger than 200k:

           ladon '*' -- seq 36000 | wc

       EXAMPLES FROM ladon MANUAL

       It is assumed that the '--rpl's above are put in ~/.parallel/config and
       that it is run under a shell that supports '**' globbing (such as zsh):

       1 ladon "**/*.txt" -- echo RELPATH

       1 parallel echo RELPATH ::: **/*.txt

       2 ladon "~/Documents/**/*.pdf" -- shasum FULLPATH >hashes.txt

       2 parallel shasum FULLPATH ::: ~/Documents/**/*.pdf >hashes.txt

       3 ladon -m thumbs/RELDIR "**/*.jpg" -- convert FULLPATH -thumbnail
       100x100^ -gravity center -extent 100x100 thumbs/RELPATH

       3 parallel mkdir -p thumbs/RELDIR\; convert FULLPATH -thumbnail
       100x100^ -gravity center -extent 100x100 thumbs/RELPATH ::: **/*.jpg

       4 ladon "~/Music/*.wav" -- lame -V 2 FULLPATH DIRNAME/BASENAME.mp3

       4 parallel lame -V 2 FULLPATH DIRNAME/BASENAME.mp3 ::: ~/Music/*.wav

       https://github.com/danielgtaylor/ladon (Last checked: 2019-01)

   DIFFERENCES BETWEEN jobflow AND GNU Parallel
       jobflow can run multiple jobs in parallel.

       Just like with xargs, output from jobflow jobs running in parallel
       mixes together by default. jobflow can buffer into files (placed in
       /run/shm), but these are not cleaned up if jobflow dies unexpectedly
       (e.g. by Ctrl-C). If the total output is big (on the order of
       RAM+swap) it can cause the system to slow to a crawl and eventually
       run out of memory.

       jobflow gives no error if the command is unknown, and like xargs
       redirection and composed commands require wrapping with bash -c.

       Input lines can be at most 4096 bytes. You can have at most 16 {}'s in
       the command template. More than that either crashes the program or
       simply does not execute the command.

       jobflow has no equivalent of --pipe or --sshlogin.

       jobflow makes it possible to set resource limits on the running jobs.
       This can be emulated by GNU parallel using bash's ulimit:

         jobflow -limits=mem=100M,cpu=3,fsize=20M,nofiles=300 myjob

         parallel 'ulimit -v 102400 -t 3 -f 204800 -n 300 myjob'

       EXAMPLES FROM jobflow README

       1 cat things.list | jobflow -threads=8 -exec ./mytask {}

       1 cat things.list | parallel -j8 ./mytask {}

       2 seq 100 | jobflow -threads=100 -exec echo {}

       2 seq 100 | parallel -j100 echo {}

       3 cat urls.txt | jobflow -threads=32 -exec wget {}

       3 cat urls.txt | parallel -j32 wget {}

       4 find . -name '*.bmp' | jobflow -threads=8 -exec bmp2jpeg {.}.bmp
       {.}.jpg

       4 find . -name '*.bmp' | parallel -j8 bmp2jpeg {.}.bmp {.}.jpg

       https://github.com/rofl0r/jobflow

   DIFFERENCES BETWEEN gargs AND GNU Parallel
       gargs can run multiple jobs in parallel.

       Older versions cache output in memory. This causes it to be extremely
       slow when the output is larger than the physical RAM, and can cause the
       system to run out of memory.

       See more details on this in man parallel_design.

       Newer versions cache output in files, but leave the files in $TMPDIR
       if gargs is killed.

       Output to stderr (standard error) is changed if the command fails.

       Here are the two examples from the gargs website:

       1 seq 12 -1 1 | gargs -p 4 -n 3 "sleep {0}; echo {1} {2}"

       1 seq 12 -1 1 | parallel -P 4 -n 3 "sleep {1}; echo {2} {3}"

       2 cat t.txt | gargs --sep "\s+" -p 2 "echo '{0}:{1}-{2}' full-line:
       \'{}\'"

       2 cat t.txt | parallel --colsep "\\s+" -P 2 "echo '{1}:{2}-{3}' full-
       line: \'{}\'"

       https://github.com/brentp/gargs

782   DIFFERENCES BETWEEN orgalorg AND GNU Parallel
783       orgalorg can run the same job on multiple machines. This is related to
784       --onall and --nonall.
785
786       orgalorg supports entering the SSH password - provided it is the same
787       for all servers. GNU parallel advocates using ssh-agent instead, but it
788       is possible to emulate orgalorg's behavior by setting SSHPASS and by
789       using --ssh "sshpass ssh".
790
791       To make the emulation easier, make a simple alias:
792
793         alias par_emul="parallel -j0 --ssh 'sshpass ssh' --nonall --tag --lb"
794
795       If you want to supply a password run:
796
797         SSHPASS=`ssh-askpass`
798
799       or set the password directly:
800
801         SSHPASS=P4$$w0rd!
802
803       If the above is set up you can then do:
804
805         orgalorg -o frontend1 -o frontend2 -p -C uptime
806         par_emul -S frontend1 -S frontend2 uptime
807
808         orgalorg -o frontend1 -o frontend2 -p -C top -bid 1
809         par_emul -S frontend1 -S frontend2 top -bid 1
810
811         orgalorg -o frontend1 -o frontend2 -p -er /tmp -n \
812           'md5sum /tmp/bigfile' -S bigfile
813         par_emul -S frontend1 -S frontend2 --basefile bigfile \
814           --workdir /tmp md5sum /tmp/bigfile
815
816       orgalorg has a progress indicator for the transfer of a file. GNU
817       parallel does not.
818
819       https://github.com/reconquest/orgalorg
820
821   DIFFERENCES BETWEEN Rust parallel AND GNU Parallel
822       Rust parallel focuses on speed. It is almost as fast as xargs. It
823       implements a few features from GNU parallel, but lacks many functions.
824       All these fail:
825
826         # Read arguments from file
827         parallel -a file echo
828         # Changing the delimiter
829         parallel -d _ echo ::: a_b_c_
830
831       These do something different from GNU parallel:
832
833         # -q to protect quoted $ and space
834         parallel -q perl -e '$a=shift; print "$a"x10000000' ::: a b c
835         # Generation of combination of inputs
836         parallel echo {1} {2} ::: red green blue ::: S M L XL XXL
837         # {= perl expression =} replacement string
838         parallel echo '{= s/new/old/ =}' ::: my.new your.new
839         # --pipe
840         seq 100000 | parallel --pipe wc
841         # linked arguments
842         parallel echo ::: S M L :::+ sml med lrg ::: R G B :::+ red grn blu
843         # Run different shell dialects
844         zsh -c 'parallel echo \={} ::: zsh && true'
845         csh -c 'parallel echo \$\{\} ::: shell && true'
846         bash -c 'parallel echo \$\({}\) ::: pwd && true'
847         # Rust parallel does not start before the last argument is read
848         (seq 10; sleep 5; echo 2) | time parallel -j2 'sleep 2; echo'
849         tail -f /var/log/syslog | parallel echo
850
851       Most of the examples from the book GNU Parallel 2018 do not work, thus
852       Rust parallel is not close to being a compatible replacement.
853
854       Rust parallel has no remote facilities.
855
856       It uses /tmp/parallel for tmp files and does not clean up if terminated
857       abruptly. If another user on the system uses Rust parallel, then
858       /tmp/parallel will have the wrong permissions and Rust parallel will
859       fail. A malicious user can set up the right permissions, symlink the
860       output file to one of the user's files, and the next time the user
861       runs Rust parallel it will overwrite this file.
862
863         attacker$ mkdir /tmp/parallel
864         attacker$ chmod a+rwX /tmp/parallel
865         # Symlink to the file the attacker wants to zero out
866         attacker$ ln -s ~victim/.important-file /tmp/parallel/stderr_1
867         victim$ seq 1000 | parallel echo
868         # This file is now overwritten with stderr from 'echo'
869         victim$ cat ~victim/.important-file
870
871       If /tmp/parallel runs full during the run, Rust parallel does not
872       report this, but finishes with success - thereby risking data loss.
873
874       https://github.com/mmstick/parallel
875
876   DIFFERENCES BETWEEN Rush AND GNU Parallel
877       rush (https://github.com/shenwei356/rush) is written in Go and based on
878       gargs.
879
880       Just like GNU parallel, rush buffers output in temporary files. But
881       unlike GNU parallel, rush does not clean up if the process dies
882       abnormally.
882
883       rush has some string manipulations that can be emulated by putting this
884       into ~/.parallel/config (/ is used instead of %, and % is used instead
885       of ^ as that is closer to bash's ${var%postfix}):
886
887         --rpl '{:} s:(\.[^/]+)*$::'
888         --rpl '{:%([^}]+?)} s:$$1(\.[^/]+)*$::'
889         --rpl '{/:%([^}]*?)} s:.*/(.*)$$1(\.[^/]+)*$:$1:'
890         --rpl '{/:} s:(.*/)?([^/.]+)(\.[^/]+)*$:$2:'
891         --rpl '{@(.*?)} /$$1/ and $_=$1;'
892
893       Here are the examples from rush's website with the equivalent command
894       in GNU parallel.
895
896       EXAMPLES
897
898       1. Simple run, quoting is not necessary
899
900         $ seq 1 3 | rush echo {}
901
902         $ seq 1 3 | parallel echo {}
903
904       2. Read data from file (`-i`)
905
906         $ rush echo {} -i data1.txt -i data2.txt
907
908         $ cat data1.txt data2.txt | parallel echo {}
909
910       3. Keep output order (`-k`)
911
912         $ seq 1 3 | rush 'echo {}' -k
913
914         $ seq 1 3 | parallel -k echo {}
915
916       4. Timeout (`-t`)
917
918         $ time seq 1 | rush 'sleep 2; echo {}' -t 1
919
920         $ time seq 1 | parallel --timeout 1 'sleep 2; echo {}'
921
922       5. Retry (`-r`)
923
924         $ seq 1 | rush 'python unexisted_script.py' -r 1
925
926         $ seq 1 | parallel --retries 2 'python unexisted_script.py'
927
928       Use -u to see it is really run twice:
929
930         $ seq 1 | parallel -u --retries 2 'python unexisted_script.py'
931
932       6. Dirname (`{/}`) and basename (`{%}`) and remove custom suffix
933       (`{^suffix}`)
934
935         $ echo dir/file_1.txt.gz | rush 'echo {/} {%} {^_1.txt.gz}'
936
937         $ echo dir/file_1.txt.gz |
938             parallel --plus echo {//} {/} {%_1.txt.gz}
939
940       7. Get basename, and remove last (`{.}`) or any (`{:}`) extension
941
942         $ echo dir.d/file.txt.gz | rush 'echo {.} {:} {%.} {%:}'
943
944         $ echo dir.d/file.txt.gz | parallel 'echo {.} {:} {/.} {/:}'
945
946       8. Job ID, combine fields index and other replacement strings
947
948         $ echo 12 file.txt dir/s_1.fq.gz |
949             rush 'echo job {#}: {2} {2.} {3%:^_1}'
950
951         $ echo 12 file.txt dir/s_1.fq.gz |
952             parallel --colsep ' ' 'echo job {#}: {2} {2.} {3/:%_1}'
953
954       9. Capture submatch using regular expression (`{@regexp}`)
955
956         $ echo read_1.fq.gz | rush 'echo {@(.+)_\d}'
957
958         $ echo read_1.fq.gz | parallel 'echo {@(.+)_\d}'
959
960       10. Custom field delimiter (`-d`)
961
962         $ echo a=b=c | rush 'echo {1} {2} {3}' -d =
963
964         $ echo a=b=c | parallel -d = echo {1} {2} {3}
965
966       11. Send multi-lines to every command (`-n`)
967
968         $ seq 5 | rush -n 2 -k 'echo "{}"; echo'
969
970         $ seq 5 |
971             parallel -n 2 -k \
972               'echo {=-1 $_=join"\n",@arg[1..$#arg] =}; echo'
973
974         $ seq 5 | rush -n 2 -k 'echo "{}"; echo' -J ' '
975
976         $ seq 5 | parallel -n 2 -k 'echo {}; echo'
977
978       12. Custom record delimiter (`-D`), note that empty records are not
979       used.
980
981         $ echo a b c d | rush -D " " -k 'echo {}'
982
983         $ echo a b c d | parallel -d " " -k 'echo {}'
984
985         $ echo abcd | rush -D "" -k 'echo {}'
986
987         Cannot be done by GNU Parallel
988
989         $ cat fasta.fa
990         >seq1
991         tag
992         >seq2
993         cat
994         gat
995         >seq3
996         attac
997         a
998         cat
999
1000         $ cat fasta.fa | rush -D ">" \
1001             'echo FASTA record {#}: name: {1} sequence: {2}' -k -d "\n"
1002         # rush fails to join the multiline sequences
1003
1004         $ cat fasta.fa | (read -n1 ignore_first_char;
1005             parallel -d '>' --colsep '\n' echo FASTA record {#}: \
1006               name: {1} sequence: '{=2 $_=join"",@arg[2..$#arg]=}'
1007           )
1008
1009       13. Assign value to variable, like `awk -v` (`-v`)
1010
1011         $ seq 1 |
1012             rush 'echo Hello, {fname} {lname}!' -v fname=Wei -v lname=Shen
1013
1014         $ seq 1 |
1015             parallel -N0 \
1016               'fname=Wei; lname=Shen; echo Hello, ${fname} ${lname}!'
1017
1018         $ for var in a b; do \
1019         $   seq 1 3 | rush -k -v var=$var 'echo var: {var}, data: {}'; \
1020         $ done
1021
1022       In GNU parallel you would typically do:
1023
1024         $ seq 1 3 | parallel -k echo var: {1}, data: {2} ::: a b :::: -
1025
1026       If you really want the var:
1027
1028         $ seq 1 3 |
1029             parallel -k var={1} ';echo var: $var, data: {}' ::: a b :::: -
1030
1031       If you really want the for-loop:
1032
1033         $ for var in a b; do
1034         >   export var;
1035         >   seq 1 3 | parallel -k 'echo var: $var, data: {}';
1036         > done
1037
1038       Unlike rush, this also works if the value is complex, like:
1039
1040         My brother's 12" records
1041
1042       14. Preset variable (`-v`), avoid repeatedly writing verbose
1043       replacement strings
1044
1045         # naive way
1046         $ echo read_1.fq.gz | rush 'echo {:^_1} {:^_1}_2.fq.gz'
1047
1048         $ echo read_1.fq.gz | parallel 'echo {:%_1} {:%_1}_2.fq.gz'
1049
1050         # macro + removing suffix
1051         $ echo read_1.fq.gz |
1052             rush -v p='{:^_1}' 'echo {p} {p}_2.fq.gz'
1053
1054         $ echo read_1.fq.gz |
1055             parallel 'p={:%_1}; echo $p ${p}_2.fq.gz'
1056
1057         # macro + regular expression
1058         $ echo read_1.fq.gz | rush -v p='{@(.+?)_\d}' 'echo {p} {p}_2.fq.gz'
1059
1060         $ echo read_1.fq.gz | parallel 'p={@(.+?)_\d}; echo $p ${p}_2.fq.gz'
1061
1062       Unlike rush, GNU parallel works with complex values:
1063
1064         echo "My brother's 12\"read_1.fq.gz" |
1065           parallel 'p={@(.+?)_\d}; echo $p ${p}_2.fq.gz'
1066
1067       15. Interrupt jobs by `Ctrl-C`, rush will stop unfinished commands and
1068       exit.
1069
1070         $ seq 1 20 | rush 'sleep 1; echo {}'
1071         ^C
1072
1073         $ seq 1 20 | parallel 'sleep 1; echo {}'
1074         ^C
1075
1076       16. Continue/resume jobs (`-c`). When some jobs fail (due to execution
1077       failure, timeout, or the user pressing `Ctrl-C`), switch the
1078       `-c/--continue` flag on and run again, so that `rush` can save the
1079       successful commands and skip them in the next run.
1080
1081         $ seq 1 3 | rush 'sleep {}; echo {}' -t 3 -c
1082         $ cat successful_cmds.rush
1083         $ seq 1 3 | rush 'sleep {}; echo {}' -t 3 -c
1084
1085         $ seq 1 3 | parallel --joblog mylog --timeout 2 \
1086             'sleep {}; echo {}'
1087         $ cat mylog
1088         $ seq 1 3 | parallel --joblog mylog --retry-failed \
1089             'sleep {}; echo {}'
1090
1091       Multi-line jobs:
1092
1093         $ seq 1 3 | rush 'sleep {}; echo {}; \
1094           echo finish {}' -t 3 -c -C finished.rush
1095         $ cat finished.rush
1096         $ seq 1 3 | rush 'sleep {}; echo {}; \
1097           echo finish {}' -t 3 -c -C finished.rush
1098
1099         $ seq 1 3 |
1100             parallel --joblog mylog --timeout 2 'sleep {}; echo {}; \
1101           echo finish {}'
1102         $ cat mylog
1103         $ seq 1 3 |
1104             parallel --joblog mylog --retry-failed 'sleep {}; echo {}; \
1105               echo finish {}'
1106
1107       17. A comprehensive example: downloading 1K+ pages given by three URL
1108       list files using `phantomjs save_page.js` (some page contents are
1109       dynamically generated by JavaScript, so `wget` does not work). Here the
1110       max number of jobs (`-j`) is set to `20`, each job has a max running
1111       time (`-t`) of `60` seconds and `3` retry chances (`-r`). The continue
1112       flag `-c` is also switched on, so unfinished jobs can be resumed.
1113       Luckily, it was accomplished in one run :)
1114
1115         $ for f in $(seq 2014 2016); do \
1116         $    /bin/rm -rf $f; mkdir -p $f; \
1117         $    cat $f.html.txt | rush -v d=$f -d = \
1118                'phantomjs save_page.js "{}" > {d}/{3}.html' \
1119                -j 20 -t 60 -r 3 -c; \
1120         $ done
1121
1122       GNU parallel can append to an existing joblog with '+':
1123
1124         $ rm mylog
1125         $ for f in $(seq 2014 2016); do
1126             /bin/rm -rf $f; mkdir -p $f;
1127             cat $f.html.txt |
1128               parallel -j20 --timeout 60 --retries 4 --joblog +mylog \
1129                 --colsep = \
1130                 phantomjs save_page.js {1}={2}={3} '>' $f/{3}.html
1131           done
1132
1133       18. A bioinformatics example: mapping with `bwa`, and processing result
1134       with `samtools`:
1135
1136         $ ref=ref/xxx.fa
1137         $ threads=25
1138         $ ls -d raw.cluster.clean.mapping/* \
1139           | rush -v ref=$ref -v j=$threads -v p='{}/{%}' \
1140               'bwa mem -t {j} -M -a {ref} {p}_1.fq.gz {p}_2.fq.gz >{p}.sam;\
1141               samtools view -bS {p}.sam > {p}.bam; \
1142               samtools sort -T {p}.tmp -@ {j} {p}.bam -o {p}.sorted.bam; \
1143               samtools index {p}.sorted.bam; \
1144               samtools flagstat {p}.sorted.bam > {p}.sorted.bam.flagstat; \
1145               /bin/rm {p}.bam {p}.sam;' \
1146               -j 2 --verbose -c -C mapping.rush
1147
1148       GNU parallel would use a function:
1149
1150         $ ref=ref/xxx.fa
1151         $ export ref
1152         $ thr=25
1153         $ export thr
1154         $ bwa_sam() {
1155             p="$1"
1156             bam="$p".bam
1157             sam="$p".sam
1158             sortbam="$p".sorted.bam
1159             bwa mem -t $thr -M -a $ref ${p}_1.fq.gz ${p}_2.fq.gz > "$sam"
1160             samtools view -bS "$sam" > "$bam"
1161             samtools sort -T ${p}.tmp -@ $thr "$bam" -o "$sortbam"
1162             samtools index "$sortbam"
1163             samtools flagstat "$sortbam" > "$sortbam".flagstat
1164             /bin/rm "$bam" "$sam"
1165           }
1166         $ export -f bwa_sam
1167         $ ls -d raw.cluster.clean.mapping/* |
1168             parallel -j 2 --verbose --joblog mylog bwa_sam
1169
1170       Other rush features
1171
1172       rush has:
1173
1174       ·   awk -v like custom defined variables (-v)
1175
1176           With GNU parallel you would simply set a shell variable:
1177
1178              parallel 'v={}; echo "$v"' ::: foo
1179              echo foo | rush -v v={} 'echo {v}'
1180
1181           rush also does not handle special characters, so these do not work:
1182
1183              echo does not work | rush -v v=\" 'echo {v}'
1184              echo "My  brother's  12\"  records" | rush -v v={} 'echo {v}'
1185
1186           Whereas the corresponding GNU parallel version works:
1187
1188              parallel 'v=\"; echo "$v"' ::: works
1189              parallel 'v={}; echo "$v"' ::: "My  brother's  12\"  records"
1190
1191       ·   Exit on first error(s) (-e)
1192
1193           This is called --halt now,fail=1 (or shorter: --halt 2) when used
1194           with GNU parallel.
1195
1196       ·   Settable number of records sent to every command (-n, default 1)
1197
1198           This is also called -n in GNU parallel.
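
           A minimal sketch of how -n groups arguments in GNU parallel (-k is
           added only to keep the output in input order):

```shell
# With -n 2 every job receives two arguments;
# -k keeps the output in input order.
seq 4 | parallel -k -n 2 echo
# prints:
# 1 2
# 3 4
```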
1199
1200       ·   Practical replacement strings
1201
1202           {:} remove any extension
1203               With GNU parallel this can be emulated by:
1204
1205                 parallel --plus echo '{/\..*/}' ::: foo.ext.bar.gz
1206
1207           {^suffix}, remove suffix
1208               With GNU parallel this can be emulated by:
1209
1210                 parallel --plus echo '{%.bar.gz}' ::: foo.ext.bar.gz
1211
1212           {@regexp}, capture submatch using regular expression
1213               With GNU parallel this can be emulated by:
1214
1215                 parallel --rpl '{@(.*?)} /$$1/ and $_=$1;' \
1216                   echo '{@\d_(.*).gz}' ::: 1_foo.gz
1217
1218           {%.}, {%:}, basename without extension
1219               With GNU parallel this can be emulated by:
1220
1221                 parallel echo '{= s:.*/::;s/\..*// =}' ::: dir/foo.bar.gz
1222
1223               And if you need it often, you define a --rpl in
1224               $HOME/.parallel/config:
1225
1226                 --rpl '{%.} s:.*/::;s/\..*//'
1227                 --rpl '{%:} s:.*/::;s/\..*//'
1228
1229               Then you can use them as:
1230
1231                 parallel echo {%.} {%:} ::: dir/foo.bar.gz
1232
1233       ·   Preset variable (macro)
1234
1235           E.g.
1236
1237             echo foosuffix | rush -v p={^suffix} 'echo {p}_new_suffix'
1238
1239           With GNU parallel this can be emulated by:
1240
1241             echo foosuffix |
1242               parallel --plus 'p={%suffix}; echo ${p}_new_suffix'
1243
1244           Unlike rush, GNU parallel works fine if the input contains double
1245           spaces, ' and ":
1246
1247             echo "1'6\"  foosuffix" |
1248               parallel --plus 'p={%suffix}; echo "${p}"_new_suffix'
1249
1250       ·   Multi-line commands
1251
1252           While you can use multi-line commands in GNU parallel, GNU
1253           parallel discourages them to improve readability. In most cases
1254           they can be written as a function:
1255
1256             seq 1 3 |
1257               parallel --timeout 2 --joblog my.log 'sleep {}; echo {}; \
1258                 echo finish {}'
1259
1260           Could be written as:
1261
1262             doit() {
1263               sleep "$1"
1264               echo "$1"
1265               echo finish "$1"
1266             }
1267             export -f doit
1268             seq 1 3 | parallel --timeout 2 --joblog my.log doit
1269
1270           The failed commands can be resumed with:
1271
1272             seq 1 3 |
1273               parallel --resume-failed --joblog my.log 'sleep {}; echo {};\
1274                 echo finish {}'
1275
1276       https://github.com/shenwei356/rush
1277
1278   DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel
1279       ClusterSSH solves a different problem than GNU parallel.
1280
1281       ClusterSSH opens a terminal window for each computer, and using a
1282       master window you can run the same command on all the computers. This
1283       is typically used for administering several computers that are almost
1284       identical.
1285
1286       GNU parallel runs the same (or different) commands with different
1287       arguments in parallel, possibly using remote computers to help with
1288       the computation. If more than one computer is listed in -S, GNU
1289       parallel may use only one of them (e.g. if there are 8 jobs to be run
1290       and one computer has 8 cores).
1291
1292       GNU parallel can be used as a poor-man's version of ClusterSSH:
1293
1294         parallel --nonall -S server-a,server-b do_stuff foo bar
1295
1296       https://github.com/duncs/clusterssh
1297
1298   DIFFERENCES BETWEEN coshell AND GNU Parallel
1299       coshell only accepts full commands on standard input. Any quoting needs
1300       to be done by the user.
1301
1302       Commands are run in sh so any bash/tcsh/zsh specific syntax will not
1303       work.
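
       In contrast, GNU parallel runs the command in the shell it was started
       from, so shell-specific syntax keeps working. A small sketch (assuming
       bash is installed):

```shell
# Started from bash, the job is run by bash, so the
# bash-only [[ ]] test works:
bash -c "parallel 'if [[ {} == b* ]]; then echo match; fi' ::: bar"
# prints: match
```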
1304
1305       Output can be buffered by using -d. Output is buffered in memory, so
1306       big output can cause swapping and therefore be terribly slow, or even
1307       cause the system to run out of memory.
1308
1309       https://github.com/gdm85/coshell (Last checked: 2019-01)
1310
1311   DIFFERENCES BETWEEN spread AND GNU Parallel
1312       spread runs commands on all directories.
1313
1314       It can be emulated with GNU parallel using this Bash function:
1315
1316         spread() {
1317           _cmds() {
1318             perl -e '$"=" && ";print "@ARGV"' "cd {}" "$@"
1319           }
1320           parallel $(_cmds "$@")'|| echo exit status $?' ::: */
1321         }
1322
1323       This works except for the --exclude option.
1324
1325       (Last checked: 2017-11)
1326
1327   DIFFERENCES BETWEEN pyargs AND GNU Parallel
1328       pyargs deals badly with input containing spaces. It buffers stdout, but
1329       not stderr. It buffers in RAM. {} does not work as a replacement
1330       string. It does not support running functions.
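
       GNU parallel, by contrast, can run exported shell functions. A minimal
       sketch run from bash (myfn is a hypothetical function name):

```shell
#!/bin/bash
# Export a function and let GNU parallel call it:
myfn() { echo "got $1"; }
export -f myfn
parallel -k myfn ::: a b
# prints:
# got a
# got b
```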
1331
1332       pyargs does not support composed commands if run with --lines, and
1333       fails on pyargs traceroute gnu.org fsf.org.
1334
1335       Examples
1336
1337         seq 5 | pyargs -P50 -L seq
1338         seq 5 | parallel -P50 --lb seq
1339
1340         seq 5 | pyargs -P50 --mark -L seq
1341         seq 5 | parallel -P50 --lb \
1342           --tagstring OUTPUT'[{= $_=$job->replaced()=}]' seq
1343         # Similar, but not precisely the same
1344         seq 5 | parallel -P50 --lb --tag seq
1345
1346         seq 5 | pyargs -P50  --mark command
1347         # Somewhat longer with GNU Parallel due to the special
1348         #   --mark formatting
1349         cmd="$(echo "command" | parallel --shellquote)"
1350         wrap_cmd() {
1351            echo "MARK $cmd $@================================" >&3
1352            echo "OUTPUT START[$cmd $@]:"
1353            eval $cmd "$@"
1354            echo "OUTPUT END[$cmd $@]"
1355         }
1356         (seq 5 | env_parallel -P2 wrap_cmd) 3>&1
1357         # Similar, but not exactly the same
1358         seq 5 | parallel -t --tag command
1359
1360         (echo '1  2  3';echo 4 5 6) | pyargs  --stream seq
1361         (echo '1  2  3';echo 4 5 6) | perl -pe 's/\n/ /' |
1362           parallel -r -d' ' seq
1363         # Similar, but not exactly the same
1364         parallel seq ::: 1 2 3 4 5 6
1365
1366       https://github.com/robertblackwell/pyargs (Last checked: 2019-01)
1367
1368   DIFFERENCES BETWEEN concurrently AND GNU Parallel
1369       concurrently runs jobs in parallel.
1370
1371       The output is prepended with the job number, and may be incomplete:
1372
1373         $ concurrently 'seq 100000' | (sleep 3;wc -l)
1374         7165
1375
1376       When pretty printing it caches output in memory. The test MIX below
1377       shows that output mixes whether or not output is cached.
1378
1379       There seems to be no way of making a template command and having
1380       concurrently fill it with different args. The full commands must be
1381       given on the command line.
1382
1383       There is also no way of controlling how many jobs should be run in
1384       parallel at a time - i.e. "number of jobslots". Instead all jobs are
1385       simply started in parallel.
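
       With GNU parallel the number of jobslots is set with -j. A small
       sketch:

```shell
# At most 2 jobs run at a time; the rest wait for a free jobslot.
# -k keeps the output in input order.
seq 4 | parallel -j2 -k 'sleep 0.1; echo job {}'
# prints:
# job 1
# job 2
# job 3
# job 4
```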
1386
1387       https://github.com/kimmobrunfeldt/concurrently (Last checked: 2019-01)
1388
1389   DIFFERENCES BETWEEN map(soveran) AND GNU Parallel
1390       map does not run jobs in parallel by default. The README suggests
1391       using:
1392
1393         ... | map t 'sleep $t && say done &'
1394
1395       But this fails if more jobs are run in parallel than the number of
1396       available processes. Since there is no support for parallelization in
1397       map itself, the output also mixes:
1398
1399         seq 10 | map i 'echo start-$i && sleep 0.$i && echo end-$i &'
1400
1401       The major difference is that GNU parallel is built for parallelization
1402       and map is not. So GNU parallel has lots of ways of dealing with the
1403       issues that parallelization raises:
1404
1405       ·   Keep the number of processes manageable
1406
1407       ·   Make sure output does not mix
1408
1409       ·   Make Ctrl-C kill all running processes
1410
1411       Here are the 5 examples converted to GNU Parallel:
1412
1413         1$ ls *.c | map f 'foo $f'
1414         1$ ls *.c | parallel foo
1415
1416         2$ ls *.c | map f 'foo $f; bar $f'
1417         2$ ls *.c | parallel 'foo {}; bar {}'
1418
1419         3$ cat urls | map u 'curl -O $u'
1420         3$ cat urls | parallel curl -O
1421
1422         4$ printf "1\n1\n1\n" | map t 'sleep $t && say done'
1423         4$ printf "1\n1\n1\n" | parallel 'sleep {} && say done'
1424         4$ parallel 'sleep {} && say done' ::: 1 1 1
1425
1426         5$ printf "1\n1\n1\n" | map t 'sleep $t && say done &'
1427         5$ printf "1\n1\n1\n" | parallel -j0 'sleep {} && say done'
1428         5$ parallel -j0 'sleep {} && say done' ::: 1 1 1
1429
1430       https://github.com/soveran/map (Last checked: 2019-01)
1431
1432   DIFFERENCES BETWEEN loop AND GNU Parallel
1433       loop mixes stdout and stderr:
1434
1435           loop 'ls /no-such-file' >/dev/null
1436
1437       loop's replacement string $ITEM does not quote strings:
1438
1439           echo 'two  spaces' | loop 'echo $ITEM'
1440
1441       loop cannot run functions:
1442
1443           myfunc() { echo joe; }
1444           export -f myfunc
1445           loop 'myfunc this fails'
1446
1447       Some of the examples from https://github.com/Miserlou/Loop/ can be
1448       emulated with GNU parallel:
1449
1450           # A couple of functions will make the code easier to read
1451           $ loopy() {
1452               yes | parallel -uN0 -j1 "$@"
1453             }
1454           $ export -f loopy
1455           $ time_out() {
1456               parallel -uN0 -q --timeout "$@" ::: 1
1457             }
1458           $ match() {
1459               perl -0777 -ne 'grep /'"$1"'/,$_ and print or exit 1'
1460             }
1461           $ export -f match
1462
1463           $ loop 'ls' --every 10s
1464           $ loopy --delay 10s ls
1465
1466           $ loop 'touch $COUNT.txt' --count-by 5
1467           $ loopy touch '{= $_=seq()*5 =}'.txt
1468
1469           $ loop --until-contains 200 -- \
1470               ./get_response_code.sh --site mysite.biz`
1471           $ loopy --halt now,success=1 \
1472               './get_response_code.sh --site mysite.biz | match 200'
1473
1474           $ loop './poke_server' --for-duration 8h
1475           $ time_out 8h loopy ./poke_server
1476
1477           $ loop './poke_server' --until-success
1478           $ loopy --halt now,success=1 ./poke_server
1479
1480           $ cat files_to_create.txt | loop 'touch $ITEM'
1481           $ cat files_to_create.txt | parallel touch {}
1482
1483           $ loop 'ls' --for-duration 10min --summary
1484           # --joblog is somewhat more verbose than --summary
1485           $ time_out 10m loopy --joblog my.log ./poke_server; cat my.log
1486
1487           $ loop 'echo hello'
1488           $ loopy echo hello
1489
1490           $ loop 'echo $COUNT'
1491           # GNU Parallel counts from 1
1492           $ loopy echo {#}
1493           # Counting from 0 can be forced
1494           $ loopy echo '{= $_=seq()-1 =}'
1495
1496           $ loop 'echo $COUNT' --count-by 2
1497           $ loopy echo '{= $_=2*(seq()-1) =}'
1498
1499           $ loop 'echo $COUNT' --count-by 2 --offset 10
1500           $ loopy echo '{= $_=10+2*(seq()-1) =}'
1501
1502           $ loop 'echo $COUNT' --count-by 1.1
1503           # GNU Parallel rounds 3.3000000000000003 to 3.3
1504           $ loopy echo '{= $_=1.1*(seq()-1) =}'
1505
1506           $ loop 'echo $COUNT $ACTUALCOUNT' --count-by 2
1507           $ loopy echo '{= $_=2*(seq()-1) =} {#}'
1508
1509           $ loop 'echo $COUNT' --num 3 --summary
1510           # --joblog is somewhat more verbose than --summary
1511           $ seq 3 | parallel --joblog my.log echo; cat my.log
1512
1513           $ loop 'ls -foobarbatz' --num 3 --summary
1514           # --joblog is somewhat more verbose than --summary
1515           $ seq 3 | parallel --joblog my.log -N0 ls -foobarbatz; cat my.log
1516
1517           $ loop 'echo $COUNT' --count-by 2 --num 50 --only-last
1518           # Can be emulated by running 2 jobs
1519           $ seq 49 | parallel echo '{= $_=2*(seq()-1) =}' >/dev/null
1520           $ echo 50| parallel echo '{= $_=2*(seq()-1) =}'
1521
1522           $ loop 'date' --every 5s
1523           $ loopy --delay 5s date
1524
1525           $ loop 'date' --for-duration 8s --every 2s
1526           $ time_out 8s loopy --delay 2s date
1527
1528           $ loop 'date -u' --until-time '2018-05-25 20:50:00' --every 5s
1529           $ seconds=$((`date -d 2018-05-25T20:50:00 +%s` - `date +%s`))s
1530           $ time_out $seconds loopy --delay 5s date -u
1531
1532           $ loop 'echo $RANDOM' --until-contains "666"
1533           $ loopy --halt now,success=1 'echo $RANDOM | match 666'
1534
1535           $ loop 'if (( RANDOM % 2 )); then
1536                     (echo "TRUE"; true);
1537                   else
1538                     (echo "FALSE"; false);
1539                   fi' --until-success
1540           $ loopy --halt now,success=1 'if (( $RANDOM % 2 )); then
1541                                           (echo "TRUE"; true);
1542                                         else
1543                                           (echo "FALSE"; false);
1544                                         fi'
1545
1546           $ loop 'if (( RANDOM % 2 )); then
1547               (echo "TRUE"; true);
1548             else
1549               (echo "FALSE"; false);
1550             fi' --until-error
1551           $ loopy --halt now,fail=1 'if (( $RANDOM % 2 )); then
1552                                        (echo "TRUE"; true);
1553                                      else
1554                                        (echo "FALSE"; false);
1555                                      fi'
1556
1557           $ loop 'date' --until-match "(\d{4})"
1558           $ loopy --halt now,success=1 'date | match [0-9][0-9][0-9][0-9]'
1559
1560           $ loop 'echo $ITEM' --for red,green,blue
1561           $ parallel echo ::: red green blue
1562
1563           $ cat /tmp/my-list-of-files-to-create.txt | loop 'touch $ITEM'
1564           $ cat /tmp/my-list-of-files-to-create.txt | parallel touch
1565
1566           $ ls | loop 'cp $ITEM $ITEM.bak'; ls
1567           $ ls | parallel cp {} {}.bak; ls
1568
1569           $ loop 'echo $ITEM | tr a-z A-Z' -i
1570           $ parallel 'echo {} | tr a-z A-Z'
1571           # Or more efficiently:
1572           $ parallel --pipe tr a-z A-Z
1573
1574           $ loop 'echo $ITEM' --for "`ls`"
1575           $ parallel echo {} ::: "`ls`"
1576
1577           $ ls | loop './my_program $ITEM' --until-success;
1578           $ ls | parallel --halt now,success=1 ./my_program {}
1579
1580           $ ls | loop './my_program $ITEM' --until-fail;
1581           $ ls | parallel --halt now,fail=1 ./my_program {}
1582
1583           $ ./deploy.sh;
1584             loop 'curl -sw "%{http_code}" http://coolwebsite.biz' \
1585               --every 5s --until-contains 200;
1586             ./announce_to_slack.sh
1587           $ ./deploy.sh;
1588             loopy --delay 5s --halt now,success=1 \
1589             'curl -sw "%{http_code}" http://coolwebsite.biz | match 200';
1590             ./announce_to_slack.sh
1591
1592           $ loop "ping -c 1 mysite.com" --until-success; ./do_next_thing
1593           $ loopy --halt now,success=1 ping -c 1 mysite.com; ./do_next_thing
1594
1595           $ ./create_big_file -o my_big_file.bin;
1596             loop 'ls' --until-contains 'my_big_file.bin';
1597             ./upload_big_file my_big_file.bin
1598           # inotifywait is a better tool to detect file system changes.
1599           # It can even make sure the file is complete
1600           # so you are not uploading an incomplete file
1601           $ inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f . |
1602               grep my_big_file.bin
1603
1604           $ ls | loop 'cp $ITEM $ITEM.bak'
1605           $ ls | parallel cp {} {}.bak
1606
1607           $ loop './do_thing.sh' --every 15s --until-success --num 5
1608           $ parallel --retries 5 --delay 15s ::: ./do_thing.sh
1609
1610       https://github.com/Miserlou/Loop/ (Last checked: 2018-10)
1611
1612   DIFFERENCES BETWEEN lorikeet AND GNU Parallel
1613       lorikeet can run jobs in parallel. It does this based on a dependency
1614       graph described in a file, so this is similar to make.
1615
1616       https://github.com/cetra3/lorikeet (Last checked: 2018-10)
1617
1618   DIFFERENCES BETWEEN spp AND GNU Parallel
1619       spp can run jobs in parallel. spp does not use a command template to
1620       generate the jobs, but requires jobs to be in a file. Output from the
1621       jobs mix.
1622
1623       https://github.com/john01dav/spp (Last checked: 2019-01)
1624
1625   DIFFERENCES BETWEEN paral AND GNU Parallel
1626       paral prints a lot of status information and stores the output from the
1627       commands run into files. This means it cannot be used in the middle
1628       of a pipe like this:
1629
1630         paral "echo this" "echo does not" "echo work" | wc
1631
1632       Instead it puts the output into files named like out_#_command.out.log.
1633       To get a very similar behaviour with GNU parallel use --results
1634       'out_{#}_{=s/[^\sa-z_0-9]//g;s/\s+/_/g=}.log' --eta
1635
1636       paral only takes arguments on the command line and each argument should
1637       be a full command. Thus it does not use command templates.
1638
1639       This limits how many jobs it can run in total, because they all need to
1640       fit on a single command line.
1641
1642       paral has no support for running jobs remotely.
1643
1644       The examples from README.markdown and the corresponding commands run
1645       with GNU parallel (--results
1646       'out_{#}_{=s/[^\sa-z_0-9]//g;s/\s+/_/g=}.log' --eta is omitted from
1647       the GNU parallel commands):
1648
1649         paral "command 1" "command 2 --flag" "command arg1 arg2"
1650         parallel ::: "command 1" "command 2 --flag" "command arg1 arg2"
1651
1652         paral "sleep 1 && echo c1" "sleep 2 && echo c2" \
1653           "sleep 3 && echo c3" "sleep 4 && echo c4"  "sleep 5 && echo c5"
1654         parallel ::: "sleep 1 && echo c1" "sleep 2 && echo c2" \
1655           "sleep 3 && echo c3" "sleep 4 && echo c4"  "sleep 5 && echo c5"
1656         # Or shorter:
1657         parallel "sleep {} && echo c{}" ::: {1..5}
1658
1659         paral -n=0 "sleep 5 && echo c5" "sleep 4 && echo c4" \
1660           "sleep 3 && echo c3" "sleep 2 && echo c2" "sleep 1 && echo c1"
1661         parallel ::: "sleep 5 && echo c5" "sleep 4 && echo c4" \
1662           "sleep 3 && echo c3" "sleep 2 && echo c2" "sleep 1 && echo c1"
1663         # Or shorter:
1664         parallel -j0 "sleep {} && echo c{}" ::: 5 4 3 2 1
1665
1666         paral -n=1 "sleep 5 && echo c5" "sleep 4 && echo c4" \
1667           "sleep 3 && echo c3" "sleep 2 && echo c2" "sleep 1 && echo c1"
1668         parallel -j1 "sleep {} && echo c{}" ::: 5 4 3 2 1
1669
1670         paral -n=2 "sleep 5 && echo c5" "sleep 4 && echo c4" \
1671           "sleep 3 && echo c3" "sleep 2 && echo c2" "sleep 1 && echo c1"
1672         parallel -j2 "sleep {} && echo c{}" ::: 5 4 3 2 1
1673
1674         paral -n=5 "sleep 5 && echo c5" "sleep 4 && echo c4" \
1675           "sleep 3 && echo c3" "sleep 2 && echo c2" "sleep 1 && echo c1"
1676         parallel -j5 "sleep {} && echo c{}" ::: 5 4 3 2 1
1677
1678         paral -n=1 "echo a && sleep 0.5 && echo b && sleep 0.5 && \
1679           echo c && sleep 0.5 && echo d && sleep 0.5 && \
1680           echo e && sleep 0.5 && echo f && sleep 0.5 && \
1681           echo g && sleep 0.5 && echo h"
1682         parallel ::: "echo a && sleep 0.5 && echo b && sleep 0.5 && \
1683           echo c && sleep 0.5 && echo d && sleep 0.5 && \
1684           echo e && sleep 0.5 && echo f && sleep 0.5 && \
1685           echo g && sleep 0.5 && echo h"
1686
1687       https://github.com/amattn/paral (Last checked: 2019-01)
1688
1689   DIFFERENCES BETWEEN concurr AND GNU Parallel
1690       concurr is built to run jobs in parallel using a client/server model.
1691
1692       The examples from README.md:
1693
1694         concurr 'echo job {#} on slot {%}: {}' : arg1 arg2 arg3 arg4
1695         parallel 'echo job {#} on slot {%}: {}' ::: arg1 arg2 arg3 arg4
1696
1697         concurr 'echo job {#} on slot {%}: {}' :: file1 file2 file3
1698         parallel 'echo job {#} on slot {%}: {}' :::: file1 file2 file3
1699
1700         concurr 'echo {}' < input_file
1701         parallel 'echo {}' < input_file
1702
1703         cat file | concurr 'echo {}'
1704         cat file | parallel 'echo {}'
1705
1706       concurr deals badly with empty input files and with output larger
1707       than 64 KB.
1708
1709       https://github.com/mmstick/concurr (Last checked: 2019-01)
1710
1711   DIFFERENCES BETWEEN lesser-parallel AND GNU Parallel
1712       lesser-parallel is the inspiration for parallel --embed. Both lesser-
1713       parallel and parallel --embed define bash functions that can be
1714       included as part of a bash script to run jobs in parallel.
1715
1716       lesser-parallel implements a few of the replacement strings, but hardly
1717       any options, whereas parallel --embed gives you the full GNU parallel
1718       experience.
1719
1720       https://github.com/kou1okada/lesser-parallel (Last checked: 2019-01)
1721
1722   DIFFERENCES BETWEEN npm-parallel AND GNU Parallel
1723       npm-parallel can run npm tasks in parallel.
1724
1725       There are no examples and very little documentation, so it is hard to
1726       compare to GNU parallel.
1727
1728       https://github.com/spion/npm-parallel (Last checked: 2019-01)
1729
1730   DIFFERENCES BETWEEN machma AND GNU Parallel
1731       machma runs tasks in parallel. It gives time stamped output. It buffers
1732       in RAM. The examples from README.md:
1733
1734         # Put shorthand for timestamp in config for the examples
1735         echo '--rpl '\
1736           \''{time} $_=::strftime("%Y-%m-%d %H:%M:%S",localtime())'\' \
1737           > ~/.parallel/machma
1738         echo '--line-buffer --tagstring "{#} {time} {}"' >> ~/.parallel/machma
1739
1740         find . -iname '*.jpg' |
1741           machma --  mogrify -resize 1200x1200 -filter Lanczos {}
1742         find . -iname '*.jpg' |
1743           parallel --bar -Jmachma mogrify -resize 1200x1200 -filter Lanczos {}
1744
1745         cat /tmp/ips | machma -p 2 -- ping -c 2 -q {}
1746         cat /tmp/ips | parallel -j2 -Jmachma ping -c 2 -q {}
1747
1748         cat /tmp/ips |
1749           machma -- sh -c 'ping -c 2 -q $0 > /dev/null && echo alive' {}
1750         cat /tmp/ips |
1751           parallel -Jmachma 'ping -c 2 -q {} > /dev/null && echo alive'
1752
1753         find . -iname '*.jpg' |
1754           machma --timeout 5s -- mogrify -resize 1200x1200 -filter Lanczos {}
1755         find . -iname '*.jpg' |
1756           parallel --timeout 5s --bar mogrify -resize 1200x1200 \
1757             -filter Lanczos {}
1758
1759         find . -iname '*.jpg' -print0 |
1760           machma --null --  mogrify -resize 1200x1200 -filter Lanczos {}
1761         find . -iname '*.jpg' -print0 |
1762           parallel --null --bar mogrify -resize 1200x1200 -filter Lanczos {}
1763
1764       https://github.com/fd0/machma (Last checked: 2019-06)
1765
1766   DIFFERENCES BETWEEN interlace AND GNU Parallel
1767       Summary table (see legend above): - I2 I3 I4 - - - M1 - M3 - - M6 - O2
1768       O3 - - - - x x E1 E2 - - - - - - - - - - - - - - - -
1769
1770       interlace is built for network analysis to run network tools in
1771       parallel.
1772
1773       interlace does not buffer output, so output from different jobs mixes.
1774
1775       The overhead for each target is O(n*n), so with 1000 targets it
1776       becomes very slow, with an overhead on the order of 500 ms/target.
1777
1778       Using prips most of the examples from
1779       https://github.com/codingo/Interlace can be run with GNU parallel:
1780
1781       Blocker
1782
1783         commands.txt:
1784           mkdir -p _output_/_target_/scans/
1785           _blocker_
1786           nmap _target_ -oA _output_/_target_/scans/_target_-nmap
1787         interlace -tL ./targets.txt -cL commands.txt -o $output
1788
1789         parallel -a targets.txt \
1790           mkdir -p $output/{}/scans/\; nmap {} -oA $output/{}/scans/{}-nmap
1791
1792       Blocks
1793
1794         commands.txt:
1795           _block:nmap_
1796           mkdir -p _target_/output/scans/
1797           nmap _target_ -oN _target_/output/scans/_target_-nmap
1798           _block:nmap_
1799           nikto --host _target_
1800         interlace -tL ./targets.txt -cL commands.txt
1801
1802         _nmap() {
1803           mkdir -p $1/output/scans/
1804           nmap $1 -oN $1/output/scans/$1-nmap
1805         }
1806         export -f _nmap
1807         parallel ::: _nmap "nikto --host" :::: targets.txt
1808
1809       Run Nikto Over Multiple Sites
1810
1811         interlace -tL ./targets.txt -threads 5 \
1812           -c "nikto --host _target_ > ./_target_-nikto.txt" -v
1813
1814         parallel -a targets.txt -P5 nikto --host {} \> ./{}-nikto.txt
1815
1816       Run Nikto Over Multiple Sites and Ports
1817
1818         interlace -tL ./targets.txt -threads 5 -c \
1819           "nikto --host _target_:_port_ > ./_target_-_port_-nikto.txt" \
1820           -p 80,443 -v
1821
1822         parallel -P5 nikto --host {1}:{2} \> ./{1}-{2}-nikto.txt \
1823           :::: targets.txt ::: 80 443
1824
1825       Run a List of Commands against Target Hosts
1826
1827         commands.txt:
1828           nikto --host _target_:_port_ > _output_/_target_-nikto.txt
1829           sslscan _target_:_port_ >  _output_/_target_-sslscan.txt
1830           testssl.sh _target_:_port_ > _output_/_target_-testssl.txt
1831         interlace -t example.com -o ~/Engagements/example/ \
1832           -cL ./commands.txt -p 80,443
1833
1834         parallel --results ~/Engagements/example/{2}:{3}{1} {1} {2}:{3} \
1835           ::: "nikto --host" sslscan testssl.sh ::: example.com ::: 80 443
1836
1837       CIDR notation with an application that doesn't support it
1838
1839         interlace -t 192.168.12.0/24 -c "vhostscan _target_ \
1840           -oN _output_/_target_-vhosts.txt" -o ~/scans/ -threads 50
1841
1842         prips 192.168.12.0/24 |
1843           parallel -P50 vhostscan {} -oN ~/scans/{}-vhosts.txt
1844
1845       Glob notation with an application that doesn't support it
1846
1847         interlace -t 192.168.12.* -c "vhostscan _target_ \
1848           -oN _output_/_target_-vhosts.txt" -o ~/scans/ -threads 50
1849
1850         # Glob is not supported in prips
1851         prips 192.168.12.0/24 |
1852           parallel -P50 vhostscan {} -oN ~/scans/{}-vhosts.txt
1853
1854       Dash (-) notation with an application that doesn't support it
1855
1856         interlace -t 192.168.12.1-15 -c \
1857           "vhostscan _target_ -oN _output_/_target_-vhosts.txt" \
1858           -o ~/scans/ -threads 50
1859
1860         # Dash notation is not supported in prips
1861         prips 192.168.12.1 192.168.12.15 |
1862           parallel -P50 vhostscan {} -oN ~/scans/{}-vhosts.txt
1863
1864       Threading Support for an application that doesn't support it
1865
1866         interlace -tL ./target-list.txt -c \
1867           "vhostscan -t _target_ -oN _output_/_target_-vhosts.txt" \
1868           -o ~/scans/ -threads 50
1869
1870         cat ./target-list.txt |
1871           parallel -P50 vhostscan -t {} -oN ~/scans/{}-vhosts.txt
1872
1873       alternatively
1874
1875         ./vhosts-commands.txt:
1876           vhostscan -t $target -oN _output_/_target_-vhosts.txt
1877         interlace -cL ./vhosts-commands.txt -tL ./target-list.txt \
1878           -threads 50 -o ~/scans
1879
1880         ./vhosts-commands.txt:
1881           vhostscan -t "$1" -oN "$2"
1882         parallel -P50 ./vhosts-commands.txt {} ~/scans/{}-vhosts.txt \
1883           :::: ./target-list.txt
1884
1885       Exclusions
1886
1887         interlace -t 192.168.12.0/24 -e 192.168.12.0/26 -c \
1888           "vhostscan _target_ -oN _output_/_target_-vhosts.txt" \
1889           -o ~/scans/ -threads 50
1890
1891         prips 192.168.12.0/24 | grep -xv -Ff <(prips 192.168.12.0/26) |
1892           parallel -P50 vhostscan {} -oN ~/scans/{}-vhosts.txt
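
       The grep -xv -Ff construct is a general set-difference idiom that
       needs no special tool; a minimal sketch with made-up addresses:

```shell
# Set difference with grep: print lines of 'all' that are not in 'exclude'
# -F fixed strings, -f patterns from file, -x whole-line match, -v invert
cd "$(mktemp -d)"
printf '%s\n' 192.168.12.1 192.168.12.2 192.168.12.3 > all
printf '%s\n' 192.168.12.2 > exclude
grep -xv -Ff exclude all
# prints 192.168.12.1 and 192.168.12.3
```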
1893
1894       Run Nikto Using Multiple Proxies
1895
1896          interlace -tL ./targets.txt -pL ./proxies.txt -threads 5 -c \
1897            "nikto --host _target_:_port_ -useproxy _proxy_ > \
1898             ./_target_-_port_-nikto.txt" -p 80,443 -v
1899
1900          parallel -j5 \
1901            "nikto --host {1}:{2} -useproxy {3} > ./{1}-{2}-nikto.txt" \
1902            :::: ./targets.txt ::: 80 443 :::: ./proxies.txt
1903
1904       https://github.com/codingo/Interlace (Last checked: 2019-09)
1905
1906   DIFFERENCES BETWEEN otonvm Parallel AND GNU Parallel
1907       I have been unable to get the code to run at all. It seems unfinished.
1908
1909       https://github.com/otonvm/Parallel (Last checked: 2019-02)
1910
1911   DIFFERENCES BETWEEN k-bx par AND GNU Parallel
1912       par requires Haskell to work. This limits the number of platforms it
1913       can work on.
1914
1915       par does line buffering in memory. The memory usage is 3x the longest
1916       line (compared to 1x for parallel --lb). Commands must be given as
1917       arguments. There is no template.
1918
1919       These are the examples from https://github.com/k-bx/par with the
1920       corresponding GNU parallel command.
1921
1922         par "echo foo; sleep 1; echo foo; sleep 1; echo foo" \
1923             "echo bar; sleep 1; echo bar; sleep 1; echo bar" && echo "success"
1924         parallel --lb ::: "echo foo; sleep 1; echo foo; sleep 1; echo foo" \
1925             "echo bar; sleep 1; echo bar; sleep 1; echo bar" && echo "success"
1926
1927         par "echo foo; sleep 1; foofoo" \
1928             "echo bar; sleep 1; echo bar; sleep 1; echo bar" && echo "success"
1929         parallel --lb --halt 1 ::: "echo foo; sleep 1; foofoo" \
1930             "echo bar; sleep 1; echo bar; sleep 1; echo bar" && echo "success"
1931
1932         par "PARPREFIX=[fooechoer] echo foo" "PARPREFIX=[bar] echo bar"
1933         parallel --lb --colsep , --tagstring {1} {2} \
1934           ::: "[fooechoer],echo foo" "[bar],echo bar"
1935
1936         par --succeed "foo" "bar" && echo 'wow'
1937         parallel "foo" "bar"; true && echo 'wow'
1938
1939       https://github.com/k-bx/par (Last checked: 2019-02)
1940
1941   DIFFERENCES BETWEEN parallelshell AND GNU Parallel
1942       parallelshell does not allow for composed commands:
1943
1944         # This does not work
1945         parallelshell 'echo foo;echo bar' 'echo baz;echo quuz'
1946
1947       Instead you have to wrap that in a shell:
1948
1949         parallelshell 'sh -c "echo foo;echo bar"' 'sh -c "echo baz;echo quuz"'
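
       The sh -c wrapping is a general workaround for any runner that only
       accepts simple commands. A minimal sketch of the same idea using plain
       xargs (sequential, just to show the wrapping):

```shell
# Each input line is one composed command; sh -c executes it as a unit
printf '%s\n' 'echo foo;echo bar' 'echo baz;echo quuz' |
  xargs -I CMD sh -c CMD
# prints foo, bar, baz, quuz on four lines
```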
1950
1951       It buffers output in RAM. All commands must be given on the command
1952       line and all commands are started in parallel at the same time. This
1953       will cause the system to freeze if there are so many jobs that there is
1954       not enough memory to run them all at the same time.
1955
1956       https://github.com/keithamus/parallelshell (Last checked: 2019-02)
1957
1958       https://github.com/darkguy2008/parallelshell (Last checked: 2019-03)
1959
1960   DIFFERENCES BETWEEN shell-executor AND GNU Parallel
1961       shell-executor does not allow for composed commands:
1962
1963         # This does not work
1964         sx 'echo foo;echo bar' 'echo baz;echo quuz'
1965
1966       Instead you have to wrap that in a shell:
1967
1968         sx 'sh -c "echo foo;echo bar"' 'sh -c "echo baz;echo quuz"'
1969
1970       It buffers output in RAM. All commands must be given on the command
1971       line and all commands are started in parallel at the same time. This
1972       will cause the system to freeze if there are so many jobs that there is
1973       not enough memory to run them all at the same time.
1974
1975       https://github.com/royriojas/shell-executor (Last checked: 2019-02)
1976
1977   DIFFERENCES BETWEEN non-GNU par AND GNU Parallel
1978       par buffers in memory to avoid mixing of jobs. It takes 1s per 1
1979       million output lines.
1980
1981       par needs to have all commands before starting the first job. The jobs
1982       are read from stdin (standard input) so any quoting will have to be
1983       done by the user.
1984
1985       Stdout (standard output) is prepended with o:. Stderr (standard error)
1986       is sent to stdout (standard output) and prepended with e:.
1987
1988       For short jobs with little output par is 20% faster than GNU parallel
1989       and 60% slower than xargs.
1990
1991       http://savannah.nongnu.org/projects/par (Last checked: 2019-02)
1992
1993   DIFFERENCES BETWEEN fd AND GNU Parallel
1994       fd does not support composed commands, so commands must be wrapped in
1995       sh -c.
1996
1997       It buffers output in RAM.
1998
1999       It only takes file names from the filesystem as input (similar to
2000       find).
2001
2002       https://github.com/sharkdp/fd (Last checked: 2019-02)
2003
2004   DIFFERENCES BETWEEN lateral AND GNU Parallel
2005       lateral is very similar to sem: It takes a single command and runs it
2006       in the background. The design means that output from parallel running
2007       jobs may mix. If it dies unexpectedly, it leaves a socket in
2008       ~/.lateral/socket.PID.
2009
2010       lateral deals badly with overly long command lines. This makes the
2011       lateral server crash:
2012
2013         lateral run echo `seq 100000| head -c 1000k`
2014
2015       Any options will be read by lateral so this does not work (lateral
2016       interprets the -l):
2017
2018         lateral run ls -l
2019
2020       Composed commands do not work:
2021
2022         lateral run pwd ';' ls
2023
2024       Functions do not work:
2025
2026         myfunc() { echo a; }
2027         export -f myfunc
2028         lateral run myfunc
2029
2030       Running emacs in the terminal causes the parent shell to die:
2031
2032         echo '#!/bin/bash' > mycmd
2033         echo emacs -nw >> mycmd
2034         chmod +x mycmd
2035         lateral start
2036         lateral run ./mycmd
2037
2038       Here are the examples from https://github.com/akramer/lateral with the
2039       corresponding GNU sem and GNU parallel commands:
2040
2041         1$ lateral start
2042         1$ for i in $(cat /tmp/names); do
2043         1$   lateral run -- some_command $i
2044         1$ done
2045         1$ lateral wait
2046         1$
2047         1$ for i in $(cat /tmp/names); do
2048         1$   sem some_command $i
2049         1$ done
2050         1$ sem --wait
2051         1$
2052         1$ parallel some_command :::: /tmp/names
2053
2054         2$ lateral start
2055         2$ for i in $(seq 1 100); do
2056         2$   lateral run -- my_slow_command < workfile$i > /tmp/logfile$i
2057         2$ done
2058         2$ lateral wait
2059         2$
2060         2$ for i in $(seq 1 100); do
2061         2$   sem my_slow_command < workfile$i > /tmp/logfile$i
2062         2$ done
2063         2$ sem --wait
2064         2$
2065         2$ parallel 'my_slow_command < workfile{} > /tmp/logfile{}' \
2066              ::: {1..100}
2067
2068         3$ lateral start -p 0 # yup, it will just queue tasks
2069         3$ for i in $(seq 1 100); do
2070         3$   lateral run -- command_still_outputs_but_wont_spam inputfile$i
2071         3$ done
2072         3$ # command output spam can commence
2073         3$ lateral config -p 10; lateral wait
2074         3$
2075         3$ for i in $(seq 1 100); do
2076         3$   echo "command inputfile$i" >> joblist
2077         3$ done
2078         3$ parallel -j 10 :::: joblist
2079         3$
2080         3$ echo 1 > /tmp/njobs
2081         3$ parallel -j /tmp/njobs command inputfile{} \
2082              ::: {1..100} &
2083         3$ echo 10 >/tmp/njobs
2084         3$ wait
2085
2086       https://github.com/akramer/lateral (Last checked: 2019-03)
2087
2088   DIFFERENCES BETWEEN with-this AND GNU Parallel
2089       The examples from https://github.com/amritb/with-this.git and the
2090       corresponding GNU parallel command:
2091
2092         with -v "$(cat myurls.txt)" "curl -L this"
2093         parallel curl -L :::: myurls.txt
2094
2095         with -v "$(cat myregions.txt)" \
2096           "aws --region=this ec2 describe-instance-status"
2097         parallel aws --region={} ec2 describe-instance-status \
2098           :::: myregions.txt
2099
2100         with -v "$(ls)" "kubectl --kubeconfig=this get pods"
2101         ls | parallel kubectl --kubeconfig={} get pods
2102
2103         with -v "$(ls | grep config)" "kubectl --kubeconfig=this get pods"
2104         ls | grep config | parallel kubectl --kubeconfig={} get pods
2105
2106         with -v "$(echo {1..10})" "echo 123"
2107         parallel -N0 echo 123 ::: {1..10}
2108
2109       Stderr is merged with stdout. with-this buffers in RAM. It uses 3x the
2110       output size, so you cannot have output larger than 1/3rd the amount of
2111       RAM. The input values cannot contain spaces. Composed commands do not
2112       work.
2113
2114       with-this gives some additional information, so the output has to be
2115       cleaned before piping it to the next command.
2116
2117       https://github.com/amritb/with-this.git (Last checked: 2019-03)
2118
2119   DIFFERENCES BETWEEN Tollef's parallel (moreutils) AND GNU Parallel
2120       Summary table (see legend above): - - - I4 - - I7 - - M3 - - M6 - O2 O3
2121       - O5 O6 - x x E1 - - - - - E7 - x x x x x x x x - -
2122
2123       EXAMPLES FROM Tollef's parallel MANUAL
2124
2125       Tollef parallel sh -c "echo hi; sleep 2; echo bye" -- 1 2 3
2126
2127       GNU parallel "echo hi; sleep 2; echo bye" ::: 1 2 3
2128
2129       Tollef parallel -j 3 ufraw -o processed -- *.NEF
2130
2131       GNU parallel -j 3 ufraw -o processed ::: *.NEF
2132
2133       Tollef parallel -j 3 -- ls df "echo hi"
2134
2135       GNU parallel -j 3 ::: ls df "echo hi"
2136
2137       (Last checked: 2019-08)
2138
2139   Todo
2140       Url for spread
2141
2142       https://github.com/reggi/pkgrun
2143
2144       https://github.com/benoror/better-npm-run - not obvious how to use
2145
2146       https://github.com/bahmutov/with-package
2147
2148       https://github.com/xuchenCN/go-pssh
2149
2150       https://github.com/flesler/parallel
2151
2152       https://github.com/Julian/Verge
2153

TESTING OTHER TOOLS

2155       There are certain issues that are very common in parallelizing tools.
2156       Here are a few stress tests. Be warned: if the tool is badly coded it
2157       may overload your machine.
2158
2159   MIX: Output mixes
2160       Output from 2 jobs should not mix. If the output is not used, this does
2161       not matter; but if the output is used then it is important that you do
2162       not get half a line from one job followed by half a line from another
2163       job.
2164
2165       If the tool does not buffer, output will most likely mix now and then.
2166
2167       This test stresses whether output mixes.
2168
2169         #!/bin/bash
2170
2171         paralleltool="parallel -j0"
2172
2173         cat <<-EOF > mycommand
2174         #!/bin/bash
2175
2176         # If a, b, c, d, e, and f mix: Very bad
2177         perl -e 'print STDOUT "a"x3000_000," "'
2178         perl -e 'print STDERR "b"x3000_000," "'
2179         perl -e 'print STDOUT "c"x3000_000," "'
2180         perl -e 'print STDERR "d"x3000_000," "'
2181         perl -e 'print STDOUT "e"x3000_000," "'
2182         perl -e 'print STDERR "f"x3000_000," "'
2183         echo
2184         echo >&2
2185         EOF
2186         chmod +x mycommand
2187
2188         # Run 30 jobs in parallel
2189         seq 30 |
2190           $paralleltool ./mycommand > >(tr -s abcdef) 2> >(tr -s abcdef >&2)
2191
2192         # 'a c e' and 'b d f' should always stay together
2193         # and there should only be a single line per job
2194
2195   STDERRMERGE: Stderr is merged with stdout
2196       Output from stdout and stderr should not be merged, but kept separate.
2197
2198       This test shows whether stdout is mixed with stderr.
2199
2200         #!/bin/bash
2201
2202         paralleltool="parallel -j0"
2203
2204         cat <<-EOF > mycommand
2205         #!/bin/bash
2206
2207         echo stdout
2208         echo stderr >&2
2209         echo stdout
2210         echo stderr >&2
2211         EOF
2212         chmod +x mycommand
2213
2214         # Run one job
2215         echo |
2216           $paralleltool ./mycommand > stdout 2> stderr
2217         cat stdout
2218         cat stderr
2219
2220   RAM: Output limited by RAM
2221       Some tools cache output in RAM. This makes them extremely slow if the
2222       output is bigger than physical memory and crash if the output is bigger
2223       than the virtual memory.
2224
2225         #!/bin/bash
2226
2227         paralleltool="parallel -j0"
2228
2229         cat <<'EOF' > mycommand
2230         #!/bin/bash
2231
2232         # Generate 1 GB output
2233         yes "`perl -e 'print \"c\"x30_000'`" | head -c 1G
2234         EOF
2235         chmod +x mycommand
2236
2237         # Run 20 jobs in parallel
2238         # Adjust 20 to be > physical RAM and < free space on /tmp
2239         seq 20 | time $paralleltool ./mycommand | wc -c
2240
2241   DISKFULL: Incomplete data if /tmp runs full
2242       If caching is done on disk, the disk can run full during the run. Not
2243       all programs discover this. GNU parallel discovers it if the disk
2244       stays full for at least 2 seconds.
2245
2246         #!/bin/bash
2247
2248         paralleltool="parallel -j0"
2249
2250         # This should be a dir with less than 100 GB free space
2251         smalldisk=/tmp/shm/parallel
2252
2253         TMPDIR="$smalldisk"
2254         export TMPDIR
2255
2256         max_output() {
2257             # Force worst case scenario:
2258             # Make GNU Parallel only check once per second
2259             sleep 10
2260             # Generate 100 GB to fill $TMPDIR
2261             # Adjust if /tmp is bigger than 100 GB
2262             yes | head -c 100G >$TMPDIR/$$
2263             # Generate 10 MB output that will not be buffered due to full disk
2264             perl -e 'print "X"x10_000_000' | head -c 10M
2265             echo This part is missing from incomplete output
2266             sleep 2
2267             rm $TMPDIR/$$
2268             echo Final output
2269         }
2270
2271         export -f max_output
2272         seq 10 | $paralleltool max_output | tr -s X
2273
2274   CLEANUP: Leaving tmp files at unexpected death
2275       Some tools do not clean up their tmp files if they are killed. This
2276       is especially a problem for tools that buffer on disk.
2277
2278         #!/bin/bash
2279
2280         paralleltool=parallel
2281
2282         ls /tmp >/tmp/before
2283         seq 10 | $paralleltool sleep &
2284         pid=$!
2285         # Give the tool time to start up
2286         sleep 1
2287         # Kill it without giving it a chance to cleanup
2288         kill -9 $pid
2289         # Should be empty: No files should be left behind
2290         diff <(ls /tmp) /tmp/before
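
       For comparison, the usual way for a tool to pass this test is an EXIT
       trap that removes its tmp files; a minimal sketch (note that kill -9
       cannot be trapped, which is exactly what the test above exploits, but
       a trap covers normal exits and catchable signals):

```shell
#!/bin/bash
# Create a tmp file and remove it on exit or on a catchable signal
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
echo "buffered output" > "$tmp"
cat "$tmp"
# the EXIT trap removes $tmp when the script ends
```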
2291
2292   SPCCHAR: Dealing badly with special file names
2293       It is not uncommon for users to create files like:
2294
2295         My brother's 12" *** record  (costs $$$).jpg
2296
2297       Some tools break on this.
2298
2299         #!/bin/bash
2300
2301         paralleltool=parallel
2302
2303         touch "My brother's 12\" *** record  (costs \$\$\$).jpg"
2304         ls My*jpg | $paralleltool ls -l
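
       A NUL-delimited pipeline is the standard way to survive such names; a
       short sketch using find -print0 and xargs -0:

```shell
# NUL cannot occur in a file name, so -print0/-0 is unambiguous
dir=$(mktemp -d)
touch "$dir/My brother's 12\" *** record  (costs \$\$\$).jpg"
find "$dir" -name '*.jpg' -print0 | xargs -0 ls -l
```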
2305
2306   COMPOSED: Composed commands do not work
2307       Some tools require you to wrap composed commands into bash -c.
2308
2309         echo bar | $paralleltool echo foo';' echo {}
2310
2311   ONEREP: Only one replacement string allowed
2312       Some tools can only insert the argument once.
2313
2314         echo bar | $paralleltool echo {} foo {}
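
       For comparison, xargs -I (like GNU parallel) substitutes every
       occurrence of the replacement string; a quick check:

```shell
# -I{} substitutes every occurrence of {} in the command line
echo bar | xargs -I{} echo {} foo {}
# prints: bar foo bar
```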
2315
2316   INPUTSIZE: Length of input should not be limited
2317       Some tools limit the length of the input lines artificially with no
2318       good reason. GNU parallel does not:
2319
2320         perl -e 'print "foo."."x"x100_000_000' | parallel echo {.}
2321
2322       GNU parallel limits the command to run to 128 KB due to execve(2):
2323
2324         perl -e 'print "x"x131_000' | parallel echo {} | wc
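
       The 128 KB figure stems from the kernel's per-argument cap in
       execve(2); the limits on a given system can be inspected with getconf.
       A small sketch (exact numbers vary by OS and configuration):

```shell
# Inspect this system's exec limits (values vary)
argmax=$(getconf ARG_MAX)   # total bytes for argv + environ in one execve
echo "ARG_MAX: $argmax bytes"
# On Linux each single argv string is additionally capped (MAX_ARG_STRLEN,
# commonly 128 KiB with 4 KiB pages), which is the limit behind the 128 KB
# figure above
```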
2325
2326   NUMWORDS: Speed depends on number of words
2327       Some tools become very slow if output lines have many words.
2328
2329         #!/bin/bash
2330
2331         paralleltool=parallel
2332
2333         cat <<-EOF > mycommand
2334         #!/bin/bash
2335
2336         # 10 MB of lines with 1000 words
2337         yes "`seq 1000`" | head -c 10M
2338         EOF
2339         chmod +x mycommand
2340
2341         # Run 30 jobs in parallel
2342         seq 30 | time $paralleltool -j0 ./mycommand > /dev/null
2343

AUTHOR

2345       When using GNU parallel for a publication please cite:
2346
2347       O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
2348       The USENIX Magazine, February 2011:42-47.
2349
2350       This helps fund further development; and it won't cost you a cent.
2351       If you pay 10000 EUR you should feel free to use GNU Parallel without
2352       citing.
2353
2354       Copyright (C) 2007-10-18 Ole Tange, http://ole.tange.dk
2355
2356       Copyright (C) 2008-2010 Ole Tange, http://ole.tange.dk
2357
2358       Copyright (C) 2010-2019 Ole Tange, http://ole.tange.dk and Free
2359       Software Foundation, Inc.
2360
2361       Parts of the manual concerning xargs compatibility are inspired by
2362       the manual of xargs from GNU findutils 4.4.2.
2363

LICENSE

2365       This program is free software; you can redistribute it and/or modify it
2366       under the terms of the GNU General Public License as published by the
2367       Free Software Foundation; either version 3 of the License, or at your
2368       option any later version.
2369
2370       This program is distributed in the hope that it will be useful, but
2371       WITHOUT ANY WARRANTY; without even the implied warranty of
2372       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
2373       General Public License for more details.
2374
2375       You should have received a copy of the GNU General Public License along
2376       with this program.  If not, see <http://www.gnu.org/licenses/>.
2377
2378   Documentation license I
2379       Permission is granted to copy, distribute and/or modify this
2380       documentation under the terms of the GNU Free Documentation License,
2381       Version 1.3 or any later version published by the Free Software
2382       Foundation; with no Invariant Sections, with no Front-Cover Texts, and
2383       with no Back-Cover Texts.  A copy of the license is included in the
2384       file fdl.txt.
2385
2386   Documentation license II
2387       You are free:
2388
2389       to Share to copy, distribute and transmit the work
2390
2391       to Remix to adapt the work
2392
2393       Under the following conditions:
2394
2395       Attribution
2396                You must attribute the work in the manner specified by the
2397                author or licensor (but not in any way that suggests that they
2398                endorse you or your use of the work).
2399
2400       Share Alike
2401                If you alter, transform, or build upon this work, you may
2402                distribute the resulting work only under the same, similar or
2403                a compatible license.
2404
2405       With the understanding that:
2406
2407       Waiver   Any of the above conditions can be waived if you get
2408                permission from the copyright holder.
2409
2410       Public Domain
2411                Where the work or any of its elements is in the public domain
2412                under applicable law, that status is in no way affected by the
2413                license.
2414
2415       Other Rights
2416                In no way are any of the following rights affected by the
2417                license:
2418
2419                · Your fair dealing or fair use rights, or other applicable
2420                  copyright exceptions and limitations;
2421
2422                · The author's moral rights;
2423
2424                · Rights other persons may have either in the work itself or
2425                  in how the work is used, such as publicity or privacy
2426                  rights.
2427
2428       Notice   For any reuse or distribution, you must make clear to others
2429                the license terms of this work.
2430
2431       A copy of the full license is included in the file cc-by-sa.txt.
2432

DEPENDENCIES

2434       GNU parallel uses Perl, and the Perl modules Getopt::Long, IPC::Open3,
2435       Symbol, IO::File, POSIX, and File::Temp. For remote usage it also uses
2436       rsync with ssh.
2437

SEE ALSO

2439       find(1), xargs(1), make(1), pexec(1), ppss(1), xjobs(1), prll(1),
2440       dxargs(1), mdm(1)
2441
2442
2443
2444   20190822                          2019-08-30          PARALLEL_ALTERNATIVES(7)