PARALLEL(1)                        parallel                        PARALLEL(1)
2
3
4

NAME

6       parallel - build and execute shell command lines from standard input in
7       parallel
8

SYNOPSIS

10       parallel [options] [command [arguments]] < list_of_arguments
11
12       parallel [options] [command [arguments]] ( ::: arguments | :::+
13       arguments | :::: argfile(s) | ::::+ argfile(s) ) ...
14
15       parallel --semaphore [options] command
16
17       #!/usr/bin/parallel --shebang [options] [command [arguments]]
18
19       #!/usr/bin/parallel --shebang-wrap [options] [command [arguments]]
20

DESCRIPTION

22       STOP!
23
24       Read the Reader's guide below if you are new to GNU parallel.
25
26       GNU parallel is a shell tool for executing jobs in parallel using one
27       or more computers. A job can be a single command or a small script that
28       has to be run for each of the lines in the input. The typical input is
29       a list of files, a list of hosts, a list of users, a list of URLs, or a
30       list of tables. A job can also be a command that reads from a pipe. GNU
31       parallel can then split the input into blocks and pipe a block into
32       each command in parallel.
33
34       If you use xargs and tee today you will find GNU parallel very easy to
35       use as GNU parallel is written to have the same options as xargs. If
36       you write loops in shell, you will find GNU parallel may be able to
37       replace most of the loops and make them run faster by running several
38       jobs in parallel.
39
40       GNU parallel makes sure output from the commands is the same output as
41       you would get had you run the commands sequentially. This makes it
42       possible to use output from GNU parallel as input for other programs.
43
44       For each line of input GNU parallel will execute command with the line
45       as arguments. If no command is given, the line of input is executed.
46       Several lines will be run in parallel. GNU parallel can often be used
47       as a substitute for xargs or cat | bash.
48
49   Reader's guide
50       GNU parallel includes 4 types of documentation: Tutorial, how-to,
51       reference, and design discussion.
52
53       Tutorial
54
55       If you prefer reading a book buy GNU Parallel 2018 at
56       https://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
57       or download it at: https://doi.org/10.5281/zenodo.1146014 Read at least
58       chapter 1+2. It should take you less than 20 minutes.
59
60       Otherwise start by watching the intro videos for a quick introduction:
61       https://youtube.com/playlist?list=PL284C9FF2488BC6D1
62
63       If you want to dive deeper: spend a couple of hours walking through the
64       tutorial (man parallel_tutorial). Your command line will love you for
65       it.
66
67       How-to
68
69       You can find a lot of examples of use in man parallel_examples. They
70       will give you an idea of what GNU parallel is capable of, and you may
71       find a solution you can simply adapt to your situation.
72
73       If the examples do not cover your exact needs, the options map
74       (https://www.gnu.org/software/parallel/parallel_options_map.pdf) can
75       help you identify options that are related, so you can look these up in
76       the man page.
77
78       Reference
79
80       If you need a one page printable cheat sheet you can find it on:
81       https://www.gnu.org/software/parallel/parallel_cheat.pdf
82
83       The man page is the reference for all options, and reading the man page
84       from cover to cover is probably not what you need.
85
86       Design discussion
87
88       If you want to know the design decisions behind GNU parallel, try: man
89       parallel_design. This is also a good intro if you intend to change GNU
90       parallel.
91

OPTIONS

93       command
94           Command to execute.
95
96           If command or the following arguments contain replacement strings
97           (such as {}) every instance will be substituted with the input.
98
99           If command is given, GNU parallel solves the same tasks as xargs. If
100           command is not given, GNU parallel will behave similarly to cat | sh.
101
102           The command must be an executable, a script, a composed command, an
103           alias, or a function.
104
105           Bash functions: export -f the function first or use env_parallel.
106
107           Bash, Csh, or Tcsh aliases: Use env_parallel.
108
109           Zsh, Fish, Ksh, and Pdksh functions and aliases: Use env_parallel.
110
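           For example, a Bash function can be exported and then used as the
           command (my_func is a hypothetical function):

             my_func() { echo "Processing $1"; }
             export -f my_func
             parallel my_func ::: file1 file2 file3
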
111       {}  Input line.
112
113           This replacement string will be replaced by a full line read from
114           the input source. The input source is normally stdin (standard
115           input), but can also be given with --arg-file, :::, or ::::.
116
117           The replacement string {} can be changed with -I.
118
119           If the command line contains no replacement strings then {} will be
120           appended to the command line.
121
122           Replacement strings are normally quoted, so special characters are
123           not parsed by the shell. The exception is if the command starts
124           with a replacement string; then the string is not quoted.
125
126           See also: --plus {.} {/} {//} {/.} {#} {%} {n} {=perl expression=}
127
128       {.} Input line without extension.
129
130           This replacement string will be replaced by the input with the
131           extension removed. If the input line contains . after the last /,
132           the last . until the end of the string will be removed and {.} will
133           be replaced with the remainder. E.g. foo.jpg becomes foo,
134           subdir/foo.jpg becomes subdir/foo, sub.dir/foo.jpg becomes
135           sub.dir/foo, sub.dir/bar remains sub.dir/bar. If the input line
136           does not contain . it will remain unchanged.
137
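           For example, to convert JPEG files to PNG files with the same base
           name (assuming ImageMagick's convert is installed):

             parallel convert {} {.}.png ::: *.jpg
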
138           The replacement string {.} can be changed with --extensionreplace
139
140           See also: {} --extensionreplace
141
142       {/} Basename of input line.
143
144           This replacement string will be replaced by the input with the
145           directory part removed.
146
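           For example (the path is arbitrary):

             parallel echo {/} ::: sub.dir/file.tar.gz

           prints file.tar.gz.
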
147           See also: {} --basenamereplace
148
149       {//}
150           Dirname of input line.
151
152           This replacement string will be replaced by the dir of the input
153           line. See dirname(1).
154
155           See also: {} --dirnamereplace
156
157       {/.}
158           Basename of input line without extension.
159
160           This replacement string will be replaced by the input with the
161           directory and extension part removed.  {/.} is a combination of {/}
162           and {.}.
163
164           See also: {} --basenameextensionreplace
165
166       {#} Sequence number of the job to run.
167
168           This replacement string will be replaced by the sequence number of
169           the job being run. It contains the same number as $PARALLEL_SEQ.
170
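           For example:

             parallel -k echo job {#}: {} ::: a b c

           prints "job 1: a", "job 2: b", and "job 3: c".
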
171           See also: {} --seqreplace
172
173       {%} Job slot number.
174
175           This replacement string will be replaced by the job's slot number
176           between 1 and the number of jobs run in parallel. No two jobs running
177           at the same time will have the same job slot number.
178
179           If the job needs to be retried (e.g. using --retries or
180           --retry-failed) the job slot is not automatically updated. You
181           should then instead use $PARALLEL_JOBSLOT:
182
183             $ do_test() {
184                 id="$3 {%}=$1 PARALLEL_JOBSLOT=$2"
185                 echo run "$id";
186                 sleep 1
187                 # fail if {%} is odd
188                 return `echo $1%2 | bc`
189               }
190             $ export -f do_test
191             $ parallel -j3 --jl mylog do_test {%} \$PARALLEL_JOBSLOT {} ::: A B C D
192             run A {%}=1 PARALLEL_JOBSLOT=1
193             run B {%}=2 PARALLEL_JOBSLOT=2
194             run C {%}=3 PARALLEL_JOBSLOT=3
195             run D {%}=1 PARALLEL_JOBSLOT=1
196             $ parallel --retry-failed -j3 --jl mylog do_test {%} \$PARALLEL_JOBSLOT {} ::: A B C D
197             run A {%}=1 PARALLEL_JOBSLOT=1
198             run C {%}=3 PARALLEL_JOBSLOT=2
199             run D {%}=1 PARALLEL_JOBSLOT=3
200
201           Notice how {%} and $PARALLEL_JOBSLOT differ in the retry run of C
202           and D.
203
204           See also: {} --jobs --slotreplace
205
206       {n} Argument from input source n or the n'th argument.
207
208           This positional replacement string will be replaced by the input
209           from input source n (when used with --arg-file or ::::) or with the
210           n'th argument (when used with -N). If n is negative it refers to
211           the n'th last argument.
212
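           For example, to swap the values from two input sources:

             parallel echo {2} {1} ::: a b ::: 1 2
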
213           See also: {} {n.} {n/} {n//} {n/.}
214
215       {n.}
216           Argument from input source n or the n'th argument without
217           extension.
218
219           {n.} is a combination of {n} and {.}.
220
221           This positional replacement string will be replaced by the input
222           from input source n (when used with --arg-file or ::::) or with the
223           n'th argument (when used with -N). The input will have the
224           extension removed.
225
226           See also: {n} {.}
227
228       {n/}
229           Basename of argument from input source n or the n'th argument.
230
231           {n/} is a combination of {n} and {/}.
232
233           This positional replacement string will be replaced by the input
234           from input source n (when used with --arg-file or ::::) or with the
235           n'th argument (when used with -N). The input will have the
236           directory (if any) removed.
237
238           See also: {n} {/}
239
240       {n//}
241           Dirname of argument from input source n or the n'th argument.
242
243           {n//} is a combination of {n} and {//}.
244
245           This positional replacement string will be replaced by the dir of
246           the input from input source n (when used with --arg-file or ::::)
247           or with the n'th argument (when used with -N). See dirname(1).
248
249           See also: {n} {//}
250
251       {n/.}
252           Basename of argument from input source n or the n'th argument
253           without extension.
254
255           {n/.} is a combination of {n}, {/}, and {.}.
256
257           This positional replacement string will be replaced by the input
258           from input source n (when used with --arg-file or ::::) or with the
259           n'th argument (when used with -N). The input will have the
260           directory (if any) and extension removed.
261
262           See also: {n} {/.}
263
264       {=perl expression=}
265           Replace with calculated perl expression.
266
267           $_ will contain the same as {}. After evaluating perl expression $_
268           will be used as the value. It is recommended to only change $_ but
269           you have full access to all of GNU parallel's internal functions
270           and data structures.
271
272           The expression must give the same result if evaluated twice -
273           otherwise the behaviour is undefined. E.g. in some versions of GNU
274           parallel this will not work as expected:
275
276               parallel echo '{= $_= ++$wrong_counter =}' ::: a b c
277
278           A few convenience functions and data structures have been made:
279
280            Q(string)
281             Shell quote a string. Example:
282
283               parallel echo {} is quoted as '{= $_=Q($_) =}' ::: \$PWD
284
285            pQ(string)
286             Perl quote a string. Example:
287
288               parallel echo {} is quoted as '{= $_=pQ($_) =}' ::: \$PWD
289
290            uq() (or uq)
291             Do not quote current replacement string. Example:
292
293               parallel echo {} has the value '{= uq =}' ::: \$PWD
294
295            hash(val)
296             Compute B::hash(val). Example:
297
298               parallel echo Hash of {} is '{= $_=hash($_) =}' ::: a b c
299
300            total_jobs()
301             Number of jobs in total. Example:
302
303               parallel echo Number of jobs: '{= $_=total_jobs() =}' ::: a b c
304
305            slot()
306             Slot number of job. Example:
307
308               parallel echo Job slot of {} is '{= $_=slot() =}' ::: a b c
309
310            seq()
311             Sequence number of job. Example:
312
313               parallel echo Seq number of {} is '{= $_=seq() =}' ::: a b c
314
315            @arg
316             The arguments counting from 1 ($arg[1] = {1} = first argument).
317             Example:
318
319               parallel echo {1}+{2}='{=1 $_=$arg[1]+$arg[2] =}' \
320                 ::: 1 2 3 ::: 2 3 4
321
322             ('{=1' forces this to be a positional replacement string, and
323             therefore will not repeat the value for each arg.)
324
325            skip()
326             Skip this job (see also --filter). Example:
327
328               parallel echo '{= $arg[1] >= $arg[2] and skip =}' \
329                 ::: 1 2 3 ::: 2 3 4
330
331            yyyy_mm_dd_hh_mm_ss(sec) (beta testing)
332            yyyy_mm_dd_hh_mm(sec) (beta testing)
333            yyyy_mm_dd(sec) (beta testing)
334            hh_mm_ss(sec) (beta testing)
335            hh_mm(sec) (beta testing)
336            yyyymmddhhmmss(sec) (beta testing)
337            yyyymmddhhmm(sec) (beta testing)
338            yyyymmdd(sec) (beta testing)
339            hhmmss(sec) (beta testing)
340            hhmm(sec) (beta testing)
341             Time functions. sec is number of seconds since epoch. If left out
342             it will use current local time. Example:
343
344               parallel echo 'Now: {= $_=yyyy_mm_dd_hh_mm_ss() =}' ::: Dummy
345               parallel echo 'The end: {= $_=yyyy_mm_dd_hh_mm_ss($_) =}' \
346                 ::: 2147483648
347
348           Example:
349
350             seq 10 | parallel echo {} + 1 is {= '$_++' =}
351             parallel csh -c {= '$_="mkdir ".Q($_)' =} ::: '12" dir'
352             seq 50 | parallel echo job {#} of {= '$_=total_jobs()' =}
353
354           See also: --rpl --parens {} {=n perl expression=} --filter
355
356       {=n perl expression=}
357           Positional equivalent to {=perl expression=}.
358
359           To understand positional replacement strings see {n}.
360
361           See also: {=perl expression=} {n}
362
363       ::: arguments
364           Use arguments on the command line as input source.
365
366           Unlike other options for GNU parallel ::: is placed after the
367           command and before the arguments.
368
369           The following are equivalent:
370
371             (echo file1; echo file2) | parallel gzip
372             parallel gzip ::: file1 file2
373             parallel gzip {} ::: file1 file2
374             parallel --arg-sep ,, gzip {} ,, file1 file2
375             parallel --arg-sep ,, gzip ,, file1 file2
376             parallel ::: "gzip file1" "gzip file2"
377
378           To avoid treating ::: as special use --arg-sep to set the argument
379           separator to something else.
380
381           If multiple ::: are given, each group will be treated as an input
382           source, and all combinations of input sources will be generated.
383           E.g. ::: 1 2 ::: a b c will result in the combinations (1,a) (1,b)
384           (1,c) (2,a) (2,b) (2,c). This is useful for replacing nested for-
385           loops.
386
387           :::, ::::, and --arg-file can be mixed. So these are equivalent:
388
389             parallel echo {1} {2} {3} ::: 6 7 ::: 4 5 ::: 1 2 3
390             parallel echo {1} {2} {3} :::: <(seq 6 7) <(seq 4 5) \
391               :::: <(seq 1 3)
392             parallel -a <(seq 6 7) echo {1} {2} {3} :::: <(seq 4 5) \
393               :::: <(seq 1 3)
394             parallel -a <(seq 6 7) -a <(seq 4 5) echo {1} {2} {3} \
395               ::: 1 2 3
396             seq 6 7 | parallel -a - -a <(seq 4 5) echo {1} {2} {3} \
397               ::: 1 2 3
398             seq 4 5 | parallel echo {1} {2} {3} :::: <(seq 6 7) - \
399               ::: 1 2 3
400
401           See also: --arg-sep --arg-file :::: :::+ ::::+ --link
402
403       :::+ arguments
404           Like ::: but linked like --link to the previous input source.
405
406           Contrary to --link, values do not wrap: The shortest input source
407           determines the length.
408
409           Example:
410
411             parallel echo ::: a b c :::+ 1 2 3 ::: X Y :::+ 11 22
412
413           See also: ::::+ --link
414
415       :::: argfiles
416           Another way to write --arg-file argfile1 --arg-file argfile2 ...
417
418           ::: and :::: can be mixed.
419
420           See also: --arg-file ::: ::::+ --link
421
422       ::::+ argfiles
423           Like :::: but linked like --link to the previous input source.
424
425           Contrary to --link, values do not wrap: The shortest input source
426           determines the length.
427
428           See also: --arg-file :::+ --link
429
430       --null
431       -0  Use NUL as delimiter.
432
433           Normally input lines will end in \n (newline). If they end in \0
434           (NUL), then use this option. It is useful for processing arguments
435           that may contain \n (newline).
436
437           Shorthand for --delimiter '\0'.
438
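           A typical use is together with find -print0, so file names
           containing spaces or newlines are passed unharmed (the pattern is
           only an example):

             find . -name '*.log' -print0 | parallel -0 gzip
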
439           See also: --delimiter
440
441       --arg-file input-file
442       -a input-file
443           Use input-file as input source.
444
445           If you use this option, stdin (standard input) is given to the
446           first process run.  Otherwise, stdin (standard input) is redirected
447           from /dev/null.
448
449           If multiple --arg-file are given, each input-file will be treated
450           as an input source, and all combinations of input sources will be
451           generated. E.g. The file foo contains 1 2, the file bar contains a
452           b c.  -a foo -a bar will result in the combinations (1,a) (1,b)
453           (1,c) (2,a) (2,b) (2,c). This is useful for replacing nested for-
454           loops.
455
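           For example, to run a job for each line of a file (argfile.txt is a
           hypothetical file):

             parallel -a argfile.txt echo processing {}
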
456           See also: --link {n} :::: ::::+ :::
457
458       --arg-file-sep sep-str
459           Use sep-str instead of :::: as separator string between command and
460           argument files.
461
462           Useful if :::: is used for something else by the command.
463
464           See also: ::::
465
466       --arg-sep sep-str
467           Use sep-str instead of ::: as separator string.
468
469           Useful if ::: is used for something else by the command.
470
471           Also useful if your command uses ::: but you still want to read
472           arguments from stdin (standard input): Simply change --arg-sep to a
473           string that is not in the command line.
474
475           See also: :::
476
477       --bar
478           Show progress as a progress bar.
479
480           The bar shows: % of jobs completed, estimated seconds left, and
481           the number of jobs started.
482
483           It is compatible with zenity:
484
485             seq 1000 | parallel -j30 --bar '(echo {};sleep 0.1)' \
486               2> >(perl -pe 'BEGIN{$/="\r";$|=1};s/\r/\n/g' |
487                    zenity --progress --auto-kill) | wc
488
489           See also: --eta --progress --total-jobs
490
491       --basefile file
492       --bf file
493           file will be transferred to each sshlogin before the first job is
494           started.
495
496           It will be removed if --cleanup is active. The file may be a script
497           to run or some common base data needed for the job.  Multiple --bf
498           can be specified to transfer more basefiles. The file will be
499           transferred the same way as --transferfile.
500
501           See also: --sshlogin --transfer --return --cleanup --workdir
502
503       --basenamereplace replace-str
504       --bnr replace-str
505           Use the replacement string replace-str instead of {/} for basename
506           of input line.
507
508           See also: {/}
509
510       --basenameextensionreplace replace-str
511       --bner replace-str
512           Use the replacement string replace-str instead of {/.} for basename
513           of input line without extension.
514
515           See also: {/.}
516
517       --bin binexpr
518           Use binexpr as binning key and bin input to the jobs.
519
520           binexpr is [column number|column name] [perlexpression] e.g.:
521
522             3
523             Address
524             3 $_%=100
525             Address s/\D//g
526
527           Each input line is split using --colsep. The value of the column is
528           put into $_, the perl expression is executed, and the resulting value
529           is the job slot that will be given the line. If the value is bigger
530           than the number of jobslots, the value is taken modulo the number of
531           jobslots.
532
533           This is similar to --shard, but the hashing algorithm is a simple
534           modulo, which makes it predictable which jobslot will receive which
535           value.
536
537           The performance is on the order of 100K rows per second. It is faster
538           if the bincol is small (<10) and slower if it is big (>100).
539
540           --bin requires --pipe and a fixed numeric value for --jobs.
541
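           For example, to send every line with the same numeric ID in column
           1 to the same jobslot (mydata.tsv is a hypothetical tab-separated
           file with a numeric first column):

             cat mydata.tsv | parallel --pipe --colsep '\t' --bin 1 -j4 wc -l
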
542           See also: SPREADING BLOCKS OF DATA --group-by --round-robin --shard
543
544       --bg
545           Run command in background.
546
547           GNU parallel will normally wait for the completion of a job. With
548           --bg GNU parallel will not wait for completion of the command
549           before exiting.
550
551           This is the default if --semaphore is set.
552
553           Implies --semaphore.
554
555           See also: --fg man sem
556
557       --bibtex
558       --citation
559           Print the citation notice and BibTeX entry for GNU parallel,
560           silence citation notice for all future runs, and exit. It will not
561           run any commands.
562
563           If it is impossible for you to run --citation you can instead use
564           --will-cite, which will run commands, but which will only silence
565           the citation notice for this single run.
566
567           If you use --will-cite in scripts to be run by others you are
568           making it harder for others to see the citation notice.  The
569           development of GNU parallel is indirectly financed through
570           citations, so if your users do not know they should cite then you
571           are making it harder to finance development. However, if you pay
572           10000 EUR, you have done your part to finance future development
573           and should feel free to use --will-cite in scripts.
574
575           If you do not want to help financing future development by letting
576           other users see the citation notice or by paying, then please
577           consider using another tool instead of GNU parallel. You can find
578           some of the alternatives in man parallel_alternatives.
579
580       --block size
581       --block-size size
582           Size of block in bytes to read at a time.
583
584           The size can be postfixed with K, M, G, T, P, k, m, g, t, or p.
585
586           GNU parallel tries to meet the block size but can be off by the
587           length of one record. For performance reasons size should be bigger
588           than two records. GNU parallel will warn you and automatically
589           increase the size if you choose a size that is too small.
590
591           If you use -N, --block should be bigger than N+1 records.
592
593           size defaults to 1M.
594
595           When using --pipe-part a negative block size is not interpreted as
596           a blocksize but as the number of blocks each jobslot should have.
597           So this will run 10*5 = 50 jobs in total:
598
599             parallel --pipe-part -a myfile --block -10 -j5 wc
600
601           This is an efficient alternative to --round-robin because data is
602           never read by GNU parallel, but you can still have very few
603           jobslots process large amounts of data.
604
605           See also: UNIT PREFIX -N --pipe --pipe-part --round-robin
606           --block-timeout
607
608       --block-timeout duration
609       --bt duration
610           Timeout for reading block when using --pipe.
611
612           If it takes longer than duration to read a full block, use the
613           partial block read so far.
614
615           duration is in seconds, but can be postfixed with s, m, h, or d.
616
617           See also: TIME POSTFIXES --pipe --block
618
619       --cat
620           Create a temporary file with content.
621
622           Normally --pipe/--pipe-part will give data to the program on stdin
623           (standard input). With --cat GNU parallel will create a temporary
624           file with the name in {}, so you can do: parallel --pipe --cat wc
625           {}.
626
627           Implies --pipe unless --pipe-part is used.
628
629           See also: --pipe --pipe-part --fifo
630
631       --cleanup
632           Remove transferred files.
633
634           --cleanup will remove the transferred files on the remote computer
635           after processing is done.
636
637             find log -name '*gz' | parallel \
638               --sshlogin server.example.com --transferfile {} \
639               --return {.}.bz2 --cleanup "zcat {} | bzip -9 >{.}.bz2"
640
641           With --transferfile {} the file transferred to the remote computer
642           will be removed on the remote computer. Directories on the remote
643           computer containing the file will be removed if they are empty.
644
645           With --return the file transferred from the remote computer will be
646           removed on the remote computer. Directories on the remote computer
647           containing the file will be removed if they are empty.
648
649           --cleanup is ignored when not used with --basefile, --transfer,
650           --transferfile or --return.
651
652           See also: --basefile --transfer --transferfile --sshlogin --return
653
654       --color
655           Colour output.
656
657           Colour the output. Each job gets its own colour combination
658           (background+foreground).
659
660           --color is ignored when using -u.
661
662           See also: --color-failed
663
664       --color-failed
665       --cf
666           Colour the output from failing jobs white on red.
667
668           Useful if you have a lot of jobs and want to focus on the failing
669           jobs.
670
671           --color-failed is ignored when using -u or --line-buffer, and is
672           unreliable when using --latest-line.
673
674           See also: --color
675
676       --colsep regexp
677       -C regexp
678           Column separator.
679
680           The input will be treated as a table with regexp separating the
681           columns. The n'th column can be accessed using {n} or {n.}. E.g.
682           {3} is the 3rd column.
683
684           If there are multiple input sources, each input source will be
685           separated, but the columns from each input source will be linked.
686
687             parallel --colsep '-' echo {4} {3} {2} {1} \
688               ::: A-B C-D ::: e-f g-h
689
690           --colsep implies --trim rl, which can be overridden with --trim n.
691
692           regexp is a Perl Regular Expression:
693           https://perldoc.perl.org/perlre.html
694
695           See also: --csv {n} --trim --link
696
697       --compress
698           Compress temporary files.
699
700           If the output is big and very compressible this will take up less
701           disk space in $TMPDIR and possibly be faster due to less disk I/O.
702
703           GNU parallel will try pzstd, lbzip2, pbzip2, zstd, pigz, lz4, lzop,
704           plzip, lzip, lrz, gzip, pxz, lzma, bzip2, xz, clzip, in that order,
705           and use the first available.
706
707           GNU parallel will use up to 8 processes per job waiting to be
708           printed. See man parallel_design for details.
709
710           See also: --compress-program
711
712       --compress-program prg
713       --decompress-program prg
714           Use prg for (de)compressing temporary files.
715
716           It is assumed that prg -dc will decompress stdin (standard input)
717           to stdout (standard output) unless --decompress-program is given.
718
719           See also: --compress
720
721       --csv
722           Treat input as CSV-format.
723
724           --colsep sets the field delimiter. It works very much like --colsep
725           except it deals correctly with quoting. Compare:
726
727              echo '"1 big, 2 small","2""x4"" plank",12.34' |
728                parallel --csv echo {1} of {2} at {3}
729
730              echo '"1 big, 2 small","2""x4"" plank",12.34' |
731                parallel --colsep ',' echo {1} of {2} at {3}
732
733           Even quoted newlines are parsed correctly:
734
735              (echo '"Start of field 1 with newline'
736               echo 'Line 2 in field 1";value 2') |
737                parallel --csv --colsep ';' echo Field 1: {1} Field 2: {2}
738
739           When used with --pipe only pass full CSV-records.
740
741           See also: --pipe --link {n} --colsep --header
742
743       --ctag (obsolete: use --color --tag)
744           Color tag.
745
746           If the values look very similar, it can be hard to tell from the
747           output when a new value is used. --ctag gives each value a random
748           color.
749
750           See also: --color --tag
751
752       --ctagstring str (obsolete: use --color --tagstring)
753           Color tagstring.
754
755           See also: --color --ctag --tagstring
756
757       --delay duration
758           Delay starting next job by duration.
759
760           GNU parallel will not start another job for the next duration.
761
762           duration is in seconds, but can be postfixed with s, m, h, or d.
763
764           If you append 'auto' to duration (e.g. 13m3sauto) GNU parallel will
765           automatically try to find the optimal value: If a job fails,
766           duration is increased by 30%. If a job succeeds, duration is
767           decreased by 10%.
768
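           For example, to wait at least 2.5 seconds between starting jobs:

             parallel --delay 2.5 echo starting {} ::: 1 2 3
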
769           See also: TIME POSTFIXES --retries --ssh-delay
770
771       --delimiter delim
772       -d delim
773           Input items are terminated by delim.
774
775           The specified delimiter may be characters, C-style character
776           escapes such as \n, or octal or hexadecimal escape codes.  Octal
777           and hexadecimal escape codes are understood as for the printf
778           command.
779
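           For example, to split input on colons (a minimal illustration):

             printf 'a:b:c' | parallel -d : echo item {}
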
780           See also: --colsep
781
782       --dirnamereplace replace-str
783       --dnr replace-str
784           Use the replacement string replace-str instead of {//} for dirname
785           of input line.
786
787           See also: {//}
788
789       --dry-run
790           Print the job to run on stdout (standard output), but do not run
791           the job.
792
793           Use -v -v to include the wrapping that GNU parallel generates (for
794           remote jobs, --tmux, --nice, --pipe, --pipe-part, --fifo and
795           --cat). Do not count on this literally, though, as the job may be
796           scheduled on another computer or the local computer if : is in the
797           list.
798
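           For example:

             parallel --dry-run gzip {} ::: file1 file2

           prints the command lines "gzip file1" and "gzip file2" without
           running them.
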
799           See also: -v
800
801       -E eof-str
802           Set the end of file string to eof-str.
803
804           If the end of file string occurs as a line of input, the rest of
805           the input is not read.  If neither -E nor -e is used, no end of
806           file string is used.
807
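           For example, with the end of file string set to stop (the marker is
           arbitrary):

             (echo a; echo b; echo stop; echo c) | parallel -E stop echo {}

           only a and b are used as arguments.
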
808       --eof[=eof-str]
809       -e[eof-str]
810           This option is a synonym for the -E option.
811
812           Use -E instead, because it is POSIX compliant for xargs while this
813           option is not.  If eof-str is omitted, there is no end of file
814           string.  If neither -E nor -e is used, no end of file string is
815           used.
816
817       --embed
818           Embed GNU parallel in a shell script.
819
820           If you need to distribute your script to someone who does not want
821           to install GNU parallel you can embed GNU parallel in your own
822           shell script:
823
824             parallel --embed > new_script
825
826           After which you add your code at the end of new_script. This is
827           tested on ash, bash, dash, ksh, sh, and zsh.
828
829       --env var
830           Copy exported environment variable var.
831
832           This will copy var to the environment that the command is run in.
833           This is especially useful for remote execution.
834
835           In Bash var can also be a Bash function - just remember to export
836           -f the function.
837
838           The variable '_' is special. It will copy all exported environment
839           variables except for the ones mentioned in
840           ~/.parallel/ignored_vars.
841
842           To copy the full environment (both exported and not exported
843           variables, arrays, and functions) use env_parallel.
844
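           For example, to make an exported variable visible to jobs on a
           remote server (server.example.com is a placeholder):

             export MYVAR=foo
             parallel --env MYVAR -S server.example.com \
               echo '$MYVAR' {} ::: a b
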
845           See also: --record-env --session --sshlogin command env_parallel
846
847       --eta
848           Show the estimated number of seconds before finishing.
849
850           This forces GNU parallel to read all jobs before starting to find
851           the number of jobs (unless you use --total-jobs). GNU parallel
852           normally only reads the next job to run.
853
854           The estimate is based on the runtime of finished jobs, so the first
855           estimate will only be shown when the first job has finished.
856
857           Implies --progress.
858
859           See also: --bar --progress --total-jobs
860
861       --fg
862           Run command in foreground.
863
864           With --tmux and --tmuxpane GNU parallel will start tmux in the
865           foreground.
866
867           With --semaphore GNU parallel will run the command in the
868           foreground (opposite --bg), and wait for completion of the command
869           before exiting. Exit code will be that of the command.
870
871           See also: --bg man sem
872
873       --fifo
874           Create a temporary fifo with content.
875
876           Normally --pipe and --pipe-part will give data to the program on
877           stdin (standard input). With --fifo GNU parallel will create a
878           temporary fifo with the name in {}, so you can do:
879
880             parallel --pipe --fifo wc {}
881
882           Beware: If the fifo is never opened for reading, the job will block
883           forever:
884
885             seq 1000000 | parallel --fifo echo This will block forever
886             seq 1000000 | parallel --fifo 'echo This will not block < {}'
887
888           By using --fifo instead of --cat you may save I/O as --cat will
889           write to a temporary file, whereas --fifo will not.
890
891           Implies --pipe unless --pipe-part is used.
892
893           See also: --cat --pipe --pipe-part
894
895       --filter filter
896           Only run jobs where filter is true.
897
898           filter can contain replacement strings and Perl code. Example:
899
900             parallel --filter '{1}+{2}+{3} < 10' echo {1},{2},{3} \
901               ::: {1..10} ::: {3..8} ::: {3..10}
902
903           Outputs: 1,3,3 1,3,4 1,3,5 1,4,3 1,4,4 1,5,3 2,3,3 2,3,4 2,4,3
904           3,3,3
905
906             parallel --filter '{1} < {2}*{2}' echo {1},{2} \
907               ::: {1..10} ::: {1..3}
908
909           Outputs: 1,2 1,3 2,2 2,3 3,2 3,3 4,3 5,3 6,3 7,3 8,3
910
911             parallel --filter '{choose_k}' --plus echo {1},{2},{3} \
912               ::: {1..5} ::: {1..5} ::: {1..5}
913
914           Outputs: 1,2,3 1,2,4 1,2,5 1,3,4 1,3,5 1,4,5 2,3,4 2,3,5 2,4,5
915           3,4,5
916
917           See also: skip() --no-run-if-empty {choose_k}
918
919       --filter-hosts
920           Remove down hosts.
921
922           For each remote host: check that login through ssh works. If not:
923           do not use this host.
924
925           For performance reasons, this check is performed only at the start
926           and every time --sshloginfile is changed. If a host goes down
927           after the first check, it will go undetected until --sshloginfile
928           is changed; --retries can be used to mitigate this.
929
930           Currently you can not put --filter-hosts in a profile, $PARALLEL,
931           /etc/parallel/config or similar. This is because GNU parallel uses
932           GNU parallel to compute this, so you will get an infinite loop.
933           This will likely be fixed in a later release.
934
935           See also: --sshloginfile --sshlogin --retries
936
937       --gnu
938           Behave like GNU parallel.
939
940           This option historically took precedence over --tollef. The
941           --tollef option is now retired, and therefore may not be used.
942           --gnu is kept for compatibility, but does nothing.
943
944       --group
945           Group output.
946
947           Output from each job is grouped together and is only printed when
948           the command is finished. Stdout (standard output) first followed by
949           stderr (standard error).
950
951           This takes on the order of 0.5 ms of CPU time per job and depends on
952           the speed of your disk for larger output.
953
954           --group is the default.
955
956           See also: --line-buffer --ungroup --tag
957
958       --group-by val
959           Group input by value.
960
961           Combined with --pipe/--pipe-part --group-by groups lines with the
962           same value into a record.
963
964           The value can be computed from the full line or from a single
965           column.
966
967           val can be:
968
969            column number Use the value in the column numbered.
970
971            column name   Treat the first line as a header and use the value
972                          in the column named.
973
974                          (Not supported with --pipe-part).
975
976            perl expression
977                          Run the perl expression and use $_ as the value.
978
979            column number perl expression
980                          Put the value of the column in $_, run the perl
981                          expression, and use $_ as the value.
982
983            column name perl expression
984                          Put the value of the column in $_, run the perl
985                          expression, and use $_ as the value.
986
987                          (Not supported with --pipe-part).
988
989           Example:
990
991             UserID, Consumption
992             123,    1
993             123,    2
994             12-3,   1
995             221,    3
996             221,    1
997             2/21,   5
998
999           If you want to group 123, 12-3, 221, and 2/21 into 4 records and
1000           pass one record at a time to wc:
1001
1002             tail -n +2 table.csv | \
1003               parallel --pipe --colsep , --group-by 1 -kN1 wc
1004
1005           Make GNU parallel treat the first line as a header:
1006
1007             cat table.csv | \
1008               parallel --pipe --colsep , --header : --group-by 1 -kN1 wc
1009
1010           Address column by column name:
1011
1012             cat table.csv | \
1013               parallel --pipe --colsep , --header : --group-by UserID -kN1 wc
1014
1015           If 12-3 and 123 are really the same UserID, remove non-digits in
1016           UserID when grouping:
1017
1018             cat table.csv | parallel --pipe --colsep , --header : \
1019               --group-by 'UserID s/\D//g' -kN1 wc
1020
1021           See also: SPREADING BLOCKS OF DATA --pipe --pipe-part --bin --shard
1022           --round-robin
1023
1024       --help
1025       -h  Print a summary of the options to GNU parallel and exit.
1026
1027       --halt-on-error val
1028       --halt val
1029           When should GNU parallel terminate?
1030
1031           In some situations it makes no sense to run all jobs. GNU parallel
1032           should simply stop as soon as a condition is met.
1033
1034           val defaults to never, which runs all jobs no matter what.
1035
1036           val can also take on the form of when,why.
1037
1038           when can be 'now' which means kill all running jobs and halt
1039           immediately, or it can be 'soon' which means wait for all running
1040           jobs to complete, but start no new jobs.
1041
1042           why can be 'fail=X', 'fail=Y%', 'success=X', 'success=Y%',
1043           'done=X', or 'done=Y%' where X is the number of jobs that have to
1044           fail, succeed, or be done before halting, and Y is the percentage
1045           of jobs that have to fail, succeed, or be done before halting.
1046
1047           Example:
1048
1049            --halt now,fail=1     exit when a job has failed. Kill running
1050                                  jobs.
1051
1052            --halt soon,fail=3    exit when 3 jobs have failed, but wait for
1053                                  running jobs to complete.
1054
1055            --halt soon,fail=3%   exit when 3% of the jobs have failed, but
1056                                  wait for running jobs to complete.
1057
1058            --halt now,success=1  exit when a job has succeeded. Kill running
1059                                  jobs.
1060
1061            --halt soon,success=3 exit when 3 jobs have succeeded, but wait
1062                                  for running jobs to complete.
1063
1064            --halt now,success=3% exit when 3% of the jobs have succeeded.
1065                                  Kill running jobs.
1066
1067            --halt now,done=1     exit when a job has finished. Kill running
1068                                  jobs.
1069
1070            --halt soon,done=3    exit when 3 jobs have finished, but wait for
1071                                  running jobs to complete.
1072
1073            --halt now,done=3%    exit when 3% of the jobs have finished. Kill
1074                                  running jobs.
1075
1076           For backwards compatibility these also work:
1077
1078           0           never
1079
1080           1           soon,fail=1
1081
1082           2           now,fail=1
1083
1084           -1          soon,success=1
1085
1086           -2          now,success=1
1087
1088           1-99%       soon,fail=1-99%
1089
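           A concrete invocation that kills all jobs and exits as soon as one
           job fails (here a job fails when its argument is 3 or more):

             parallel --halt now,fail=1 'test {} -lt 3 && echo {}' ::: 1 2 3 4 5
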
1090       --header regexp
1091           Use regexp as header.
1092
1093           For normal usage the matched header (typically the first line:
1094           --header '.*\n') will be split using --colsep (which will default
1095           to '\t') and column names can be used as replacement variables:
1096           {column name}, {column name/}, {column name//}, {column name/.},
1097           {column name.}, {=column name perl expression =}, ..
1098
1099           For --pipe the matched header will be prepended to each output.
1100
1101           --header : is an alias for --header '.*\n'.
1102
1103           If regexp is a number, it is a fixed number of lines.
1104
1105           --header 0 is special: It will make replacement strings for files
1106           given with --arg-file or ::::. It will make {foo/bar} for the file
1107           foo/bar.
1108
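           For example, to refer to columns by the names given in a header
           line (a minimal illustration):

             (echo name age; echo alice 30; echo bob 25) |
               parallel --colsep ' ' --header : echo {name} is {age}
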
1109           See also: --colsep --pipe --pipe-part --arg-file
1110
1111       --hostgroups
1112       --hgrp
1113           Enable hostgroups on arguments.
1114
1115           If an argument contains '@' the string after '@' will be removed
1116           and treated as a list of hostgroups on which this job is allowed to
1117           run. If there is no --sshlogin with a corresponding group, the job
1118           will run on any hostgroup.
1119
1120           Example:
1121
1122             parallel --hostgroups \
1123               --sshlogin @grp1/myserver1 -S @grp1+grp2/myserver2 \
1124               --sshlogin @grp3/myserver3 \
1125               echo ::: my_grp1_arg@grp1 arg_for_grp2@grp2 third@grp1+grp3
1126
1127           my_grp1_arg may be run on either myserver1 or myserver2, third may
1128           be run on either myserver1 or myserver3, but arg_for_grp2 will only
1129           be run on myserver2.
1130
1131           See also: --sshlogin $PARALLEL_HOSTGROUPS $PARALLEL_ARGHOSTGROUPS
1132
1133       -I replace-str
1134           Use the replacement string replace-str instead of {}.
1135
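           For example, to use ,, instead of {} (useful if the command itself
           contains {}):

             parallel -I ,, echo the argument is ,, ::: a b c
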
1136           See also: {}
1137
1138       --replace [replace-str]
1139       -i [replace-str]
1140           This option is deprecated; use -I instead.
1141
1142           This option is a synonym for -Ireplace-str if replace-str is
1143           specified, and for -I {} otherwise.
1144
1145           See also: {}
1146
1147       --joblog logfile
1148       --jl logfile
1149           Logfile for executed jobs.
1150
1151           Save a list of the executed jobs to logfile in the following TAB
1152           separated format: sequence number, sshlogin, start time as seconds
1153           since epoch, run time in seconds, bytes in files transferred, bytes
1154           in files returned, exit status, signal, and command run.
1155
1156           For --pipe, bytes transferred and bytes returned are the number of
1157           bytes of input and output, respectively.
1158
1159           If logfile is prepended with '+' log lines will be appended to the
1160           logfile.
1161
1162           To convert the times into ISO-8601 strict do:
1163
1164             cat logfile | perl -a -F"\t" -ne \
1165               'chomp($F[2]=`date -d \@$F[2] +%FT%T`); print join("\t",@F)'
1166
1167           If the host is long, you can use column -t to pretty print it:
1168
1169             cat joblog | column -t
1170
1171           See also: --resume --resume-failed
1172
1173       --jobs num
1174       -j num
1175       --max-procs num
1176       -P num
1177           Number of jobslots on each machine.
1178
1179           Run up to num jobs in parallel. Default is 100%.
1180
1181           num    Run up to num jobs in parallel.
1182
1183           0      Run as many as possible (this can take a while to
1184                  determine).
1185
1186                  Due to a bug -j 0 will also evaluate replacement strings
1187                  twice up to the number of jobslots:
1188
1189                    # This will not count from 1 but from number-of-jobslots
1190                    seq 10000 | parallel -j0   echo '{= $_ = $foo++; =}' | head
1191                    # This will count from 1
1192                    seq 10000 | parallel -j100 echo '{= $_ = $foo++; =}' | head
1193
1194           num%   Multiply the number of CPU threads by num percent. E.g. 100%
1195                  means one job per CPU thread on each machine.
1196
1197           +num   Add num to the number of CPU threads.
1198
1199           -num   Subtract num from the number of CPU threads.
1200
1201           expr   Evaluate expr. E.g. '12/2' to get 6, '+25%' gives the same
1202                  as '125%', or complex expressions like '+3*log(55)%' which
1203                  means: multiply 3 by log(55), multiply that by the number of
1204                  CPU threads and divide by 100, add this to the number of CPU
1205                  threads.
1206
1207           procfile
1208                  Read parameter from file.
1209
1210                  Use the content of procfile as parameter for -j. E.g.
1211                  procfile could contain the string 100% or +2 or 10.
1212
1213                  If procfile is changed when a job completes, procfile is
1214                  read again and the new number of jobs is computed. If the
1215                  number is lower than before, running jobs will be allowed to
1216                  finish but new jobs will not be started until the wanted
1217                  number of jobs has been reached.  This makes it possible to
1218                  change the number of simultaneous running jobs while GNU
1219                  parallel is running.
1220
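           For example, to change the job limit while GNU parallel is running
           (jobs.conf is a hypothetical procfile and work_on a hypothetical
           command):

             echo 8 > jobs.conf
             parallel -j jobs.conf work_on {} ::: *.dat &
             echo 2 > jobs.conf   # takes effect as running jobs complete
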
1221           If the evaluated number is less than 1 then 1 will be used.
1222
1223           If --semaphore is set, the default is 1 thus making a mutex.
1224
1225           See also: --use-cores-instead-of-threads
1226           --use-sockets-instead-of-threads
1227
1228       --keep-order
1229       -k  Keep sequence of output same as the order of input.
1230
1231           Normally the output of a job will be printed as soon as the job
1232           completes. Try this to see the difference:
1233
1234             parallel -j4 sleep {}\; echo {} ::: 2 1 4 3
1235             parallel -j4 -k sleep {}\; echo {} ::: 2 1 4 3
1236
1237           If used with --onall or --nonall the output will be grouped by
1238           sshlogin in sorted order.
1239
1240           --keep-order cannot keep the output order when used with --pipe
1241           --round-robin. Here it instead means, that the jobslots will get
1242           the same blocks as input in the same order in every run if the
1243           input is kept the same. Run each of these twice and compare:
1244
1245             seq 10000000 | parallel --pipe --round-robin 'sleep 0.$RANDOM; wc'
1246             seq 10000000 | parallel --pipe -k --round-robin 'sleep 0.$RANDOM; wc'
1247
1248           -k only affects the order in which the output is printed - not the
1249           order in which jobs are run.
1250
1251           See also: --group --line-buffer
1252
1253       -L recsize
1254           When used with --pipe: Read records of recsize.
1255
1256           When used otherwise: Use at most recsize nonblank input lines per
1257           command line.  Trailing blanks cause an input line to be logically
1258           continued on the next input line.
1259
1260           -L 0 means read one line, but insert 0 arguments on the command
1261           line.
1262
1263           recsize can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1264
1265           Implies -X unless -m, --xargs, or --pipe is set.
1266
1267           See also: UNIT PREFIX -N --max-lines --block -X -m --xargs --pipe
1268
1269       --max-lines [recsize]
1270       -l[recsize]
1271           When used with --pipe: Read records of recsize lines.
1272
1273           When used otherwise: Synonym for the -L option.  Unlike -L, the
1274           recsize argument is optional.  If recsize is not specified, it
1275           defaults to one.  The -l option is deprecated since the POSIX
1276           standard specifies -L instead.
1277
1278           -l 0 is an alias for -l 1.
1279
1280           Implies -X unless -m, --xargs, or --pipe is set.
1281
1282           See also: UNIT PREFIX -N --block -X -m --xargs --pipe
1283
1284       --limit "command args"
1285           Dynamic job limit.
1286
1287           Before starting a new job run command with args. The exit value of
1288           command determines what GNU parallel will do:
1289
1290           0   Below limit. Start another job.
1291
1292           1   Over limit. Start no jobs.
1293
1294           2   Way over limit. Kill the youngest job.
1295
1296           You can use any shell command. There are 3 predefined commands:
1297
1298           "io n"    Limit for I/O. The amount of disk I/O will be computed as
1299                     a value 0-100, where 0 is no I/O and 100 is at least one
1300                     disk is 100% saturated.
1301
1302           "load n"  Similar to --load.
1303
1304           "mem n"   Similar to --memfree.
1305
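           For example, to stop starting new jobs while disk I/O is above 50%:

             parallel --limit "io 50" gzip ::: *.log
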
1306           See also: --memfree --load
1307
1308       --latest-line
1309       --ll
1310           Print the latest line. Each job gets a single line that is updated
1311           with the latest output from the job.
1312
1313           Example:
1314
1315             slow_seq() {
1316               seq "$@" |
1317                 perl -ne '$|=1; for(split//){ print; select($a,$a,$a,0.03);}'
1318             }
1319             export -f slow_seq
1320             parallel --shuf -j99 --ll --tag --bar --color slow_seq {} ::: {1..300}
1321
1322           See also: --line-buffer
1323
1324       --line-buffer
1325       --lb
1326           Buffer output on line basis.
1327
1328           --group will keep the output together for a whole job. --ungroup
1329           allows output to mix, with half a line coming from one job and
1330           half a line coming from another job. --line-buffer fits between
1331           these two: GNU parallel will print a full line, but will allow for
1332           mixing lines of different jobs.
1333
1334           --line-buffer takes more CPU power than both --group and --ungroup,
1335           but can be much faster than --group if the CPU is not the limiting
1336           factor.
1337
1338           Normally --line-buffer does not buffer on disk, and can thus
1339           process an infinite amount of data, but it will buffer on disk when
1340           combined with: --keep-order, --results, --compress, and --files.
1341           This will make it as slow as --group and will limit output to the
1342           available disk space.
1343
1344           With --keep-order --line-buffer will output lines from the first
1345           job continuously while it is running, then lines from the second
1346           job while that is running. It will buffer full lines, but jobs will
1347           not mix. Compare:
1348
1349             parallel -j0 'echo [{};sleep {};echo {}]' ::: 1 3 2 4
1350             parallel -j0 --lb 'echo [{};sleep {};echo {}]' ::: 1 3 2 4
1351             parallel -j0 -k --lb 'echo [{};sleep {};echo {}]' ::: 1 3 2 4
1352
1353           See also: --group --ungroup --keep-order --tag
1354
1355       --link
1356       --xapply
1357           Link input sources.
1358
1359           Read multiple input sources like the command xapply. If multiple
1360           input sources are given, one argument will be read from each of the
1361           input sources. The arguments can be accessed in the command as {1}
1362           .. {n}, so {1} will be a line from the first input source, and {6}
1363           will refer to the line with the same line number from the 6th input
1364           source.
1365
1366           Compare these two:
1367
1368             parallel echo {1} {2} ::: 1 2 3 ::: a b c
1369             parallel --link echo {1} {2} ::: 1 2 3 ::: a b c
1370
1371           Arguments will be recycled if one input source has more arguments
1372           than the others:
1373
1374             parallel --link echo {1} {2} {3} \
1375               ::: 1 2 ::: I II III ::: a b c d e f g
1376
1377           See also: --header :::+ ::::+
1378
1379       --load max-load
1380           Only start jobs if load is less than max-load.
1381
1382           Do not start new jobs on a given computer unless the number of
1383           running processes on the computer is less than max-load. max-load
1384           uses the same syntax as --jobs, so 100% for one per CPU is a valid
1385           setting. The only difference is that 0 is interpreted as 0.01.
1386
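           For example, to only start new jobs while the load is below 8, or
           below 100% of the CPU threads:

             parallel --load 8 gzip ::: *.txt
             parallel --load 100% gzip ::: *.txt
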
1387           See also: --limit --jobs
1388
1389       --controlmaster
1390       -M  Use ssh's ControlMaster to make ssh connections faster.
1391
1392           Useful if jobs run remotely and are very fast to run. This is
1393           disabled for sshlogins that specify their own ssh command.
1394
1395           See also: --ssh --sshlogin
1396
1397       -m  Multiple arguments.
1398
1399           Insert as many arguments as the command line length permits. If
1400           multiple jobs are being run in parallel: distribute the arguments
1401           evenly among the jobs. Use -j1 or --xargs to avoid this.
1402
1403           If {} is not used the arguments will be appended to the line.  If
1404           {} is used multiple times each {} will be replaced with all the
1405           arguments.
1406
1407           Support for -m with --sshlogin is limited and may fail.
1408
1409           If in doubt use -X as that will most likely do what is needed.
1410
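           To see the difference, compare these two (a sketch; without
           -m each argument gets its own command line, with -m the 8
           arguments are typically distributed evenly, here as two
           command lines with 4 arguments each):

             seq 8 | parallel -j2 echo
             seq 8 | parallel -j2 -m echo
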
1411           See also: -X --xargs
1412
1413       --memfree size
1414           Minimum memory free when starting another job.
1415
1416           The size can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1417
1418           If the jobs take up very different amounts of RAM, GNU parallel
1419           will only start as many as there is memory for. If less than
1420           size bytes are free, no more jobs will be started. If less than
1421           50% of size bytes are free, the youngest job will be killed (as
1422           per --term-seq), and put back on the queue to be run later.
1423
1424           --retries must be set to determine how many times GNU parallel
1425           should retry a given job.
1426
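           A minimal sketch (./simulate and the *.cfg files are
           illustrative): only start another job when at least 2 GB RAM
           is free, and retry jobs killed due to low memory up to 5
           times:

             parallel --retries 5 --memfree 2G ./simulate {} ::: *.cfg
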
1427           See also: UNIT PREFIX --term-seq --retries --memsuspend
1428
1429       --memsuspend size
1430           Suspend jobs when there is less memory available.
1431
1432           If the available memory falls below 2 * size, GNU parallel will
1433           suspend some of the running jobs. If the available memory falls
1434           below size, only one job will be running.
1435
1436           If a single job takes up at most size RAM, all jobs will complete
1437           without running out of memory. If you have swap available, you can
1438           usually lower size to around half the size of a single job - with
1439           the slight risk of swapping a little.
1440
1441           Jobs will be resumed when more RAM is available - typically when
1442           the oldest job completes.
1443
1444           --memsuspend only works on local jobs because there is no obvious
1445           way to suspend remote jobs.
1446
1447           size can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1448
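           A sketch (./analyze and samples/* are illustrative): if each
           job needs at most 1 GB RAM, this should let all jobs complete
           by suspending some of them when free memory gets low:

             parallel --memsuspend 1G ./analyze {} ::: samples/*
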
1449           See also: UNIT PREFIX --memfree
1450
1451       --minversion version
1452           Print the version of GNU parallel and exit.
1453
1454           If the current version of GNU parallel is less than version the
1455           exit code is 255. Otherwise it is 0.
1456
1457           This is useful for scripts that depend on features only available
1458           from a certain version of GNU parallel:
1459
1460              parallel --minversion 20170422 &&
1461                echo halt done=50% supported from version 20170422 &&
1462                parallel --halt now,done=50% echo ::: {1..100}
1463
1464           See also: --version
1465
1466       --max-args max-args
1467       -n max-args
1468           Use at most max-args arguments per command line.
1469
1470           Fewer than max-args arguments will be used if the size (see the -s
1471           option) is exceeded, unless the -x option is given, in which case
1472           GNU parallel will exit.
1473
1474           -n 0 means read one argument, but insert 0 arguments on the command
1475           line.
1476
1477           max-args can be postfixed with K, M, G, T, P, k, m, g, t, or p (see
1478           UNIT PREFIX).
1479
1480           Implies -X unless -m is set.
1481
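           For example (-k is only used to keep the output in order),
           this runs echo with at most 3 arguments per command line and
           should print '1 2 3', '4 5 6', '7 8 9', and '10':

             seq 10 | parallel -k -n 3 echo
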
1482           See also: -X -m --xargs --max-replace-args
1483
1484       --max-replace-args max-args
1485       -N max-args
1486           Use at most max-args arguments per command line.
1487
1488           Like -n, but also makes replacement strings {1} .. {max-args} that
1489           represent arguments 1 .. max-args. If too few args are given, the
1490           {n} will be empty.
1491
1492           -N 0 means read one argument, but insert 0 arguments on the command
1493           line.
1494
1495           This will set the owner of the homedir to the user:
1496
1497             tr ':' '\n' < /etc/passwd | parallel -N7 chown {1} {6}
1498
1499           Implies -X unless -m or --pipe is set.
1500
1501           max-args can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1502
1503           When used with --pipe -N is the number of records to read. This is
1504           somewhat slower than --block.
1505
1506           See also: UNIT PREFIX --pipe --block -m -X --max-args
1507
1508       --nonall
1509           --onall with no arguments.
1510
1511           Run the command on all computers given with --sshlogin but take no
1512           arguments. GNU parallel will log into --jobs number of computers in
1513           parallel and run the job on each computer. -j adjusts how many
1514           computers to log into in parallel.
1515
1516           This is useful for running the same command (e.g. uptime) on a list
1517           of servers.
1518
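           For example (server names are illustrative), run uptime once
           on each server and prefix each output line with the sshlogin:

             parallel --nonall --tag -S server1,server2 uptime
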
1519           See also: --onall --sshlogin
1520
1521       --onall
1522           Run all the jobs on all computers given with --sshlogin.
1523
1524           GNU parallel will log into --jobs number of computers in parallel
1525           and run one job at a time on the computer. The order of the jobs
1526           will not be changed, but some computers may finish before others.
1527
1528           When using --group the output will be grouped by each server, so
1529           all the output from one server will be grouped together.
1530
1531           --joblog will contain an entry for each job on each server, so
1532           there will be several jobs with sequence number 1.
1533
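           For example (server names are illustrative), this runs all
           three jobs on both servers, i.e. 6 jobs in total:

             parallel --onall -S server1,server2 echo ::: a b c
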
1534           See also: --nonall --sshlogin
1535
1536       --open-tty
1537       -o  Open terminal tty.
1538
1539           Similar to --tty but does not set --jobs or --ungroup.
1540
1541           See also: --tty
1542
1543       --output-as-files
1544       --outputasfiles
1545       --files
1546       --files0
1547           Save output to files.
1548
1549           Instead of printing the output to stdout (standard output) the
1550           output of each job is saved in a file and the filename is then
1551           printed.
1552
1553           --files0 uses NUL (\0) instead of newline (\n) as separator.
1554
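           For example, this prints one temporary file name per job
           (typically under /tmp or $TMPDIR); the content of each file
           is the output of that job:

             parallel --files echo ::: A B C
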
1555           See also: --results
1556
1557       --pipe
1558       --spreadstdin
1559           Spread input to jobs on stdin (standard input).
1560
1561           Read a block of data from stdin (standard input) and give one block
1562           of data as input to one job.
1563
1564           The block size is determined by --block (default: 1M).
1565
1566           Except for the first and last record GNU parallel only passes full
1567           records to the job. The strings --recstart and --recend determine
1568           where a record starts and ends: The border between two records is
1569           defined as --recend immediately followed by --recstart. GNU
1570           parallel splits exactly after --recend and before --recstart. The
1571           block will have the last partial record removed before the block is
1572           passed on to the job. The partial record will be prepended to next
1573           block.
1574
1575           You can limit the number of records to be passed with -N, and set
1576           the record size with -L.
1577
1578           --pipe maxes out at around 1 GB/s input, and 100 MB/s output. If
1579           performance is important use --pipe-part.
1580
1581           --fifo and --cat will give stdin (standard input) on a fifo or a
1582           temporary file.
1583
1584           If data is arriving slowly, you can use --block-timeout to finish
1585           reading a block early.
1586
1587           The data can be spread between the jobs in specific ways using
1588           --round-robin, --bin, --shard, --group-by. See the section:
1589           SPREADING BLOCKS OF DATA
1590
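           A classic sketch (bigfile.txt is illustrative): count lines
           in parallel by giving each job a block and summing the
           per-block counts:

             cat bigfile.txt | parallel --pipe wc -l |
               awk '{ sum += $1 } END { print sum }'
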
1591           See also: --block --block-timeout --recstart --recend --fifo --cat
1592           --pipe-part -N -L --round-robin
1593
1594       --pipe-part
1595           Pipe parts of a physical file.
1596
1597           --pipe-part works similarly to --pipe, but is much faster. 5 GB/s
1598           easily be delivered.
1599
1600           --pipe-part has a few limitations:
1601
1602           •  The file must be a normal file or a block device (technically it
1603              must be seekable) and must be given using --arg-file or ::::.
1604              The file cannot be a pipe, a fifo, or a stream as they are not
1605              seekable.
1606
1607              If using a block device with a lot of NUL bytes, remember to set
1608              --recend ''.
1609
1610           •  Record counting (-N) and line counting (-L/-l) do not work.
1611              Instead use --recstart and --recend to determine where records
1612              end.
1613
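           A sketch (bigfile.txt is illustrative): split the file into
           10 MB chunks at line boundaries and count the lines of each
           chunk:

             parallel --pipe-part -a bigfile.txt --block 10M wc -l
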
1614           See also: --pipe --recstart --recend --arg-file ::::
1615
1616       --plain
1617           Ignore --profile, $PARALLEL, and ~/.parallel/config.
1618
1619           Ignore any --profile, $PARALLEL, and ~/.parallel/config to get full
1620           control on the command line (used by GNU parallel internally when
1621           called with --sshlogin).
1622
1623           See also: --profile
1624
1625       --plus
1626           Add more replacement strings.
1627
1628           Activate additional replacement strings: {+/} {+.} {+..} {+...}
1629           {..} {...} {/..} {/...} {##}. The idea is that '{+foo}' matches
1630           the opposite of '{foo}', so that:
1631
1632           {} = {+/}/{/} = {.}.{+.} = {+/}/{/.}.{+.}  = {..}.{+..} =
1633           {+/}/{/..}.{+..} = {...}.{+...} = {+/}/{/...}.{+...}
1634
1635           {##} is the total number of jobs to be run. It is incompatible with
1636           -X/-m/--xargs.
1637
1638           {0%} zero-padded jobslot.
1639
1640           {0#} zero-padded sequence number.
1641
1642           {choose_k} is inspired by n choose k: Given a list of n elements,
1643           choose k. k is the number of input sources and n is the number of
1644           arguments in an input source.  The content of the input sources
1645           must be the same and the arguments must be unique.
1646
1647           {uniq} skips jobs where values from two input sources are the same.
1648
1649           Shorthands for variables:
1650
1651             {slot}         $PARALLEL_JOBSLOT (see {%})
1652             {sshlogin}     $PARALLEL_SSHLOGIN
1653             {host}         $PARALLEL_SSHHOST
1654             {agrp}         $PARALLEL_ARGHOSTGROUPS
1655             {hgrp}         $PARALLEL_HOSTGROUPS
1656
1657           The following dynamic replacement strings are also activated. They
1658           are inspired by bash's parameter expansion:
1659
1660             {:-str}        str if the value is empty
1661             {:num}         remove the first num characters
1662             {:pos:len}     substring from position pos length len
1663             {#regexp}      remove prefix regexp (non-greedy)
1664             {##regexp}     remove prefix regexp (greedy)
1665             {%regexp}      remove postfix regexp (non-greedy)
1666             {%%regexp}     remove postfix regexp (greedy)
1667             {/regexp/str}  replace one regexp with str
1668             {//regexp/str} replace every regexp with str
1669             {^str}         uppercase str if found at the start
1670             {^^str}        uppercase str
1671             {,str}         lowercase str if found at the start
1672             {,,str}        lowercase str
1673
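           A small sketch of some of the --plus replacement strings:
           given the argument dir/file.tar.gz, {+/} should give 'dir',
           {/.} 'file.tar', and {..} 'dir/file':

             parallel --plus echo {+/} {/.} {..} ::: dir/file.tar.gz
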
1674           See also: --rpl {}
1675
1676       --process-slot-var varname
1677           Set the environment variable varname to the jobslot number minus 1.
1678
1679             seq 10 | parallel --process-slot-var=name echo '$name' {}
1680
1681       --progress
1682           Show progress of computations.
1683
1684           List the computers involved in the task with number of CPUs
1685           detected and the max number of jobs to run. After that show
1686           progress for each computer: number of running jobs, number of
1687           completed jobs, and percentage of all jobs done by this computer.
1688           The percentage will only be available after all jobs have been
1689           scheduled as GNU parallel only reads the next job when ready to
1690           schedule it - this is to avoid wasting time and memory by reading
1691           everything at startup.
1692
1693           By sending GNU parallel SIGUSR2 you can toggle turning on/off
1694           --progress on a running GNU parallel process.
1695
1696           See also: --eta --bar
1697
1698       --max-line-length-allowed
1699           Print maximal command line length.
1700
1701           Print the maximal number of characters allowed on the command line
1702           and exit (used by GNU parallel itself to determine the line length
1703           on remote computers).
1704
1705           See also: --show-limits
1706
1707       --number-of-cpus (obsolete)
1708           Print the number of physical CPU cores and exit.
1709
1710       --number-of-cores
1711           Print the number of physical CPU cores and exit (used by GNU
1712           parallel itself to determine the number of physical CPU cores on
1713           remote computers).
1714
1715           See also: --number-of-sockets --number-of-threads
1716           --use-cores-instead-of-threads --jobs
1717
1718       --number-of-sockets
1719           Print the number of filled CPU sockets and exit (used by GNU
1720           parallel itself to determine the number of filled CPU sockets on
1721           remote computers).
1722
1723           See also: --number-of-cores --number-of-threads
1724           --use-sockets-instead-of-threads --jobs
1725
1726       --number-of-threads
1727           Print the number of hyperthreaded CPU cores and exit (used by GNU
1728           parallel itself to determine the number of hyperthreaded CPU cores
1729           on remote computers).
1730
1731           See also: --number-of-cores --number-of-sockets --jobs
1732
1733       --no-keep-order
1734           Overrides an earlier --keep-order (e.g. if set in
1735           ~/.parallel/config).
1736
1737       --nice niceness
1738           Run the command at this niceness.
1739
1740           By default GNU parallel will run jobs at the same nice level as GNU
1741           parallel is started - both on the local machine and remote servers,
1742           so you are unlikely to ever use this option.
1743
1744           Setting --nice will override this nice level. If the nice level is
1745           smaller than the current nice level, it will only affect remote
1746           jobs (e.g. if current level is 10 then --nice 5 will cause local
1747           jobs to be run at level 10, but remote jobs run at nice level 5).
1748
1749       --interactive
1750       -p  Ask user before running a job.
1751
1752           Prompt the user about whether to run each command line and read a
1753           line from the terminal.  Only run the command line if the response
1754           starts with 'y' or 'Y'.  Implies -t.
1755
1756       --_parset type,varname
1757           Used internally by parset.
1758
1759           Generate shell code to be eval'ed which will set the variable(s)
1760           varname. type can be 'assoc' for associative array or 'var' for
1761           normal variables.
1762
1763           The only supported use is as part of parset.
1764
1765       --parens parensstring
1766           Use parensstring instead of {==}.
1767
1768           Define start and end parenthesis for {=perl expression=}. The left
1769           and the right parenthesis can be multiple characters and are
1770           assumed to be the same length. The default is {==} giving {= as the
1771           start parenthesis and =} as the end parenthesis.
1772
1773           Another useful setting is ,,,, which would make both parentheses
1774           ,,:
1775
1776             parallel --parens ,,,, echo foo is ,,s/I/O/g,, ::: FII
1777
1778           See also: --rpl {=perl expression=}
1779
1780       --profile profilename
1781       -J profilename
1782           Use profile profilename for options.
1783
1784           This is useful if you want to have multiple profiles. You could
1785           have one profile for running jobs in parallel on the local computer
1786           and a different profile for running jobs on remote computers.
1787
1788           profilename corresponds to the file ~/.parallel/profilename.
1789
1790           You can give multiple profiles by repeating --profile. If parts of
1791           the profiles conflict, the later ones will be used.
1792
1793           Default: ~/.parallel/config
1794
1795           See also: PROFILE FILES
1796
1797       --quote
1798       -q  Quote command.
1799
1800           If your command contains special characters that should not be
1801           interpreted by the shell (e.g. ; \ | *), use --quote to escape
1802           these. The command must be a simple command (see man bash) without
1803           redirections and without variable assignments.
1804
1805           Most people will not need this. Quoting is disabled by default.
1806
1807           See also: QUOTING command --shell-quote uq() Q()
1808
1809       --no-run-if-empty
1810       -r  Do not run empty input.
1811
1812           If the stdin (standard input) only contains whitespace, do not run
1813           the command.
1814
1815           If used with --pipe this is slow.
1816
1817           See also: command --pipe --interactive
1818
1819       --noswap
1820           Do not start a job if the computer is swapping.
1821
1822           Do not start new jobs on a given computer if there is both swap-in
1823           and swap-out activity.
1824
1825           The swap activity is only sampled every 10 seconds as the sampling
1826           takes 1 second to do.
1827
1828           Swap activity is computed as (swap-in)*(swap-out) which in practice
1829           is a good measure: swapping out is not a problem, swapping in is
1830           not a problem, but both swapping in and out usually indicate a
1831           problem.
1832
1833           --memfree and --memsuspend may give better results, so try using
1834           those first.
1835
1836           See also: --memfree --memsuspend
1837
1838       --record-env
1839           Record exported environment.
1840
1841           Record current exported environment variables in
1842           ~/.parallel/ignored_vars.  This will ignore variables currently set
1843           when using --env _. So you should set the variables/functions you
1844           want to use after running --record-env.
1845
1846           See also: --env --session env_parallel
1847
1848       --recstart startstring
1849       --recend endstring
1850           Split record between endstring and startstring.
1851
1852           If --recstart is given startstring will be used to split at record
1853           start.
1854
1855           If --recend is given endstring will be used to split at record end.
1856
1857           If both --recstart and --recend are given the combined string
1858           endstringstartstring will have to match to find a split position.
1859           This is useful if either startstring or endstring matches in the
1860           middle of a record.
1861
1862           If neither --recstart nor --recend are given, then --recend
1863           defaults to '\n'. To have no record separator (e.g. for binary
1864           files) use --recend "".
1865
1866           --recstart and --recend are used with --pipe.
1867
1868           Use --regexp to interpret --recstart and --recend as regular
1869           expressions. This is slow, however.
1870
1871           Use --remove-rec-sep to remove --recstart and --recend before
1872           passing the block to the job.
1873
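           A sketch, assuming a FASTA-style file (sequences.fasta is
           illustrative) where every record starts with '>': this keeps
           whole records together and lets each job count the records in
           the block it receives:

             cat sequences.fasta |
               parallel --pipe --recstart '>' --block 10M grep -c '^>'
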
1874           See also: --pipe --regexp --remove-rec-sep
1875
1876       --regexp
1877           Use --regexp to interpret --recstart and --recend as regular
1878           expressions. This is slow, however.
1879
1880           See also: --pipe --regexp --remove-rec-sep --recstart --recend
1881
1882       --remove-rec-sep
1883       --removerecsep
1884       --rrs
1885           Remove record separator.
1886
1887           Remove the text matched by --recstart and --recend before piping it
1888           to the command.
1889
1890           Only used with --pipe/--pipe-part.
1891
1892           See also: --pipe --regexp --pipe-part --recstart --recend
1893
1894       --results name
1895       --res name
1896           Save the output into files.
1897
1898           Simple string output dir
1899
1900           If name does not contain replacement strings and does not end in
1901           .csv/.tsv, the output will be stored in a directory tree rooted at
1902           name.  Within this directory tree, each command will result in
1903           three files: name/<ARGS>/stdout, name/<ARGS>/stderr, and
1904           name/<ARGS>/seq, where <ARGS> is a sequence of directories
1905           representing the header of the input source (if using --header :)
1906           or the number of the input source and corresponding values.
1907
1908           E.g:
1909
1910             parallel --header : --results foo echo {a} {b} \
1911               ::: a I II ::: b III IIII
1912
1913           will generate the files:
1914
1915             foo/a/II/b/III/seq
1916             foo/a/II/b/III/stderr
1917             foo/a/II/b/III/stdout
1918             foo/a/II/b/IIII/seq
1919             foo/a/II/b/IIII/stderr
1920             foo/a/II/b/IIII/stdout
1921             foo/a/I/b/III/seq
1922             foo/a/I/b/III/stderr
1923             foo/a/I/b/III/stdout
1924             foo/a/I/b/IIII/seq
1925             foo/a/I/b/IIII/stderr
1926             foo/a/I/b/IIII/stdout
1927
1928           and
1929
1930             parallel --results foo echo {1} {2} ::: I II ::: III IIII
1931
1932           will generate the files:
1933
1934             foo/1/II/2/III/seq
1935             foo/1/II/2/III/stderr
1936             foo/1/II/2/III/stdout
1937             foo/1/II/2/IIII/seq
1938             foo/1/II/2/IIII/stderr
1939             foo/1/II/2/IIII/stdout
1940             foo/1/I/2/III/seq
1941             foo/1/I/2/III/stderr
1942             foo/1/I/2/III/stdout
1943             foo/1/I/2/IIII/seq
1944             foo/1/I/2/IIII/stderr
1945             foo/1/I/2/IIII/stdout
1946
1947           CSV file output
1948
1949           If name ends in .csv/.tsv the output will be a CSV-file named name.
1950
1951           .csv gives a comma separated value file. .tsv gives a TAB separated
1952           value file.
1953
1954           -.csv/-.tsv are special: they will give the file on stdout (standard
1955           output).
1956
1957           JSON file output
1958
1959           If name ends in .json the output will be a JSON-file named name.
1960
1961           -.json is special: It will give the file on stdout (standard
1962           output).
1963
1964           Replacement string output file
1965
1966           If name contains a replacement string and the replaced result does
1967           not end in /, then the standard output will be stored in a file
1968           named by this result. Standard error will be stored in the same
1969           file name with '.err' added, and the sequence number will be stored
1970           in the same file name with '.seq' added.
1971
1972           E.g.
1973
1974             parallel --results my_{} echo ::: foo bar baz
1975
1976           will generate the files:
1977
1978             my_bar
1979             my_bar.err
1980             my_bar.seq
1981             my_baz
1982             my_baz.err
1983             my_baz.seq
1984             my_foo
1985             my_foo.err
1986             my_foo.seq
1987
1988           Replacement string output dir
1989
1990           If name contains a replacement string and the replaced result ends
1991           in /, then output files will be stored in the resulting dir.
1992
1993           E.g.
1994
1995             parallel --results my_{}/ echo ::: foo bar baz
1996
1997           will generate the files:
1998
1999             my_bar/seq
2000             my_bar/stderr
2001             my_bar/stdout
2002             my_baz/seq
2003             my_baz/stderr
2004             my_baz/stdout
2005             my_foo/seq
2006             my_foo/stderr
2007             my_foo/stdout
2008
2009           See also: --output-as-files --tag --header --joblog
2010
2011       --resume
2012           Resumes from the last unfinished job.
2013
2014           By reading --joblog or the --results dir GNU parallel will figure
2015           out the last unfinished job and continue from there. As GNU
2016           parallel only looks at the sequence numbers in --joblog, the
2017           input, the command, and --joblog all have to remain unchanged;
2018           otherwise GNU parallel may run wrong commands.
2019
2020           See also: --joblog --results --resume-failed --retries
2021
2022       --resume-failed
2023           Retry all failed and resume from the last unfinished job.
2024
2025           By reading --joblog GNU parallel will figure out the failed jobs
2026           and run those again. After that it will resume the last unfinished
2027           job and continue from there. As GNU parallel only looks at the
2028           sequence numbers in --joblog, the input, the command, and --joblog
2029           all have to remain unchanged; otherwise GNU parallel may run wrong
2030           commands.
2031
2032           See also: --joblog --resume --retry-failed --retries
2033
2034       --retry-failed
2035           Retry all failed jobs in joblog.
2036
2037           By reading --joblog GNU parallel will figure out the failed jobs
2038           and run those again.
2039
2040           --retry-failed ignores the command and arguments on the command
2041           line: It only looks at the joblog.
2042
2043           Differences between --resume, --resume-failed, --retry-failed
2044
2045           In this example exit {= $_%=2 =} will cause every other job to
2046           fail.
2047
2048             timeout -k 1 4 parallel --joblog log -j10 \
2049               'sleep {}; exit {= $_%=2 =}' ::: {10..1}
2050
2051           4 jobs completed. 2 failed:
2052
2053             Seq   [...]   Exitval Signal  Command
2054             10    [...]   1       0       sleep 1; exit 1
2055             9     [...]   0       0       sleep 2; exit 0
2056             8     [...]   1       0       sleep 3; exit 1
2057             7     [...]   0       0       sleep 4; exit 0
2058
2059           --resume does not care about the Exitval, but only looks at Seq. If
2060           the Seq is run, it will not be run again. So if needed, you can
2061           change the command for the seqs not run yet:
2062
2063             parallel --resume --joblog log -j10 \
2064               'sleep .{}; exit {= $_%=2 =}' ::: {10..1}
2065
2066             Seq   [...]   Exitval Signal  Command
2067             [... as above ...]
2068             1     [...]   0       0       sleep .10; exit 0
2069             6     [...]   1       0       sleep .5; exit 1
2070             5     [...]   0       0       sleep .6; exit 0
2071             4     [...]   1       0       sleep .7; exit 1
2072             3     [...]   0       0       sleep .8; exit 0
2073             2     [...]   1       0       sleep .9; exit 1
2074
2075           --resume-failed cares about the Exitval, but also only looks at Seq
2076           to figure out which commands to run. Again this means you can
2077           change the command, but not the arguments. It will run the failed
2078           seqs and the seqs not yet run:
2079
2080             parallel --resume-failed --joblog log -j10 \
2081               'echo {};sleep .{}; exit {= $_%=3 =}' ::: {10..1}
2082
2083             Seq   [...]   Exitval Signal  Command
2084             [... as above ...]
2085             10    [...]   1       0       echo 1;sleep .1; exit 1
2086             8     [...]   0       0       echo 3;sleep .3; exit 0
2087             6     [...]   2       0       echo 5;sleep .5; exit 2
2088             4     [...]   1       0       echo 7;sleep .7; exit 1
2089             2     [...]   0       0       echo 9;sleep .9; exit 0
2090
2091           --retry-failed cares about the Exitval, but takes the command from
2092           the joblog. It ignores any arguments or commands given on the
2093           command line:
2094
2095             parallel --retry-failed --joblog log -j10 this part is ignored
2096
2097             Seq   [...]   Exitval Signal  Command
2098             [... as above ...]
2099             10    [...]   1       0       echo 1;sleep .1; exit 1
2100             6     [...]   2       0       echo 5;sleep .5; exit 2
2101             4     [...]   1       0       echo 7;sleep .7; exit 1
2102
2103           See also: --joblog --resume --resume-failed --retries
2104
2105       --retries n
2106           Try failing jobs n times.
2107
2108           If a job fails, retry it on another computer on which it has not
2109           failed. Do this n times. If there are fewer than n computers in
2110           --sshlogin GNU parallel will re-use all the computers. This is
2111           useful if some jobs fail for no apparent reason (such as network
2112           failure).
2113
2114           n=0 means infinite.
2115
2116           See also: --term-seq --sshlogin
2117
2118       --return filename
2119           Transfer files from remote computers.
2120
2121           --return is used with --sshlogin when the arguments are files on
2122           the remote computers. When processing is done the file filename
2123           will be transferred from the remote computer using rsync and will
2124           be put relative to the default login dir. E.g.
2125
2126             echo foo/bar.txt | parallel --return {.}.out \
2127               --sshlogin server.example.com touch {.}.out
2128
2129           This will transfer the file $HOME/foo/bar.out from the computer
2130           server.example.com to the file foo/bar.out after running touch
2131           foo/bar.out on server.example.com.
2132
2133             parallel -S server --trc out/./{}.out touch {}.out ::: in/file
2134
2135           This will transfer the file in/file.out from the computer server
2136           to the file out/in/file.out after running touch in/file.out on
2137           server.
2138
2139             echo /tmp/foo/bar.txt | parallel --return {.}.out \
2140               --sshlogin server.example.com touch {.}.out
2141
2142           This will transfer the file /tmp/foo/bar.out from the computer
2143           server.example.com to the file /tmp/foo/bar.out after running touch
2144           /tmp/foo/bar.out on server.example.com.
2145
2146           Multiple files can be transferred by repeating the option multiple
2147           times:
2148
2149             echo /tmp/foo/bar.txt | parallel \
2150               --sshlogin server.example.com \
2151               --return {.}.out --return {.}.out2 touch {.}.out {.}.out2
2152
2153           --return is ignored when used with --sshlogin : or when not used
2154           with --sshlogin.
2155
2156           For details on transferring see --transferfile.
2157
2158           See also: --transfer --transferfile --sshlogin --cleanup --workdir
2159
2160       --round-robin
2161       --round
2162           Distribute chunks of standard input in a round robin fashion.
2163
2164           Normally --pipe will give a single block to each instance of the
2165           command. With --round-robin all blocks will be written at random to
2166           commands already running. This is useful if the command takes a
2167           long time to initialize.
2168
2169           With --keep-order and --round-robin the jobslots will get the same
2170           blocks as input in the same order in every run if the input is kept
2171           the same. See details under --keep-order.
2172
2173           --round-robin implies --pipe, except if --pipe-part is given.
2174
2175           See the section: SPREADING BLOCKS OF DATA.
2176
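           A sketch (bigfile.txt is illustrative): start 4 jobs and
           spread the blocks among them; each wc runs only once and
           prints the number of bytes it received:

             cat bigfile.txt | parallel -j4 --pipe --round-robin wc -c
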
2177           See also: --bin --group-by --shard
2178
2179       --rpl 'tag perl expression'
2180           Define replacement string.
2181
2182           Use tag as a replacement string for perl expression. This makes it
2183           possible to define your own replacement strings. GNU parallel's 7
2184           replacement strings are implemented as:
2185
2186             --rpl '{} '
2187             --rpl '{#} 1 $_=$job->seq()'
2188             --rpl '{%} 1 $_=$job->slot()'
2189             --rpl '{/} s:.*/::'
2190             --rpl '{//} $Global::use{"File::Basename"} ||=
2191                         eval "use File::Basename; 1;"; $_ = dirname($_);'
2192             --rpl '{/.} s:.*/::; s:\.[^/.]+$::;'
2193             --rpl '{.} s:\.[^/.]+$::'
2194
2195           The --plus replacement strings are implemented as:
2196
2197             --rpl '{+/} s:/[^/]*$:: || s:.*$::'
2198             --rpl '{+.} s:.*\.:: || s:.*$::'
2199             --rpl '{+..} s:.*\.([^/.]+\.[^/.]+)$:$1: || s:.*$::'
2200             --rpl '{+...} s:.*\.([^/.]+\.[^/.]+\.[^/.]+)$:$1: || s:.*$::'
2201             --rpl '{..} s:\.[^/.]+\.[^/.]+$::'
2202             --rpl '{...} s:\.[^/.]+\.[^/.]+\.[^/.]+$::'
2203             --rpl '{/..} s:.*/::; s:\.[^/.]+\.[^/.]+$::'
2204             --rpl '{/...} s:.*/::; s:\.[^/.]+\.[^/.]+\.[^/.]+$::'
2205             --rpl '{choose_k}
2206                    for $t (2..$#arg){ if($arg[$t-1] ge $arg[$t]) { skip() } }'
2207             --rpl '{##} 1 $_=total_jobs()'
2208             --rpl '{0%} 1 $f=1+int((log($Global::max_jobs_running||1)/
2209                                     log(10))); $_=sprintf("%0${f}d",slot())'
2210             --rpl '{0#} 1 $f=1+int((log(total_jobs())/log(10)));
2211                         $_=sprintf("%0${f}d",seq())'
2212
2213             --rpl '{:-([^}]+?)} $_ ||= $$1'
2214             --rpl '{:(\d+?)} substr($_,0,$$1) = ""'
2215             --rpl '{:(\d+?):(\d+?)} $_ = substr($_,$$1,$$2);'
2216             --rpl '{#([^#}][^}]*?)} $nongreedy=::make_regexp_ungreedy($$1);
2217                                     s/^$nongreedy(.*)/$1/;'
2218             --rpl '{##([^#}][^}]*?)} s/^$$1//;'
2219             --rpl '{%([^}]+?)} $nongreedy=::make_regexp_ungreedy($$1);
2220                                s/(.*)$nongreedy$/$1/;'
2221             --rpl '{%%([^}]+?)} s/$$1$//;'
2222             --rpl '{/([^}]+?)/([^}]*?)} s/$$1/$$2/;'
2223             --rpl '{^([^}]+?)} s/^($$1)/uc($1)/e;'
2224             --rpl '{^^([^}]+?)} s/($$1)/uc($1)/eg;'
2225             --rpl '{,([^}]+?)} s/^($$1)/lc($1)/e;'
2226             --rpl '{,,([^}]+?)} s/($$1)/lc($1)/eg;'
2227
2228             --rpl '{slot} 1 $_="\${PARALLEL_JOBSLOT}";uq()'
2229             --rpl '{host} 1 $_="\${PARALLEL_SSHHOST}";uq()'
2230             --rpl '{sshlogin} 1 $_="\${PARALLEL_SSHLOGIN}";uq()'
2231             --rpl '{hgrp} 1 $_="\${PARALLEL_HOSTGROUPS}";uq()'
2232             --rpl '{agrp} 1 $_="\${PARALLEL_ARGHOSTGROUPS}";uq()'
2233
2234           If the user-defined replacement string starts with '{' it can also
2235           be used as a positional replacement string (like {2.}).
2236
2237           It is recommended to only change $_ but you have full access to all
2238           of GNU parallel's internal functions and data structures.
2239
2240           Here are a few examples:
2241
2242             Is the job sequence even or odd?
2243             --rpl '{odd} $_ = seq() % 2 ? "odd" : "even"'
2244             Pad job sequence with leading zeros to get equal width
2245             --rpl '{0#} $f=1+int("".(log(total_jobs())/log(10)));
2246               $_=sprintf("%0${f}d",seq())'
2247             Job sequence counting from 0
2248             --rpl '{#0} $_ = seq() - 1'
2249             Job slot counting from 2
2250             --rpl '{%1} $_ = slot() + 1'
2251             Remove all extensions
2252             --rpl '{:} s:(\.[^/]+)*$::'
2253
2254           You can have dynamic replacement strings by including parenthesis
2255           in the replacement string and adding a regular expression between
2256           the parenthesis. The matching string will be inserted as $$1:
2257
2258             parallel --rpl '{%(.*?)} s/$$1//' echo {%.tar.gz} ::: my.tar.gz
2259             parallel --rpl '{:%(.+?)} s:$$1(\.[^/]+)*$::' \
2260               echo {:%_file} ::: my_file.tar.gz
2261             parallel -n3 --rpl '{/:%(.*?)} s:.*/(.*)$$1(\.[^/]+)*$:$1:' \
2262               echo job {#}: {2} {2.} {3/:%_1} ::: a/b.c c/d.e f/g_1.h.i
2263
2264           You can even use multiple matches:
2265
2266             parallel --rpl '{/(.+?)/(.*?)} s/$$1/$$2/;' \
2267               echo {/replacethis/withthis} {/b/C} ::: a_replacethis_b
2268
2269             parallel --rpl '{(.*?)/(.*?)} $_="$$2$_$$1"' \
2270               echo {swap/these} ::: -middle-
2271
2272           See also: {=perl expression=} --parens
2273
2274       --rsync-opts options
2275           Options to pass on to rsync.
2276
2277           Setting --rsync-opts takes precedence over setting the environment
2278           variable $PARALLEL_RSYNC_OPTS.
2279
2280       --max-chars max-chars
2281       -s max-chars
2282           Limit length of command.
2283
2284           Use at most max-chars characters per command line, including the
2285           command and initial-arguments and the terminating nulls at the ends
2286           of the argument strings.  The largest allowed value is system-
2287           dependent, and is calculated as the argument length limit for exec,
2288           less the size of your environment.  The default value is the
2289           maximum.
2290
2291           max-chars can be postfixed with K, M, G, T, P, k, m, g, t, or p
2292           (see UNIT PREFIX).
2293
2294           Implies -X unless -m or --xargs is set.
2295
2296           See also: -X -m --xargs --max-line-length-allowed --show-limits
2297
2298       --show-limits
2299           Display limits given by the operating system.
2300
2301           Display the limits on the command-line length which are imposed by
2302           the operating system and the -s option.  Pipe the input from
2303           /dev/null (and perhaps specify --no-run-if-empty) if you don't want
2304           GNU parallel to do anything.
2305
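           For example, a sketch that prints the limits without running
           any jobs:

             parallel --show-limits echo < /dev/null
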
2306           See also: --max-chars --max-line-length-allowed --version
2307
2308       --semaphore
2309           Work as a counting semaphore.
2310
2311           --semaphore will cause GNU parallel to start command in the
2312           background. When the number of jobs given by --jobs is reached, GNU
2313           parallel will wait for one of these to complete before starting
2314           another command.
2315
2316           --semaphore implies --bg unless --fg is specified.
2317
2318           The command sem is an alias for parallel --semaphore.
2319
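           A sketch using the sem alias (*.log is illustrative): start
           at most 4 gzips at a time from a shell loop and wait for all
           of them to finish:

             for f in *.log ; do
               sem -j4 gzip "$f"
             done
             sem --wait
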
2320           See also: man sem --bg --fg --semaphore-name --semaphore-timeout
2321           --wait
2322
2323       --semaphore-name name
2324       --id name
2325           Use name as the name of the semaphore.
2326
2327           The default is the name of the controlling tty (output from tty).
2328
2329           The default normally works as expected when used interactively, but
2330           when used in a script, name should be set. $$ or my_task_name is
2331           often a good value.
2332
2333           The semaphore is stored in ~/.parallel/semaphores/
2334
2335           Implies --semaphore.
2336
2337           See also: man sem --semaphore
2338
2339       --semaphore-timeout secs
2340       --st secs
2341           If secs > 0: If the semaphore is not released within secs seconds,
2342           take it anyway.
2343
2344           If secs < 0: If the semaphore is not released within secs seconds,
2345           exit.
2346
2347           secs is in seconds, but can be postfixed with s, m, h, or d (see
2348           the section TIME POSTFIXES).
2349
2350           Implies --semaphore.
2351
2352           See also: man sem
2353
2354       --seqreplace replace-str
2355           Use the replacement string replace-str instead of {#} for job
2356           sequence number.
2357
2358           See also: {#}
2359
2360       --session
2361           Record names in current environment in $PARALLEL_IGNORED_NAMES and
2362           exit.
2363
2364           Only used with env_parallel. Aliases, functions, and variables with
2365           names in $PARALLEL_IGNORED_NAMES will not be copied.  So you should
2366           set variables/functions you want copied after running --session.
2367
2368           It is similar to --record-env, but only for this session.
2369
2370           Only supported in Ash, Bash, Dash, Ksh, Sh, and Zsh.
2371
2372           See also: --env --record-env env_parallel
2373
2374       --shard shardexpr
2375           Use shardexpr as shard key and shard input to the jobs.
2376
2377           shardexpr is [column number|column name] [perlexpression] e.g.:
2378
2379             3
2380             Address
2381             3 $_%=100
2382             Address s/\d//g
2383
2384           Each input line is split using --colsep. The string of the column
2385           is put into $_, the perl expression is executed, the resulting
2386           string is hashed so that all lines with a given value are given to
2387           the same job slot.
2388
2389           This is similar to sharding in databases.
2390
2391           The performance is on the order of 100K rows per second. Faster if
2392           the shardcol is small (<10), slower if it is big (>100).
2393
2394           --shard requires --pipe and a fixed numeric value for --jobs.
2395
2396           See the section: SPREADING BLOCKS OF DATA.
2397
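           A sketch (access.tsv is illustrative): shard on the first
           TAB-separated column so that all lines with the same key go
           to the same one of the 4 jobs, each of which counts its
           lines:

             cat access.tsv |
               parallel --pipe --colsep '\t' --shard 1 -j4 wc -l
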
2398           See also: --bin --group-by --round-robin
2399
2400       --shebang
2401       --hashbang
2402           GNU parallel can be called as a shebang (#!) command as the first
2403           line of a script. The content of the file will be treated as
2404           the input source.
2405
2406           Like this:
2407
2408             #!/usr/bin/parallel --shebang -r wget
2409
2410             https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2411             https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2412             https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2413
2414           --shebang must be set as the first option.
2415
2416           On FreeBSD env is needed:
2417
2418             #!/usr/bin/env -S parallel --shebang -r wget
2419
2420             https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2421             https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2422             https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2423
2424           There are many limitations of shebang (#!) depending on your
2425           operating system. See details on
2426           https://www.in-ulm.de/~mascheck/various/shebang/
2427
2428           See also: --shebang-wrap
2429
2430       --shebang-wrap
2431           GNU parallel can parallelize scripts by wrapping the shebang line.
2432           If the program can be run like this:
2433
2434             cat arguments | parallel the_program
2435
2436           then the script can be changed to:
2437
2438             #!/usr/bin/parallel --shebang-wrap /original/parser --options
2439
2440           E.g.
2441
2442             #!/usr/bin/parallel --shebang-wrap /usr/bin/python
2443
2444           If the program can be run like this:
2445
2446             cat data | parallel --pipe the_program
2447
2448           then the script can be changed to:
2449
2450             #!/usr/bin/parallel --shebang-wrap --pipe /orig/parser --opts
2451
2452           E.g.
2453
2454             #!/usr/bin/parallel --shebang-wrap --pipe /usr/bin/perl -w
2455
2456           --shebang-wrap must be set as the first option.
2457
2458           See also: --shebang
2459
2460       --shell-completion shell
2461           Generate shell completion code for interactive shells.
2462
2463           Supported shells: bash zsh.
2464
2465           Use auto as shell to automatically detect running shell.
2466
2467           Activate the completion code with:
2468
2469             zsh% eval "$(parallel --shell-completion auto)"
2470             bash$ eval "$(parallel --shell-completion auto)"
2471
2472           Or put this in `/usr/share/zsh/site-functions/_parallel`, then
2473           run `compinit` to generate `~/.zcompdump`:
2474
2475             #compdef parallel
2476
2477             (( $+functions[_comp_parallel] )) ||
2478               eval "$(parallel --shell-completion auto)" &&
2479               _comp_parallel
2480
2481       --shell-quote
2482           Does not run the command but quotes it. Useful for making quoted
2483           composed commands for GNU parallel.
2484
2485           Multiple --shell-quote will quote the string multiple times, so
2486           parallel --shell-quote | parallel --shell-quote can be written as
2487           parallel --shell-quote --shell-quote.
2488
2489           See also: --quote
2490
2491       --shuf
2492           Shuffle jobs.
2493
2494           When having multiple input sources it is hard to randomize jobs.
2495           --shuf will generate all jobs, and shuffle them before running
2496           them. This is useful to get a quick preview of the results before
2497           running the full batch.
2498
2499           Combined with --halt soon,done=1% you can run a random 1% sample of
2500           all jobs:
2501
2502             parallel --shuf --halt soon,done=1% echo ::: {1..100} ::: {1..100}
2503
2504           See also: --halt
2505
2506       --skip-first-line
2507           Do not use the first line of input (used by GNU parallel itself
2508           when called with --shebang).
2509
2510       --sql DBURL (obsolete)
2511           Use --sql-master instead.
2512
2513       --sql-master DBURL
2514           Submit jobs via SQL server. DBURL must point to a table, which will
2515           contain the same information as --joblog, the values from the input
2516           sources (stored in columns V1 .. Vn), and the output (stored in
2517           columns Stdout and Stderr).
2518
2519           If DBURL is prepended with '+' GNU parallel assumes the table is
2520           already made with the correct columns and appends the jobs to it.
2521
2522           If DBURL is not prepended with '+' the table will be dropped and
2523           created with the correct number of V-columns.
2524
2525           --sqlmaster does not run any jobs, but it creates the values for
2526           the jobs to be run. One or more --sqlworker must be run to actually
2527           execute the jobs.
2528
2529           If --wait is set, GNU parallel will wait for the jobs to complete.
2530
2531           The format of a DBURL is:
2532
2533             [sql:]vendor://[[user][:pwd]@][host][:port]/[db]/table
2534
2535           E.g.
2536
2537             sql:mysql://hr:hr@localhost:3306/hrdb/jobs
2538             mysql://scott:tiger@my.example.com/pardb/paralleljobs
2539             sql:oracle://scott:tiger@ora.example.com/xe/parjob
2540             postgresql://scott:tiger@pg.example.com/pgdb/parjob
2541             pg:///parjob
2542             sqlite3:///%2Ftmp%2Fpardb.sqlite/parjob
2543             csv:///%2Ftmp%2Fpardb/parjob
2544
2545           Notice how / in the path of sqlite and CSV must be encoded as %2F,
2546           except the last / in CSV which must be a /.
2547
2548           It can also be an alias from ~/.sql/aliases:
2549
2550             :myalias mysql:///mydb/paralleljobs
2551
2552           See also: --sql-and-worker --sql-worker --joblog
2553
2554       --sql-and-worker DBURL
2555           Shorthand for: --sql-master DBURL --sql-worker DBURL.
2556
2557           See also: --sql-master --sql-worker
2558
2559       --sql-worker DBURL
2560           Execute jobs via SQL server. Read the input sources variables from
2561           the table pointed to by DBURL. The command on the command line
2562           should be the same as given by --sqlmaster.
2563
2564           If you have more than one --sqlworker, jobs may be run more than
2565           once.
2566
2567           If --sqlworker runs on the local machine, the hostname in the SQL
2568           table will not be ':' but instead the hostname of the machine.
2569
2570           See also: --sql-master --sql-and-worker
2571
2572       --ssh sshcommand
2573           GNU parallel defaults to using ssh for remote access. This can be
2574           overridden with --ssh. It can also be set on a per server basis
2575           with --sshlogin.
2576
2577           See also: --sshlogin
2578
2579       --ssh-delay duration
2580           Delay starting next ssh by duration.
2581
2582           GNU parallel will not start another ssh for the next duration.
2583
2584           duration is in seconds, but can be postfixed with s, m, h, or d.
2585
2586           See also: TIME POSTFIXES --sshlogin --delay
2587
2588       --sshlogin
2589       [@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]
2590       --sshlogin @hostgroup
2591       -S
2592       [@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]
2593       -S @hostgroup
2594           Distribute jobs to remote computers.
2595
2596           The jobs will be run on a list of remote computers.
2597
2598           If hostgroups is given, the sshlogin will be added to that
2599           hostgroup. Multiple hostgroups are separated by '+'. The sshlogin
2600           will always be added to a hostgroup named the same as sshlogin.
2601
2602           If only the @hostgroup is given, only the sshlogins in that
2603           hostgroup will be used. Multiple @hostgroup can be given.
2604
2605           GNU parallel will determine the number of CPUs on the remote
2606           computers and run the number of jobs as specified by -j.  If the
2607           number ncpus is given GNU parallel will use this number for number
2608           of CPUs on the host. Normally ncpus will not be needed.
2609
2610           An sshlogin is of the form:
2611
2612             [sshcommand [options]] [username[:password]@]hostname
2613
2614           If password is given, sshpass will be used. Otherwise the sshlogin
2615           must not require a password (ssh-agent and ssh-copy-id may help
2616           with that).
2617
2618           If the hostname is an IPv6 address, the port can be given separated
2619           with p or #. If the address is enclosed in [] you can also use :.
2620           E.g. ::1p2222 ::1#2222 [::1]:2222
2621
2622           The sshlogin ':' is special, it means 'no ssh' and will therefore
2623           run on the local computer.
2624
2625           The sshlogin '..' is special, it reads sshlogins from
2626           ~/.parallel/sshloginfile or $XDG_CONFIG_HOME/parallel/sshloginfile
2627
2628           The sshlogin '-' is special, too, it reads sshlogins from stdin
2629           (standard input).
2630
2631           To specify more sshlogins separate the sshlogins by comma, newline
2632           (in the same string), or repeat the options multiple times.
2633
2634           GNU parallel splits on , (comma) so if your sshlogin contains ,
2635           (comma) you need to replace it with \, or ,,
2636
2637           For examples: see --sshloginfile.
2638
2639           The remote host must have GNU parallel installed.
2640
2641           --sshlogin is known to cause problems with -m and -X.
2642
2643           See also: --basefile --transferfile --return --cleanup --trc
2644           --sshloginfile --workdir --filter-hosts --ssh
2645
2646       --sshloginfile filename
2647       --slf filename
2648           File with sshlogins. The file consists of sshlogins on separate
2649           lines. Empty lines and lines starting with '#' are ignored.
2650           Example:
2651
2652             server.example.com
2653             username@server2.example.com
2654             8/my-8-cpu-server.example.com
2655             2/my_other_username@my-dualcore.example.net
2656             # This server has SSH running on port 2222
2657             ssh -p 2222 server.example.net
2658             4/ssh -p 2222 quadserver.example.net
2659             # Use a different ssh program
2660             myssh -p 2222 -l myusername hexacpu.example.net
2661             # Use a different ssh program with default number of CPUs
2662             //usr/local/bin/myssh -p 2222 -l myusername hexacpu
2663             # Use a different ssh program with 6 CPUs
2664             6//usr/local/bin/myssh -p 2222 -l myusername hexacpu
2665             # Assume 16 CPUs on the local computer
2666             16/:
2667             # Put server1 in hostgroup1
2668             @hostgroup1/server1
2669             # Put myusername@server2 in hostgroup1+hostgroup2
2670             @hostgroup1+hostgroup2/myusername@server2
2671             # Force 4 CPUs and put 'ssh -p 2222 server3' in hostgroup1
2672             @hostgroup1/4/ssh -p 2222 server3
2673
2674           When using a different ssh program the last argument must be the
2675           hostname.
2676
2677           Multiple --sshloginfile are allowed.
2678
2679           GNU parallel will first look for the file in the current dir; if
2680           that fails it looks for the file in ~/.parallel.
2681
2682           The sshloginfile '..' is special, it reads sshlogins from
2683           ~/.parallel/sshloginfile
2684
2685           The sshloginfile '.' is special, it reads sshlogins from
2686           /etc/parallel/sshloginfile
2687
2688           The sshloginfile '-' is special, too, it reads sshlogins from stdin
2689           (standard input).
2690
2691           If the sshloginfile is changed it will be re-read when a job
2692           finishes, though at most once per second. This makes it possible to
2693           add and remove hosts while running.
2694
2695           This can be used to have a daemon that updates the sshloginfile to
2696           only contain servers that are up:
2697
2698               cp original.slf tmp2.slf
2699               while [ 1 ] ; do
2700                 nice parallel --nonall -j0 -k --slf original.slf \
2701                   --tag echo | perl -pe 's/\t$//' > tmp.slf
2702                 if ! diff tmp.slf tmp2.slf >/dev/null; then
2703                   mv tmp.slf tmp2.slf
2704                 fi
2705                 sleep 10
2706               done &
2707               parallel --slf tmp2.slf ...
2708
2709           See also: --filter-hosts
2710
2711       --slotreplace replace-str
2712           Use the replacement string replace-str instead of {%} for job slot
2713           number.
2714
2715           See also: {%}
2716
2717       --silent
2718           Silent.
2719
2720           The job to be run will not be printed. This is the default.  Can be
2721           reversed with -v.
2722
2723           See also: -v
2724
2725       --template file=repl
2726       --tmpl file=repl
2727           Replace replacement strings in file and save it in repl.
2728
2729           All replacement strings in the contents of file will be replaced.
2730           All replacement strings in the name repl will be replaced.
2731
2732           With --cleanup the new file will be removed when the job is done.
2733
2734           If my.tmpl contains this:
2735
2736             Xval: {x}
2737             Yval: {y}
2738             FixedValue: 9
2739             # x with 2 decimals
2740             DecimalX: {=x $_=sprintf("%.2f",$_) =}
2741             TenX: {=x $_=$_*10 =}
2742             RandomVal: {=1 $_=rand() =}
2743
2744           it can be used like this:
2745
2746             myprog() { echo Using "$@"; cat "$@"; }
2747             export -f myprog
2748             parallel --cleanup --header : --tmpl my.tmpl={#}.t myprog {#}.t \
2749               ::: x 1.234 2.345 3.45678 ::: y 1 2 3
2750
2751           See also: {} --cleanup
2752
2753       --tty
2754           Open terminal tty.
2755
2756           If GNU parallel is used for starting a program that accesses the
2757           tty (such as an interactive program) then this option may be
2758           needed. It will default to starting only one job at a time (i.e.
2759           -j1), not buffer the output (i.e. -u), and it will open a tty for
2760           the job.
2761
2762           You can of course override -j1 and -u.
2763
2764           Using --tty unfortunately means that GNU parallel cannot kill the
2765           jobs (with --timeout, --memfree, or --halt). This is due to GNU
2766           parallel giving each child its own process group, which is then
2767           killed. Process groups are dependent on the tty.
2768
2769           See also: --ungroup --open-tty
2770
2771       --tag
2772           Tag lines with arguments.
2773
2774           Each output line will be prepended with the arguments and TAB (\t).
2775           When combined with --onall or --nonall the lines will be prepended
2776           with the sshlogin instead.
2777
2778           --tag is ignored when using -u.
2779
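               For example:

                 parallel -k --tag echo foo-{} ::: A B

               prints the argument, a TAB, and the job's output:

                 A       foo-A
                 B       foo-B
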
2780           See also: --tagstring --ctag
2781
2782       --tagstring str
2783           Tag lines with a string.
2784
2785           Each output line will be prepended with str and TAB (\t). str can
2786           contain replacement strings such as {}.
2787
2788           --tagstring is ignored when using -u, --onall, and --nonall.
2789
2790           See also: --tag --ctagstring
2791
2792       --tee
2793           Pipe all data to all jobs.
2794
2795           Used with --pipe/--pipe-part and :::.
2796
2797             seq 1000 | parallel --pipe --tee -v wc {} ::: -w -l -c
2798
2799           How many numbers in 1..1000 contain 0..9, and how many bytes do
2800           they fill:
2801
2802             seq 1000 | parallel --pipe --tee --tag \
2803               'grep {1} | wc {2}' ::: {0..9} ::: -l -c
2804
2805           How many words contain a..z and how many bytes do they fill?
2806
2807             parallel -a /usr/share/dict/words --pipe-part --tee --tag \
2808               'grep {1} | wc {2}' ::: {a..z} ::: -l -c
2809
2810           See also: ::: --pipe --pipe-part
2811
2812       --term-seq sequence
2813           Termination sequence.
2814
2815           When a job is killed due to --timeout, --memfree, --halt, or
2816           abnormal termination of GNU parallel, sequence determines how the
2817           job is killed. The default is:
2818
2819               TERM,200,TERM,100,TERM,50,KILL,25
2820
2821           which sends a TERM signal, waits 200 ms, sends another TERM signal,
2822           waits 100 ms, sends another TERM signal, waits 50 ms, sends a KILL
2823           signal, waits 25 ms, and exits. GNU parallel detects if a process
2824           dies before the waiting time is up.
2825
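               For example, to send INT first and allow 2 seconds before
               escalating to TERM and then KILL (my_server and its arguments
               are hypothetical):

                 parallel --term-seq INT,2000,TERM,1000,KILL,25 --timeout 10 \
                   my_server ::: config1 config2
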
2826           See also: --halt --timeout --memfree
2827
2828       --total-jobs jobs
2829       --total jobs
2830           Provide the total number of jobs for computing ETA, which is also
2831           used for --bar.
2832
2833           Without --total-jobs GNU Parallel will read all jobs before
2834           starting a job. --total-jobs is useful if the input is generated
2835           slowly.
2836
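               For example (a sketch; slow_file_lister is a hypothetical
               program that prints one filename per second):

                 slow_file_lister | parallel --bar --total-jobs 500 gzip {}
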
2837           See also: --bar --eta
2838
2839       --tmpdir dirname
2840           Directory for temporary files.
2841
2842           GNU parallel normally buffers output into temporary files in /tmp.
2843           By setting --tmpdir you can use a different dir for the files.
2844           Setting --tmpdir is equivalent to setting $TMPDIR.
2845
2846           See also: --compress $TMPDIR $PARALLEL_REMOTE_TMPDIR
2847
2848       --tmux (Long beta testing)
2849           Use tmux for output. Start a tmux session and run each job in a
2850           window in that session. No other output will be produced.
2851
2852           See also: --tmuxpane
2853
2854       --tmuxpane (Long beta testing)
2855           Use tmux for output but put output into panes in the first window.
2856           Useful if you want to monitor the progress of less than 100
2857           concurrent jobs.
2858
2859           See also: --tmux
2860
2861       --timeout duration
2862           Time out for command. If the command runs for longer than duration
2863           seconds it will get killed as per --term-seq.
2864
2865           If duration is followed by a % then the timeout will dynamically be
2866           computed as a percentage of the median average runtime of
2867           successful jobs. Only values > 100% will make sense.
2868
2869           duration is in seconds, but can be postfixed with s, m, h, or d.
2870
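               For example, jobs sleeping longer than 3 seconds are killed:

                 parallel --timeout 3 'sleep {}; echo {}' ::: 1 2 4 8

               Only 1 and 2 are printed.
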
2871           See also: TIME POSTFIXES --term-seq --retries
2872
2873       --verbose
2874       -t  Print the job to be run on stderr (standard error).
2875
2876           See also: -v --interactive
2877
2878       --transfer
2879           Transfer files to remote computers.
2880
2881           Shorthand for: --transferfile {}.
2882
2883           See also: --transferfile.
2884
2885       --transferfile filename
2886       --tf filename
2887           Transfer filename to remote computers.
2888
2889           --transferfile is used with --sshlogin to transfer files to the
2890           remote computers. The files will be transferred using rsync and
2891           will be put relative to the work dir.
2892
2893           The filename will normally contain a replacement string.
2894
2895           If the path contains /./ the remaining path will be relative to the
2896           work dir (for details: see rsync). If the work dir is /home/user,
2897           the transferring will be as follows:
2898
2899             /tmp/foo/bar   => /tmp/foo/bar
2900             tmp/foo/bar    => /home/user/tmp/foo/bar
2901             /tmp/./foo/bar => /home/user/foo/bar
2902             tmp/./foo/bar  => /home/user/foo/bar
2903
2904           Examples
2905
2906           This will transfer the file foo/bar.txt to the computer
2907           server.example.com to the file $HOME/foo/bar.txt before running wc
2908           foo/bar.txt on server.example.com:
2909
2910             echo foo/bar.txt | parallel --transferfile {} \
2911               --sshlogin server.example.com wc
2912
2913           This will transfer the file /tmp/foo/bar.txt to the computer
2914           server.example.com to the file /tmp/foo/bar.txt before running wc
2915           /tmp/foo/bar.txt on server.example.com:
2916
2917             echo /tmp/foo/bar.txt | parallel --transferfile {} \
2918               --sshlogin server.example.com wc
2919
2920           This will transfer the file /tmp/foo/bar.txt to the computer
2921           server.example.com to the file foo/bar.txt before running wc
2922           ./foo/bar.txt on server.example.com:
2923
2924             echo /tmp/./foo/bar.txt | parallel --transferfile {} \
2925               --sshlogin server.example.com wc {= s:.*/\./:./: =}
2926
2927           --transferfile is often used with --return and --cleanup. A
2928           shorthand for --transferfile {} is --transfer.
2929
2930           --transferfile is ignored when used with --sshlogin : or when not
2931           used with --sshlogin.
2932
2933           See also: --workdir --sshlogin --basefile --return --cleanup
2934
2935       --trc filename
2936           Transfer, Return, Cleanup. Shorthand for: --transfer --return
2937           filename --cleanup
2938
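               For example (a sketch; server and the input files are
               hypothetical), compress files on a remote server and fetch the
               compressed files back:

                 parallel -S server --trc {}.gz 'gzip -9 {}' \
                   ::: bigfile1 bigfile2
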
2939           See also: --transfer --return --cleanup
2940
2941       --trim <n|l|r|lr|rl>
2942           Trim white space in input.
2943
2944           n   No trim. Input is not modified. This is the default.
2945
2946           l   Left trim. Remove white space from start of input. E.g. " a bc
2947               " -> "a bc ".
2948
2949           r   Right trim. Remove white space from end of input. E.g. " a bc "
2950               -> " a bc".
2951
2952           lr
2953           rl  Both trim. Remove white space from both start and end of input.
2954               E.g. " a bc " -> "a bc". This is the default if --colsep is
2955               used.
2956
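               For example:

                 echo ' a bc ' | parallel --trim lr echo '[{}]'

               prints [a bc].
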
2957           See also: --no-run-if-empty {} --colsep
2958
2959       --ungroup
2960       -u  Ungroup output.
2961
2962           Output is printed as soon as possible and bypasses GNU parallel
2963           internal processing. This may cause output from different commands
2964           to be mixed, so it should only be used if you do not care about the
2965           output. Compare these:
2966
2967             seq 4 | parallel -j0 \
2968               'sleep {};echo -n start{};sleep {};echo {}end'
2969             seq 4 | parallel -u -j0 \
2970               'sleep {};echo -n start{};sleep {};echo {}end'
2971
2972           It also disables --tag. GNU parallel outputs faster with -u.
2973           Compare the speeds of these:
2974
2975             parallel seq ::: 300000000 >/dev/null
2976             parallel -u seq ::: 300000000 >/dev/null
2977             parallel --line-buffer seq ::: 300000000 >/dev/null
2978
2979           Can be reversed with --group.
2980
2981           See also: --line-buffer --group
2982
2983       --extensionreplace replace-str
2984       --er replace-str
2985           Use the replacement string replace-str instead of {.} for input
2986           line without extension.
2987
2988           See also: {.}
2989
2990       --use-sockets-instead-of-threads
2991           See also: --use-cores-instead-of-threads
2992
2993       --use-cores-instead-of-threads
2994       --use-cpus-instead-of-cores (obsolete)
2995           Determine how GNU parallel counts the number of CPUs.
2996
2997           GNU parallel uses this number when the number of jobslots (--jobs)
2998           is computed relative to the number of CPUs (e.g. 100% or +1).
2999
3000           CPUs can be counted in three different ways:
3001
3002           sockets The number of filled CPU sockets (i.e. the number of
3003                   physical chips).
3004
3005           cores   The number of physical cores (i.e. the number of physical
3006                   compute cores).
3007
3008           threads The number of hyperthreaded cores (i.e. the number of
3009                   virtual cores - with some of them possibly being
3010                   hyperthreaded).
3011
3012           Normally the number of CPUs is computed as the number of CPU
3013           threads. With --use-sockets-instead-of-threads or
3014           --use-cores-instead-of-threads you can force it to be computed as
3015           the number of filled sockets or number of cores instead.
3016
3017           Most users will not need these options.
3018
3019           --use-cpus-instead-of-cores is a (misleading) alias for
3020           --use-sockets-instead-of-threads and is kept for backwards
3021           compatibility.
3022
3023           See also: --number-of-threads --number-of-cores --number-of-sockets
3024
3025       -v  Verbose.
3026
3027           Print the job to be run on stdout (standard output). Can be
3028           reversed with --silent.
3029
3030           Use -v -v to print the wrapping ssh command when running remotely.
3031
3032           See also: -t
3033
3034       --version
3035       -V  Print the version of GNU parallel and exit.
3036
3037       --workdir mydir
3038       --wd mydir
3039           Jobs will be run in the dir mydir. The default is the current dir
3040           for the local machine, and the login dir for remote computers.
3041
3042           Files transferred using --transferfile and --return will be
3043           relative to mydir on remote computers.
3044
3045           The special mydir value ... will create working dirs under
3046           ~/.parallel/tmp/. If --cleanup is given these dirs will be removed.
3047
3048           The special mydir value . uses the current working dir.  If the
3049           current working dir is beneath your home dir, the value . is
3050           treated as the relative path to your home dir. This means that if
3051           your home dir is different on remote computers (e.g. if your login
3052           is different) the relative path will still be relative to your home
3053           dir.
3054
3055           To see the difference try:
3056
3057             parallel -S server pwd ::: ""
3058             parallel --wd . -S server pwd ::: ""
3059             parallel --wd ... -S server pwd ::: ""
3060
3061           mydir can contain GNU parallel's replacement strings.
3062
3063       --wait
3064           Wait for all commands to complete.
3065
3066           Used with --semaphore or --sqlmaster.
3067
3068           See also: man sem
3069
3070       -X  Multiple arguments with context replace. Insert as many arguments
3071           as the command line length permits. If multiple jobs are being run
3072           in parallel: distribute the arguments evenly among the jobs. Use
3073           -j1 to avoid this.
3074
3075           If {} is not used the arguments will be appended to the line.  If
3076           {} is used as part of a word (like pic{}.jpg) then the whole word
3077           will be repeated. If {} is used multiple times each {} will be
3078           replaced with the arguments.
3079
3080           Normally -X will do the right thing, whereas -m can give unexpected
3081           results if {} is used as part of a word.
3082
3083           Support for -X with --sshlogin is limited and may fail.
3084
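               For example:

                 parallel -X -j1 echo pic{}.jpg ::: 1 2 3

               runs a single command (assuming the arguments fit on one
               line): echo pic1.jpg pic2.jpg pic3.jpg
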
3085           See also: -m
3086
3087       --exit
3088       -x  Exit if the size (see the -s option) is exceeded.
3089
3090       --xargs
3091           Multiple arguments. Insert as many arguments as the command line
3092           length permits.
3093
3094           If {} is not used the arguments will be appended to the line.  If
3095           {} is used multiple times each {} will be replaced with all the
3096           arguments.
3097
3098           Support for --xargs with --sshlogin is limited and may fail.
3099
3100           See also: -X
3101

EXAMPLES

3103       See: man parallel_examples
3104

SPREADING BLOCKS OF DATA

3106       --round-robin, --pipe-part, --shard, --bin and --group-by are all
3107       specialized versions of --pipe.
3108
3109       In the following n is the number of jobslots given by --jobs. A record
3110       starts with --recstart and ends with --recend. It is typically a full
3111       line. A chunk is a number of full records that is approximately the
3112       size of a block. A block can contain half records, a chunk cannot.
3113
3114       --pipe starts one job per chunk. It reads blocks from stdin (standard
3115       input). It finds a record end near a block border and passes a chunk to
3116       the program.
3117
3118       --pipe-part starts one job per chunk - just like normal --pipe. It
3119       first finds record endings near all block borders in the file and then
3120       starts the jobs. By using --block -1 it will set the block size to
3121       size-of-file/n. Used this way it will start n jobs in total.
3122
3123       --round-robin starts n jobs in total. It reads a block and passes a
3124       chunk to whichever job is ready to read. It does not parse the content
3125       except for identifying where a record ends to make sure it only passes
3126       full records.
3127
3128       --shard starts n jobs in total. It parses each line to read the string
3129       in the given column. Based on this string the line is passed to one of
3130       the n jobs. All lines having this string will be given to the same
3131       jobslot.
3132
3133       --bin works like --shard but the value of the column must be numeric
3134       and is the jobslot number it will be passed to. If the value is bigger
3135       than n, then n will be subtracted from the value until the value is
3136       smaller than or equal to n.
3137
3138       --group-by starts one job per chunk. Record borders are not given by
3139       --recend/--recstart. Instead a record is defined by a group of lines
3140       having the same string in a given column. So the string of a given
3141       column changes at a chunk border. With --pipe every line is parsed,
3142       with --pipe-part only a few lines are parsed to find the chunk border.
3143
3144       --group-by can be combined with --round-robin or --pipe-part.
3145
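           As an illustration (a minimal sketch; numbers.txt is a
           hypothetical input file), split a file into one chunk per jobslot
           with --pipe-part, and spread blocks to whichever job is ready with
           --round-robin:

             seq 1000000 > numbers.txt
             # one chunk per jobslot: block size = size-of-file/n
             parallel -a numbers.txt --pipe-part --block -1 wc -l
             # n jobs; blocks go to whichever job is ready to read
             seq 1000000 | parallel --pipe --round-robin -j4 wc -l
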

TIME POSTFIXES

3147       Arguments that give a duration are given in seconds, but can be
3148       expressed as floats postfixed with s, m, h, or d which would multiply
3149       the float by 1, 60, 60*60, or 60*60*24. Thus these are equivalent:
3150       100000 and 1d3.5h16.6m4s.
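
           For example, these two specify the same 90 second timeout:

             parallel --timeout 90 'sleep {}' ::: 10 200
             parallel --timeout 1.5m 'sleep {}' ::: 10 200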
3151

UNIT PREFIX

3153       Many numerical arguments in GNU parallel can be postfixed with K, M, G,
3154       T, P, k, m, g, t, or p which would multiply the number with 1024,
3155       1048576, 1073741824, 1099511627776, 1125899906842624, 1000, 1000000,
3156       1000000000, 1000000000000, or 1000000000000000, respectively.
3157
3158       You can even give it as a math expression. E.g. 1000000 can be written
3159       as 1M-12*2.024*2k.
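
           For example, these block sizes are 10*1048576 bytes and 10*1000000
           bytes respectively (bigfile is a hypothetical input file):

             parallel --pipe --block 10M wc -c < bigfile
             parallel --pipe --block 10m wc -c < bigfile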
3160

QUOTING

3162       GNU parallel is very liberal in quoting. You only need to quote
3163       characters that have special meaning in shell:
3164
3165         ( ) $ ` ' " < > ; | \
3166
3167       and depending on context these need to be quoted, too:
3168
3169         ~ & # ! ? space * {
3170
3171       Therefore most people will never need more quoting than putting '\' in
3172       front of the special characters.
3173
3174       Often you can simply put \' around every ':
3175
3176         perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
3177
3178       can be quoted:
3179
3180         parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\' ::: file
3181
3182       However, when you want to use a shell variable you need to quote the
3183       $-sign. Here is an example using $PARALLEL_SEQ. This variable is set by
3184       GNU parallel itself, so the evaluation of the $ must be done by the sub
3185       shell started by GNU parallel:
3186
3187         seq 10 | parallel -N2 echo seq:\$PARALLEL_SEQ arg1:{1} arg2:{2}
3188
3189       If the variable is set before GNU parallel starts you can do this:
3190
3191         VAR=this_is_set_before_starting
3192         echo test | parallel echo {} $VAR
3193
3194       Prints: test this_is_set_before_starting
3195
3196       It is a little more tricky if the variable contains more than one space
3197       in a row:
3198
3199         VAR="two  spaces  between  each  word"
3200         echo test | parallel echo {} \'"$VAR"\'
3201
3202       Prints: test two  spaces  between  each  word
3203
3204       If the variable should not be evaluated by the shell starting GNU
3205       parallel but be evaluated by the sub shell started by GNU parallel,
3206       then you need to quote it:
3207
3208         echo test | parallel VAR=this_is_set_after_starting \; echo {} \$VAR
3209
3210       Prints: test this_is_set_after_starting
3211
3212       It is a little more tricky if the variable contains space:
3213
3214         echo test |\
3215           parallel VAR='"two  spaces  between  each  word"' echo {} \'"$VAR"\'
3216
3217       Prints: test two  spaces  between  each  word
3218
3219       $$ is the shell variable containing the process id of the shell. This
3220       will print the process id of the shell running GNU parallel:
3221
3222         seq 10 | parallel echo $$
3223
3224       And this will print the process ids of the sub shells started by GNU
3225       parallel:
3226
3227         seq 10 | parallel echo \$\$
3228
3229       If the special characters should not be evaluated by the sub shell then
3230       you need to protect them against evaluation from both the shell starting
3231       GNU parallel and the sub shell:
3232
3233         echo test | parallel echo {} \\\$VAR
3234
3235       Prints: test $VAR
3236
3237       GNU parallel can protect against evaluation by the sub shell by using
3238       -q:
3239
3240         echo test | parallel -q echo {} \$VAR
3241
3242       Prints: test $VAR
3243
3244       This is particularly useful if you have lots of quoting. If you want to
3245       run a perl script like this:
3246
3247         perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
3248
3249       It needs to be quoted like one of these:
3250
3251         ls | parallel perl -ne '/^\\S+\\s+\\S+\$/\ and\ print\ \$ARGV,\"\\n\"'
3252         ls | parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\'
3253
3254       Notice how spaces, \'s, "'s, and $'s need to be quoted. GNU parallel
3255       can do the quoting by using option -q:
3256
3257         ls | parallel -q  perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"'
3258
3259       However, this means you cannot make the sub shell interpret special
3260       characters. For example, because of -q this WILL NOT WORK:
3261
3262         ls *.gz | parallel -q "zcat {} >{.}"
3263         ls *.gz | parallel -q "zcat {} | bzip2 >{.}.bz2"
3264
3265       because > and | need to be interpreted by the sub shell.
3266
3267       If you get errors like:
3268
3269         sh: -c: line 0: syntax error near unexpected token
3270         sh: Syntax error: Unterminated quoted string
3271         sh: -c: line 0: unexpected EOF while looking for matching `''
3272         sh: -c: line 1: syntax error: unexpected end of file
3273         zsh:1: no matches found:
3274
3275       then you might try using -q.
3276
3277       If you are using bash process substitution like <(cat foo) then you may
3278       try -q and prepend the command with bash -c:
3279
3280         ls | parallel -q bash -c 'wc -c <(echo {})'
3281
3282       Or for substituting output:
3283
3284         ls | parallel -q bash -c \
3285           'tar c {} | tee >(gzip >{}.tar.gz) | bzip2 >{}.tar.bz2'
3286
3287       Conclusion: If this is confusing, consider avoiding the quoting
3288       altogether by writing a small script or a function (remember to
3289       export -f the function) and having GNU parallel call that.
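
       A minimal sketch of that approach (doit and the input files are
       hypothetical):

         doit() {
           # all the tricky quoting lives inside the function
           perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' "$1"
         }
         export -f doit
         ls | parallel doit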
3290

LIST RUNNING JOBS

3292       If you want a list of the jobs currently running you can run:
3293
3294         killall -USR1 parallel
3295
3296       GNU parallel will then print the currently running jobs on stderr
3297       (standard error).
3298

COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS

3300       If you regret starting a lot of jobs you can simply break GNU parallel,
3301       but if you want to make sure you do not have half-completed jobs you
3302       should send the signal SIGHUP to GNU parallel:
3303
3304         killall -HUP parallel
3305
3306       This will tell GNU parallel to not start any new jobs, but wait until
3307       the currently running jobs are finished before exiting.
3308

ENVIRONMENT VARIABLES

3310       $PARALLEL_HOME
3311                Dir where GNU parallel stores config files, semaphores, and
3312                caches information between invocations. If set to a non-
3313                existent dir, the dir will be created.
3314
3315                Default: $HOME/.parallel.
3316
3317       $PARALLEL_ARGHOSTGROUPS
3318                When using --hostgroups GNU parallel sets this to the
3319                hostgroups of the job.
3320
3321                Remember to quote the $, so it gets evaluated by the correct
3322                shell. Or use --plus and {agrp}.
3323
3324       $PARALLEL_HOSTGROUPS
3325                When using --hostgroups GNU parallel sets this to the
3326                hostgroups of the sshlogin that the job is run on.
3327
3328                Remember to quote the $, so it gets evaluated by the correct
3329                shell. Or use --plus and {hgrp}.
3330
3331       $PARALLEL_JOBSLOT
3332                Set by GNU parallel and can be used in jobs run by GNU
3333                parallel.  Remember to quote the $, so it gets evaluated by
3334                the correct shell. Or use --plus and {slot}.
3335
3336                $PARALLEL_JOBSLOT is the jobslot of the job. It is equal to
3337                {%} unless the job is being retried. See {%} for details.
3338
3339       $PARALLEL_PID
3340                Set by GNU parallel and can be used in jobs run by GNU
3341                parallel.  Remember to quote the $, so it gets evaluated by
3342                the correct shell.
3343
3344                This makes it possible for the jobs to communicate directly to
3345                GNU parallel.
3346
3347                Example: If each of the jobs tests a solution and one of
3348                the jobs finds the solution, the job can tell GNU parallel
3349                not to start more jobs by: kill -HUP $PARALLEL_PID. This
3350                only works on the local computer.
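
                    A minimal sketch of that pattern (find_it, needle, and
                    the input files are hypothetical):

                      find_it() {
                        # stop GNU parallel from starting new jobs on a hit
                        grep -q needle "$1" && kill -HUP $PARALLEL_PID
                      }
                      export -f find_it
                      parallel find_it ::: *.txt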
3351
3352       $PARALLEL_RSYNC_OPTS
3353                Options to pass on to rsync. Defaults to: -rlDzR.
3354
3355       $PARALLEL_SHELL
3356                Use this shell for the commands run by GNU parallel:
3357
3358                • $PARALLEL_SHELL. If undefined use:
3359
3360                • The shell that started GNU parallel. If that cannot be
3361                  determined:
3362
3363                • $SHELL. If undefined use:
3364
3365                • /bin/sh
3366
3367       $PARALLEL_SSH
3368                GNU parallel defaults to using the ssh command for remote
3369                access. This can be overridden with $PARALLEL_SSH, which again
3370                can be overridden with --ssh. It can also be set on a per
3371                server basis (see --sshlogin).
3372
3373       $PARALLEL_SSHHOST
3374                Set by GNU parallel and can be used in jobs run by GNU
3375                parallel.  Remember to quote the $, so it gets evaluated by
3376                the correct shell. Or use --plus and {host}.
3377
3378                $PARALLEL_SSHHOST is the host part of an sshlogin line. E.g.
3379
3380                  4//usr/bin/specialssh user@host
3381
3382                becomes:
3383
3384                  host
3385
3386       $PARALLEL_SSHLOGIN
3387                Set by GNU parallel and can be used in jobs run by GNU
3388                parallel.  Remember to quote the $, so it gets evaluated by
3389                the correct shell. Or use --plus and {sshlogin}.
3390
3391                The value is the sshlogin line with the number of threads
3392                removed. E.g.
3393
3394                  4//usr/bin/specialssh user@host
3395
3396                becomes:
3397
3398                  /usr/bin/specialssh user@host
3399
3400       $PARALLEL_SEQ
3401                Set by GNU parallel and can be used in jobs run by GNU
3402                parallel.  Remember to quote the $, so it gets evaluated by
3403                the correct shell.
3404
3405                $PARALLEL_SEQ is the sequence number of the job running.
3406
3407                Example:
3408
3409                  seq 10 | parallel -N2 \
3410                    echo seq:'$'PARALLEL_SEQ arg1:{1} arg2:{2}
3411
3412                {#} is a shorthand for $PARALLEL_SEQ.
3413
3414       $PARALLEL_TMUX
3415                Path to tmux. If unset the tmux in $PATH is used.
3416
3417       $TMPDIR  Directory for temporary files.
3418
3419                See also: --tmpdir
3420
3421       $PARALLEL_REMOTE_TMPDIR
3422                Directory for temporary files on remote servers.
3423
3424                See also: --tmpdir
3425
3426       $PARALLEL
3427                The environment variable $PARALLEL will be used as default
3428                options for GNU parallel. If the variable contains special
3429                shell characters (e.g. $, *, or space) then these need to be
3430                escaped with \.
3431
3432                Example:
3433
3434                  cat list | parallel -j1 -k -v ls
3435                  cat list | parallel -j1 -k -v -S"myssh user@server" ls
3436
3437                can be written as:
3438
3439                  cat list | PARALLEL="-kvj1" parallel ls
3440                  cat list | PARALLEL='-kvj1 -S myssh\ user@server' \
3441                    parallel ls
3442
3443                Notice the \ after 'myssh' is needed because 'myssh' and
3444                'user@server' must be one argument.
3445
3446                See also: --profile
3447

DEFAULT PROFILE (CONFIG FILE)

3449       The global configuration file /etc/parallel/config, followed by user
3450       configuration file ~/.parallel/config (formerly known as .parallelrc)
3451       will be read in turn if they exist.  Lines starting with '#' will be
3452       ignored. The format can follow that of the environment variable
3453       $PARALLEL, but it is often easier to simply put each option on its own
3454       line.
3455
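       A minimal sketch of a ~/.parallel/config (the options chosen here are
       just examples):

         # comment lines are ignored
         # keep output in input order and show a progress bar
         -k
         --bar
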
3456       Options on the command line take precedence, followed by the
3457       environment variable $PARALLEL, user configuration file
3458       ~/.parallel/config, and finally the global configuration file
3459       /etc/parallel/config.
3460
3461       Note that no file that is read for options, nor the environment
3462       variable $PARALLEL, may contain retired options such as --tollef.
3463

PROFILE FILES

3465       If --profile is set, GNU parallel will read the profile from that file
3466       rather than the global or user configuration files. You can have
3467       multiple --profiles.
3468
3469       Profiles are searched for in ~/.parallel. If the name starts with / it
3470       is seen as an absolute path. If the name starts with ./ it is seen as a
3471       relative path from the current dir.
3472
3473       Example: Profile for running a command on every sshlogin in
3474       ~/.parallel/sshloginfile and prepending the output with the sshlogin:
3475
3476         echo --tag -S .. --nonall > ~/.parallel/nonall_profile
3477         parallel -J nonall_profile uptime
3478
3479       Example: Profile for running every command with -j-1 and nice:
3480
3481         echo -j-1 nice > ~/.parallel/nice_profile
3482         parallel -J nice_profile bzip2 -9 ::: *
3483
3484       Example: Profile for running a perl script before every command:
3485
3486         echo "perl -e '\$a=\$\$; print \$a,\" \",'\$PARALLEL_SEQ',\" \";';" \
3487           > ~/.parallel/pre_perl
3488         parallel -J pre_perl echo ::: *
3489
3490       Note how the $ and " need to be quoted using \.
3491
3492       Example: Profile for running distributed jobs with nice on the remote
3493       computers:
3494
3495         echo -S .. nice > ~/.parallel/dist
3496         parallel -J dist --trc {.}.bz2 bzip2 -9 ::: *
3497

EXIT STATUS

3499       Exit status depends on --halt-on-error if one of these is used:
3500       success=X, success=Y%, fail=Y%.
3501
3502       0     All jobs ran without error. If success=X is used: X jobs ran
3503             without error. If success=Y% is used: Y% of the jobs ran without
3504             error.
3505
3506       1-100 Some of the jobs failed. The exit status gives the number of
3507             failed jobs. If Y% is used the exit status is the percentage of
3508             jobs that failed.
3509
3510       101   More than 100 jobs failed.
3511
3512       255   Other error.
3513
3514       -1 (In joblog and SQL table)
3515             Killed by Ctrl-C, timeout, not enough memory or similar.
3516
3517       -2 (In joblog and SQL table)
3518             skip() was called in {= =}.
3519
3520       -1000 (In SQL table)
3521             Job is ready to run (set by --sqlmaster).
3522
3523       -1220 (In SQL table)
3524             Job is taken by worker (set by --sqlworker).
3525
3526       If fail=1 is used, the exit status will be the exit status of the
3527       failing job.
3528

DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES

3530       See: man parallel_alternatives
3531

BUGS

3533   Quoting of newline
3534       Because of the way newline is quoted this will not work:
3535
3536         echo 1,2,3 | parallel -vkd, "echo 'a{}b'"
3537
3538       However, these will all work:
3539
3540         echo 1,2,3 | parallel -vkd, echo a{}b
3541         echo 1,2,3 | parallel -vkd, "echo 'a'{}'b'"
3542         echo 1,2,3 | parallel -vkd, "echo 'a'"{}"'b'"
3543
3544   Speed
3545       Startup
3546
3547       GNU parallel is slow at starting up - around 250 ms the first time and
3548       150 ms after that.
3549
3550       Job startup
3551
3552       Starting a job on the local machine takes around 3-10 ms. This can be a
3553       big overhead if the job takes very few ms to run. Often you can group
3554       small jobs together using -X which will make the overhead less
3555       significant. Or you can run multiple GNU parallels as described in
3556       EXAMPLE: Speeding up fast jobs.
3557
3558       SSH
3559
3560       When using multiple computers GNU parallel opens ssh connections to
3561       them to figure out how many connections can be used reliably
3562       simultaneously (namely sshd's MaxStartups). This test is done for each
3563       host in serial, so if your --sshloginfile contains many hosts it may be
3564       slow.
3565
3566       If your jobs are short you may see that there are fewer jobs running on
3567       the remote systems than expected. This is due to time spent logging in
3568       and out. -M may help here.
3569
3570       Disk access
3571
3572       A single disk can normally read data faster if it reads one file at a
3573       time instead of reading a lot of files in parallel, as this will avoid
3574       disk seeks. However, newer disk systems with multiple drives can read
3575       faster if reading from multiple files in parallel.
3576
3577       If the jobs are of the form read-all-compute-all-write-all, so
3578       everything is read before anything is written, it may be faster to
3579       force only one disk access at a time:
3580
3581         sem --id diskio cat file | compute | sem --id diskio cat > file
3582
3583       If the jobs are of the form read-compute-write, so writing starts
3584       before all reading is done, it may be faster to force only one reader
3585       and writer at a time:
3586
3587         sem --id read cat file | compute | sem --id write cat > file
3588
3589       If the jobs are of the form read-compute-read-compute, it may be faster
3590       to run more jobs in parallel than the system has CPUs, as some of the
3591       jobs will be stuck waiting for disk access.
3592
3593   --nice limits command length
3594       The current implementation of --nice is too pessimistic in the max
3595       allowed command length. It only uses a little more than half of what it
3596       could. This affects -X and -m. If this becomes a real problem for you,
3597       file a bug-report.
3598
3599   Aliases and functions do not work
3600       If you get:
3601
3602         Can't exec "command": No such file or directory
3603
3604       or:
3605
3606         open3: exec of by command failed
3607
3608       or:
3609
3610         /bin/bash: command: command not found
3611
3612       it may be because command is not known, but it could also be because
3613       command is an alias or a function. If it is a function you need to
3614       export -f the function first or use env_parallel. An alias will only
3615       work if you use env_parallel.
3616
3617   Database with MySQL fails randomly
3618       The --sql* options may fail randomly with MySQL. This problem does not
3619       exist with PostgreSQL.
3620

REPORTING BUGS

3622       Report bugs to <parallel@gnu.org> or
3623       https://savannah.gnu.org/bugs/?func=additem&group=parallel
3624
3625       When you write your report, please keep in mind that you must give the
3626       reader enough information to be able to run exactly what you run. So
3627       you need to include all data and programs that you use to show the
3628       problem.
3629
3630       See a perfect bug report on
3631       https://lists.gnu.org/archive/html/bug-parallel/2015-01/msg00000.html
3632
3633       Your bug report should always include:
3634
3635       • The error message you get (if any). If the error message is not from
3636         GNU parallel you need to show why you think GNU parallel caused this.
3637
3638       • The complete output of parallel --version. If you are not running the
3639         latest released version (see https://ftp.gnu.org/gnu/parallel/) you
3640         should specify why you believe the problem is not fixed in that
3641         version.
3642
3643       • A minimal, complete, and verifiable example (See description on
3644         https://stackoverflow.com/help/mcve).
3645
3646         It should be a complete example that others can run which shows the
3647         problem including all files needed to run the example. This should
3648         preferably be small and simple, so try to remove as many options as
3649         possible.
3650
3651         A combination of yes, seq, cat, echo, wc, and sleep can reproduce
3652         most errors.
3653
3654         If your example requires large files, see if you can make them with
3655         something like seq 100000000 > bigfile or yes | head -n 1000000000 >
3656         file. If you need multiple columns: paste <(seq 1000) <(seq 1000
3657         1999)
3658
3659         If your example requires remote execution, see if you can use
3660         localhost - maybe using another login.
3661
3662         If you have access to a different system (maybe a VirtualBox on your
3663         own machine), test if your MCVE shows the problem on that system. If
3664         it does not, read below.
3665
3666       • The output of your example. If your problem is not easily reproduced
3667         by others, the output might help them figure out the problem.
3668
3669       • Whether you have watched the intro videos
3670         (https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1), walked
3671         through the tutorial (man parallel_tutorial), and read the examples
3672         (man parallel_examples).
3673
3674   Bug dependent on environment
3675       If you suspect the error is dependent on your environment or
3676       distribution, please see if you can reproduce the error on one of these
3677       VirtualBox images:
3678       https://sourceforge.net/projects/virtualboximage/files/
3679       https://www.osboxes.org/virtualbox-images/
3680
3681       Specifying the name of your distribution is not enough as you may have
3682       installed software that is not in the VirtualBox images.
3683
3684       If you cannot reproduce the error on any of the VirtualBox images
3685       above, see if you can build a VirtualBox image on which you can
3686       reproduce the error. If not, you should assume the debugging will
3687       be done through you. That will put a lot more burden on you, and it
3688       is extra important that you give any information that may help. In
3689       general the problem will be fixed faster and with much less work
3690       for you if you can reproduce the error on a VirtualBox - even if
3691       you have to build a VirtualBox image.
3692
3693   In summary
3694       Your report must include:
3695
3696       • parallel --version
3697
3698       • output + error message
3699
3700       • full example including all files
3701
3702       • VirtualBox image, if you cannot reproduce it on other systems
3703

AUTHOR

3705       When using GNU parallel for a publication please cite:
3706
3707       O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
3708       The USENIX Magazine, February 2011:42-47.
3709
3710       This helps funding further development; and it won't cost you a cent.
3711       If you pay 10000 EUR you should feel free to use GNU Parallel without
3712       citing.
3713
3714       Copyright (C) 2007-10-18 Ole Tange, http://ole.tange.dk
3715
3716       Copyright (C) 2008-2010 Ole Tange, http://ole.tange.dk
3717
3718       Copyright (C) 2010-2023 Ole Tange, http://ole.tange.dk and Free
3719       Software Foundation, Inc.
3720
3721       Parts of the manual concerning xargs compatibility are inspired by the
3722       manual of xargs from GNU findutils 4.4.2.
3723

LICENSE

3725       This program is free software; you can redistribute it and/or modify it
3726       under the terms of the GNU General Public License as published by the
3727       Free Software Foundation; either version 3 of the License, or at your
3728       option any later version.
3729
3730       This program is distributed in the hope that it will be useful, but
3731       WITHOUT ANY WARRANTY; without even the implied warranty of
3732       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
3733       General Public License for more details.
3734
3735       You should have received a copy of the GNU General Public License along
3736       with this program.  If not, see <https://www.gnu.org/licenses/>.
3737
3738   Documentation license I
3739       Permission is granted to copy, distribute and/or modify this
3740       documentation under the terms of the GNU Free Documentation License,
3741       Version 1.3 or any later version published by the Free Software
3742       Foundation; with no Invariant Sections, with no Front-Cover Texts, and
3743       with no Back-Cover Texts.  A copy of the license is included in the
3744       file LICENSES/GFDL-1.3-or-later.txt.
3745
3746   Documentation license II
3747       You are free:
3748
3749       to Share to copy, distribute and transmit the work
3750
3751       to Remix to adapt the work
3752
3753       Under the following conditions:
3754
3755       Attribution
3756                You must attribute the work in the manner specified by the
3757                author or licensor (but not in any way that suggests that they
3758                endorse you or your use of the work).
3759
3760       Share Alike
3761                If you alter, transform, or build upon this work, you may
3762                distribute the resulting work only under the same, similar or
3763                a compatible license.
3764
3765       With the understanding that:
3766
3767       Waiver   Any of the above conditions can be waived if you get
3768                permission from the copyright holder.
3769
3770       Public Domain
3771                Where the work or any of its elements is in the public domain
3772                under applicable law, that status is in no way affected by the
3773                license.
3774
3775       Other Rights
3776                In no way are any of the following rights affected by the
3777                license:
3778
3779                • Your fair dealing or fair use rights, or other applicable
3780                  copyright exceptions and limitations;
3781
3782                • The author's moral rights;
3783
3784                • Rights other persons may have either in the work itself or
3785                  in how the work is used, such as publicity or privacy
3786                  rights.
3787
3788       Notice   For any reuse or distribution, you must make clear to others
3789                the license terms of this work.
3790
3791       A copy of the full license is included in the file
3792       LICENCES/CC-BY-SA-4.0.txt.
3793

DEPENDENCIES

3795       GNU parallel uses Perl, and the Perl modules Getopt::Long, IPC::Open3,
3796       Symbol, IO::File, POSIX, and File::Temp.
3797
3798       For --csv it uses the Perl module Text::CSV.
3799
3800       For remote usage it uses rsync with ssh.
3801

SEE ALSO

3803       parallel_tutorial(1), env_parallel(1), parset(1), parsort(1),
3804       parallel_alternatives(1), parallel_design(7), niceload(1), sql(1),
3805       ssh(1), ssh-agent(1), sshpass(1), ssh-copy-id(1), rsync(1)
3806
3807
3808
380920230722                          2023-08-12                       PARALLEL(1)