PARALLEL(1)                        parallel                        PARALLEL(1)

NAME

       parallel - build and execute shell command lines from standard input
       in parallel

SYNOPSIS

       parallel [options] [command [arguments]] < list_of_arguments

       parallel [options] [command [arguments]] ( ::: arguments | :::+
       arguments | :::: argfile(s) | ::::+ argfile(s) ) ...

       parallel --semaphore [options] command

       #!/usr/bin/parallel --shebang [options] [command [arguments]]

       #!/usr/bin/parallel --shebang-wrap [options] [command [arguments]]

DESCRIPTION

       STOP!

       Read the Reader's guide below if you are new to GNU parallel.

       GNU parallel is a shell tool for executing jobs in parallel using one
       or more computers. A job can be a single command or a small script
       that has to be run for each of the lines in the input. The typical
       input is a list of files, a list of hosts, a list of users, a list of
       URLs, or a list of tables. A job can also be a command that reads
       from a pipe. GNU parallel can then split the input into blocks and
       pipe a block into each command in parallel.

       If you use xargs and tee today you will find GNU parallel very easy
       to use, as GNU parallel is written to have the same options as xargs.
       If you write loops in shell, you will find GNU parallel may be able
       to replace most of the loops and make them run faster by running
       several jobs in parallel.

       GNU parallel makes sure output from the commands is the same output
       as you would get had you run the commands sequentially. This makes it
       possible to use output from GNU parallel as input for other programs.

       For each line of input GNU parallel will execute command with the
       line as arguments. If no command is given, the line of input is
       executed.  Several lines will be run in parallel. GNU parallel can
       often be used as a substitute for xargs or cat | bash.

   Reader's guide
       GNU parallel includes 4 types of documentation: tutorial, how-to,
       reference, and explanation (design discussion).

       Tutorial

       If you prefer reading a book, buy GNU Parallel 2018 at
       https://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
       or download it at: https://doi.org/10.5281/zenodo.1146014 Read at
       least chapter 1+2. It should take you less than 20 minutes.

       Otherwise start by watching the intro videos for a quick
       introduction: https://youtube.com/playlist?list=PL284C9FF2488BC6D1

       If you want to dive deeper: spend a couple of hours walking through
       the tutorial (man parallel_tutorial). Your command line will love you
       for it.

       How-to

       You can find a lot of examples of use in man parallel_examples. They
       will give you an idea of what GNU parallel is capable of, and you may
       find a solution you can simply adapt to your situation.

       Reference

       If you need a one-page printable cheat sheet you can find it on:
       https://www.gnu.org/software/parallel/parallel_cheat.pdf

       The man page is the reference for all options.

       Design discussion

       If you want to know the design decisions behind GNU parallel, try:
       man parallel_design. This is also a good intro if you intend to
       change GNU parallel.

OPTIONS

       command
           Command to execute.

           If command or the following arguments contain replacement
           strings (such as {}) every instance will be substituted with the
           input.

           If command is given, GNU parallel solves the same tasks as
           xargs. If command is not given, GNU parallel will behave
           similarly to cat | sh.

           The command must be an executable, a script, a composed command,
           an alias, or a function.

           Bash functions: export -f the function first or use
           env_parallel.

           Bash, Csh, or Tcsh aliases: Use env_parallel.

           Zsh, Fish, Ksh, and Pdksh functions and aliases: Use
           env_parallel.

       {}  Input line.

           This replacement string will be replaced by a full line read
           from the input source. The input source is normally stdin
           (standard input), but can also be given with --arg-file, :::, or
           ::::.

           The replacement string {} can be changed with -I.

           If the command line contains no replacement strings then {} will
           be appended to the command line.

           Replacement strings are normally quoted, so special characters
           are not parsed by the shell. The exception is if the command
           starts with a replacement string; then the string is not quoted.

           See also: --plus {.} {/} {//} {/.} {#} {%} {n}
           {=perl expression=}

       {.} Input line without extension.

           This replacement string will be replaced by the input with the
           extension removed. If the input line contains . after the last
           /, the last . until the end of the string will be removed and
           {.} will be replaced with the remainder. E.g. foo.jpg becomes
           foo, subdir/foo.jpg becomes subdir/foo, sub.dir/foo.jpg becomes
           sub.dir/foo, sub.dir/bar remains sub.dir/bar. If the input line
           does not contain . it will remain unchanged.

           The replacement string {.} can be changed with
           --extensionreplace.

           See also: {} --extensionreplace

       {/} Basename of input line.

           This replacement string will be replaced by the input with the
           directory part removed.

           See also: {} --basenamereplace

       {//}
           Dirname of input line.

           This replacement string will be replaced by the dir of the input
           line. See dirname(1).

           See also: {} --dirnamereplace

       {/.}
           Basename of input line without extension.

           This replacement string will be replaced by the input with the
           directory and extension part removed.  {/.} is a combination of
           {/} and {.}.

           See also: {} --basenameextensionreplace

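The path-manipulating replacement strings above can be sketched with plain shell parameter expansion. This is an illustration of the documented semantics only, not GNU parallel's actual implementation, and it assumes the line contains a /:

```shell
# Illustration only - mirrors the documented semantics of {//}, {/}, {.},
# and {/.} using bash parameter expansion; assumes the line contains a "/".
f='sub.dir/foo.jpg'

dir=${f%/*}              # {//} : directory part         -> sub.dir
base=${f##*/}            # {/}  : basename               -> foo.jpg
noext=$f                 # {.}  : strip extension only if the part after
case $base in            #        the last "/" contains a "."
  *.*) noext=${f%.*} ;;
esac                     #                               -> sub.dir/foo
basenoext=${base%.*}     # {/.} : basename, no extension -> foo

printf '%s\n' "$dir" "$base" "$noext" "$basenoext"
```

Note how sub.dir/bar would pass through {.} unchanged: its basename contains no dot, so the case branch never fires.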
       {#} Sequence number of the job to run.

           This replacement string will be replaced by the sequence number
           of the job being run. It contains the same number as
           $PARALLEL_SEQ.

           See also: {} --seqreplace

       {%} Job slot number.

           This replacement string will be replaced by the job's slot
           number, between 1 and the number of jobs run in parallel. There
           will never be 2 jobs running at the same time with the same job
           slot number.

           If the job needs to be retried (e.g. using --retries or
           --retry-failed) the job slot is not automatically updated. You
           should then instead use $PARALLEL_JOBSLOT:

             $ do_test() {
                 id="$3 {%}=$1 PARALLEL_JOBSLOT=$2"
                 echo run "$id";
                 sleep 1
                 # fail if {%} is odd
                 return `echo $1%2 | bc`
               }
             $ export -f do_test
             $ parallel -j3 --jl mylog do_test {%} \$PARALLEL_JOBSLOT {} ::: A B C D
             run A {%}=1 PARALLEL_JOBSLOT=1
             run B {%}=2 PARALLEL_JOBSLOT=2
             run C {%}=3 PARALLEL_JOBSLOT=3
             run D {%}=1 PARALLEL_JOBSLOT=1
             $ parallel --retry-failed -j3 --jl mylog do_test {%} \$PARALLEL_JOBSLOT {} ::: A B C D
             run A {%}=1 PARALLEL_JOBSLOT=1
             run C {%}=3 PARALLEL_JOBSLOT=2
             run D {%}=1 PARALLEL_JOBSLOT=3

           Notice how {%} and $PARALLEL_JOBSLOT differ in the retry run of
           C and D.

           See also: {} --jobs --slotreplace

       {n} Argument from input source n or the n'th argument.

           This positional replacement string will be replaced by the input
           from input source n (when used with --arg-file or ::::) or with
           the n'th argument (when used with -N). If n is negative it
           refers to the n'th last argument.

           See also: {} {n.} {n/} {n//} {n/.}

       {n.}
           Argument from input source n or the n'th argument without
           extension.

           {n.} is a combination of {n} and {.}.

           This positional replacement string will be replaced by the input
           from input source n (when used with --arg-file or ::::) or with
           the n'th argument (when used with -N). The input will have the
           extension removed.

           See also: {n} {.}

       {n/}
           Basename of argument from input source n or the n'th argument.

           {n/} is a combination of {n} and {/}.

           This positional replacement string will be replaced by the input
           from input source n (when used with --arg-file or ::::) or with
           the n'th argument (when used with -N). The input will have the
           directory (if any) removed.

           See also: {n} {/}

       {n//}
           Dirname of argument from input source n or the n'th argument.

           {n//} is a combination of {n} and {//}.

           This positional replacement string will be replaced by the dir
           of the input from input source n (when used with --arg-file or
           ::::) or with the n'th argument (when used with -N). See
           dirname(1).

           See also: {n} {//}

       {n/.}
           Basename of argument from input source n or the n'th argument
           without extension.

           {n/.} is a combination of {n}, {/}, and {.}.

           This positional replacement string will be replaced by the input
           from input source n (when used with --arg-file or ::::) or with
           the n'th argument (when used with -N). The input will have the
           directory (if any) and extension removed.

           See also: {n} {/.}

       {=perl expression=}
           Replace with calculated perl expression.

           $_ will contain the same as {}. After evaluating perl
           expression, $_ will be used as the value. It is recommended to
           only change $_ but you have full access to all of GNU parallel's
           internal functions and data structures.

           The expression must give the same result if evaluated twice -
           otherwise the behaviour is undefined. E.g. this will not work as
           expected:

               parallel echo '{= $_= ++$wrong_counter =}' ::: a b c

           A few convenience functions and data structures have been made:

            Q(string)     shell quote a string

            pQ(string)    perl quote a string

            uq() (or uq)  do not quote current replacement string

            hash(val)     compute B::hash(val)

            total_jobs()  number of jobs in total

            slot()        slot number of job

            seq()         sequence number of job

            @arg          the arguments

            skip()        skip this job (see also --filter)

            yyyy_mm_dd_hh_mm_ss()
            yyyy_mm_dd_hh_mm()
            yyyy_mm_dd()
            hh_mm_ss()
            hh_mm()
            yyyymmddhhmmss()
            yyyymmddhhmm()
            yyyymmdd()
            hhmmss()
            hhmm()        time functions

           Example:

             seq 10 | parallel echo {} + 1 is {= '$_++' =}
             parallel csh -c {= '$_="mkdir ".Q($_)' =} ::: '12" dir'
             seq 50 | parallel echo job {#} of {= '$_=total_jobs()' =}

           See also: --rpl --parens {} {=n perl expression=}

       {=n perl expression=}
           Positional equivalent to {=perl expression=}.

           To understand positional replacement strings see {n}.

           See also: {=perl expression=} {n}

       ::: arguments
           Use arguments on the command line as input source.

           Unlike other options for GNU parallel ::: is placed after the
           command and before the arguments.

           The following are equivalent:

             (echo file1; echo file2) | parallel gzip
             parallel gzip ::: file1 file2
             parallel gzip {} ::: file1 file2
             parallel --arg-sep ,, gzip {} ,, file1 file2
             parallel --arg-sep ,, gzip ,, file1 file2
             parallel ::: "gzip file1" "gzip file2"

           To avoid treating ::: as special use --arg-sep to set the
           argument separator to something else.

           If multiple ::: are given, each group will be treated as an
           input source, and all combinations of input sources will be
           generated. E.g. ::: 1 2 ::: a b c will result in the
           combinations (1,a) (1,b) (1,c) (2,a) (2,b) (2,c). This is useful
           for replacing nested for-loops.

           :::, ::::, and --arg-file can be mixed. So these are equivalent:

             parallel echo {1} {2} {3} ::: 6 7 ::: 4 5 ::: 1 2 3
             parallel echo {1} {2} {3} :::: <(seq 6 7) <(seq 4 5) \
               :::: <(seq 1 3)
             parallel -a <(seq 6 7) echo {1} {2} {3} :::: <(seq 4 5) \
               :::: <(seq 1 3)
             parallel -a <(seq 6 7) -a <(seq 4 5) echo {1} {2} {3} \
               ::: 1 2 3
             seq 6 7 | parallel -a - -a <(seq 4 5) echo {1} {2} {3} \
               ::: 1 2 3
             seq 4 5 | parallel echo {1} {2} {3} :::: <(seq 6 7) - \
               ::: 1 2 3

           See also: --arg-sep --arg-file :::: :::+ ::::+ --link

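The cross product of input sources is exactly what nested shell loops produce. A sketch (illustrative; parallel additionally runs the resulting jobs concurrently):

```shell
# Illustration: what ::: 1 2 ::: a b c generates, written as the nested
# for-loops it replaces.
for a in 1 2; do
  for b in a b c; do
    printf '(%s,%s)\n' "$a" "$b"
  done
done
```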
       :::+ arguments
           Like ::: but linked like --link to the previous input source.

           Contrary to --link, values do not wrap: The shortest input
           source determines the length.

           Example:

             parallel echo ::: a b c :::+ 1 2 3 ::: X Y :::+ 11 22

           See also: ::::+ --link

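The linking done by :::+ can be sketched with bash arrays iterated in lockstep (illustrative only; the truncation to the shortest source is shown by the loop bound):

```shell
# Illustration of :::+ linking: values are paired by position, and the
# shortest source determines how many pairs are produced (bash arrays).
xs=(a b c)
ys=(1 2)                       # shorter source: only 2 pairs result
n=${#ys[@]}
[ "${#xs[@]}" -lt "$n" ] && n=${#xs[@]}
for (( i = 0; i < n; i++ )); do
  printf '%s %s\n' "${xs[$i]}" "${ys[$i]}"
done
```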
       :::: argfiles
           Another way to write --arg-file argfile1 --arg-file argfile2 ...

           ::: and :::: can be mixed.

           See also: --arg-file ::: ::::+ --link

       ::::+ argfiles
           Like :::: but linked like --link to the previous input source.

           Contrary to --link, values do not wrap: The shortest input
           source determines the length.

           See also: --arg-file :::+ --link

       --null
       -0  Use NUL as delimiter.

           Normally input lines will end in \n (newline). If they end in \0
           (NUL), then use this option. It is useful for processing
           arguments that may contain \n (newline).

           Shorthand for --delimiter '\0'.

           See also: --delimiter

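A bash sketch of why NUL delimiting matters (illustrative; read -r -d '' splits on NUL roughly the way parallel -0 splits its input):

```shell
# Items may contain newlines; NUL (\0) cannot appear in an argument, so it
# is a safe delimiter. Bash sketch of producing and consuming NUL-delimited
# items without losing the embedded newline.
printf '%s\0' 'one item' 'item
with newline' |
while IFS= read -r -d '' item; do
  printf '<%s>\n' "$item"
done
```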
       --arg-file input-file
       -a input-file
           Use input-file as input source.

           If you use this option, stdin (standard input) is given to the
           first process run.  Otherwise, stdin (standard input) is
           redirected from /dev/null.

           If multiple --arg-file are given, each input-file will be
           treated as an input source, and all combinations of input
           sources will be generated. E.g. if the file foo contains 1 2 and
           the file bar contains a b c, then -a foo -a bar will result in
           the combinations (1,a) (1,b) (1,c) (2,a) (2,b) (2,c). This is
           useful for replacing nested for-loops.

           See also: --link {n} :::: ::::+ :::

       --arg-file-sep sep-str
           Use sep-str instead of :::: as separator string between command
           and argument files.

           Useful if :::: is used for something else by the command.

           See also: ::::

       --arg-sep sep-str
           Use sep-str instead of ::: as separator string.

           Useful if ::: is used for something else by the command.

           Also useful if your command uses ::: but you still want to read
           arguments from stdin (standard input): Simply change --arg-sep
           to a string that is not in the command line.

           See also: :::

       --bar (alpha testing)
           Show progress as a progress bar.

           The bar shows: % of jobs completed, estimated seconds left, and
           number of jobs started.

           It is compatible with zenity:

             seq 1000 | parallel -j30 --bar '(echo {};sleep 0.1)' \
               2> >(perl -pe 'BEGIN{$/="\r";$|=1};s/\r/\n/g' |
                    zenity --progress --auto-kill) | wc

           See also: --eta --progress --total-jobs

       --basefile file
       --bf file
           file will be transferred to each sshlogin before the first job
           is started.

           It will be removed if --cleanup is active. The file may be a
           script to run or some common base data needed for the job.
           Multiple --bf can be specified to transfer more basefiles. The
           file will be transferred the same way as --transferfile.

           See also: --sshlogin --transfer --return --cleanup --workdir

       --basenamereplace replace-str
       --bnr replace-str
           Use the replacement string replace-str instead of {/} for
           basename of input line.

           See also: {/}

       --basenameextensionreplace replace-str
       --bner replace-str
           Use the replacement string replace-str instead of {/.} for
           basename of input line without extension.

           See also: {/.}

       --bin binexpr
           Use binexpr as binning key and bin input to the jobs.

           binexpr is [column number|column name] [perlexpression] e.g.:

             3
             Address
             3 $_%=100
             Address s/\D//g

           Each input line is split using --colsep. The value of the column
           is put into $_, the perl expression is executed, and the
           resulting value is the job slot that will be given the line. If
           the value is bigger than the number of jobslots the value will
           be taken modulo the number of jobslots.

           This is similar to --shard but the hashing algorithm is a simple
           modulo, which makes it predictable which jobslot will receive
           which value.

           The performance is in the order of 100K rows per second. Faster
           if the bincol is small (<10), slower if it is big (>100).

           --bin requires --pipe and a fixed numeric value for --jobs.

           See also: SPREADING BLOCKS OF DATA --group-by --round-robin
           --shard

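The modulo routing behind --bin can be sketched in plain shell. Illustrative only; slot-numbering details are simplified here (parallel numbers slots from 1), but the key property is visible: equal keys always land in the same slot.

```shell
# Illustration of --bin's idea: fold the (numeric) key into one of N
# jobslots with a plain modulo, so equal keys always hit the same slot.
jobslots=3
for key in 7 8 9 10; do
  printf 'key %s -> slot %s\n' "$key" $(( key % jobslots ))
done
```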
       --bg
           Run command in background.

           GNU parallel will normally wait for the completion of a job.
           With --bg GNU parallel will not wait for completion of the
           command before exiting.

           This is the default if --semaphore is set.

           Implies --semaphore.

           See also: --fg man sem

       --bibtex
       --citation
           Print the citation notice and BibTeX entry for GNU parallel,
           silence the citation notice for all future runs, and exit. It
           will not run any commands.

           If it is impossible for you to run --citation you can instead
           use --will-cite, which will run commands, but which will only
           silence the citation notice for this single run.

           If you use --will-cite in scripts to be run by others you are
           making it harder for others to see the citation notice.  The
           development of GNU parallel is indirectly financed through
           citations, so if your users do not know they should cite then
           you are making it harder to finance development. However, if you
           pay 10000 EUR, you have done your part to finance future
           development and should feel free to use --will-cite in scripts.

           If you do not want to help finance future development by letting
           other users see the citation notice or by paying, then please
           consider using another tool instead of GNU parallel. You can
           find some of the alternatives in man parallel_alternatives.

       --block size
       --block-size size
           Size of block in bytes to read at a time.

           The size can be postfixed with K, M, G, T, P, k, m, g, t, or p.

           GNU parallel tries to meet the block size but can be off by the
           length of one record. For performance reasons size should be
           bigger than two records. GNU parallel will warn you and
           automatically increase the size if you choose a size that is too
           small.

           If you use -N, --block should be bigger than N+1 records.

           size defaults to 1M.

           When using --pipe-part a negative block size is not interpreted
           as a blocksize but as the number of blocks each jobslot should
           have. So this will run 10*5 = 50 jobs in total:

             parallel --pipe-part -a myfile --block -10 -j5 wc

           This is an efficient alternative to --round-robin because data
           is never read by GNU parallel, but you can still have very few
           jobslots process large amounts of data.

           See also: UNIT PREFIX -N --pipe --pipe-part --round-robin
           --block-timeout

       --block-timeout duration
       --bt duration
           Timeout for reading block when using --pipe.

           If it takes longer than duration to read a full block, use the
           partial block read so far.

           duration is in seconds, but can be postfixed with s, m, h, or d.

           See also: TIME POSTFIXES --pipe --block

       --cat
           Create a temporary file with content.

           Normally --pipe/--pipe-part will give data to the program on
           stdin (standard input). With --cat GNU parallel will create a
           temporary file with the name in {}, so you can do:
           parallel --pipe --cat wc {}.

           Implies --pipe unless --pipe-part is used.

           See also: --pipe --pipe-part --fifo

       --cleanup
           Remove transferred files.

           --cleanup will remove the transferred files on the remote
           computer after processing is done.

             find log -name '*gz' | parallel \
               --sshlogin server.example.com --transferfile {} \
               --return {.}.bz2 --cleanup "zcat {} | bzip2 -9 >{.}.bz2"

           With --transferfile {} the file transferred to the remote
           computer will be removed on the remote computer. Directories on
           the remote computer containing the file will be removed if they
           are empty.

           With --return the file transferred from the remote computer will
           be removed on the remote computer. Directories on the remote
           computer containing the file will be removed if they are empty.

           --cleanup is ignored when not used with --basefile, --transfer,
           --transferfile or --return.

           See also: --basefile --transfer --transferfile --sshlogin
           --return

       --color (beta testing)
           Colour output.

           Colour the output. Each job gets its own colour combination
           (background+foreground).

           --color is ignored when using -u.

           See also: --color-failed

       --color-failed (beta testing)
       --cf (beta testing)
           Colour the output from failing jobs white on red.

           Useful if you have a lot of jobs and want to focus on the
           failing jobs.

           --color-failed is ignored when using -u or --line-buffer, and is
           unreliable when using --latest-line.

           See also: --color

       --colsep regexp
       -C regexp
           Column separator.

           The input will be treated as a table with regexp separating the
           columns. The n'th column can be accessed using {n} or {n.}. E.g.
           {3} is the 3rd column.

           If there are more input sources, each input source will be
           separated, but the columns from each input source will be
           linked.

             parallel --colsep '-' echo {4} {3} {2} {1} \
               ::: A-B C-D ::: e-f g-h

           --colsep implies --trim rl, which can be overridden with --trim
           n.

           regexp is a Perl Regular Expression:
           https://perldoc.perl.org/perlre.html

           See also: --csv {n} --trim --link

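Splitting one line on the separator can be sketched with IFS (illustrative only; parallel's --colsep takes a full Perl regexp, whereas IFS here is a plain character):

```shell
# Illustration: splitting 'A-B' on '-' into columns {1} and {2}, then
# using them in reverse order, like: echo {2} {1}
line='A-B'
IFS=- read -r c1 c2 <<<"$line"
printf '%s %s\n' "$c2" "$c1"
```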
       --compress
           Compress temporary files.

           If the output is big and very compressible this will take up
           less disk space in $TMPDIR and possibly be faster due to less
           disk I/O.

           GNU parallel will try pzstd, lbzip2, pbzip2, zstd, pigz, lz4,
           lzop, plzip, lzip, lrz, gzip, pxz, lzma, bzip2, xz, clzip, in
           that order, and use the first available.

           GNU parallel will use up to 8 processes per job waiting to be
           printed. See man parallel_design for details.

           See also: --compress-program

       --compress-program prg
       --decompress-program prg
           Use prg for (de)compressing temporary files.

           It is assumed that prg -dc will decompress stdin (standard
           input) to stdout (standard output) unless --decompress-program
           is given.

           See also: --compress

       --csv (alpha testing)
           Treat input as CSV-format.

           --colsep sets the field delimiter. It works very much like
           --colsep except it deals correctly with quoting. Compare:

              echo '"1 big, 2 small","2""x4"" plank",12.34' |
                parallel --csv echo {1} of {2} at {3}

              echo '"1 big, 2 small","2""x4"" plank",12.34' |
                parallel --colsep ',' echo {1} of {2} at {3}

           Even quoted newlines are parsed correctly:

              (echo '"Start of field 1 with newline'
               echo 'Line 2 in field 1";value 2') |
                parallel --csv --colsep ';' echo Field 1: {1} Field 2: {2}

           When used with --pipe only pass full CSV-records.

           See also: --pipe --link {n} --colsep --header

       --ctag (obsolete: use --color --tag)
           Color tag.

           If the values look very similar looking at the output it can be
           hard to tell when a new value is used. --ctag gives each value a
           random color.

           See also: --color --tag

       --ctagstring str (obsolete: use --color --tagstring)
           Color tagstring.

           See also: --color --ctag --tagstring

       --delay duration
           Delay starting next job by duration.

           GNU parallel will not start another job for the next duration.

           duration is in seconds, but can be postfixed with s, m, h, or d.

           If you append 'auto' to duration (e.g. 13m3sauto) GNU parallel
           will automatically try to find the optimal value: If a job
           fails, duration is increased by 30%. If a job succeeds, duration
           is decreased by 10%.

           See also: TIME POSTFIXES --retries --ssh-delay

       --delimiter delim
       -d delim
           Input items are terminated by delim.

           The specified delimiter may be characters, C-style character
           escapes such as \n, or octal or hexadecimal escape codes.  Octal
           and hexadecimal escape codes are understood as for the printf
           command.

           See also: --colsep

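"Understood as for the printf command" means, e.g., that the octal escape \011 denotes TAB, so -d '\011' splits on tabs. A quick illustrative check:

```shell
# The octal escape \011 names the same byte as \t (TAB) - both expanded by
# printf - which is what -d '\011' would use as the delimiter.
tab_octal=$(printf '\011')
tab_c=$(printf '\t')
[ "$tab_octal" = "$tab_c" ] && echo 'octal \011 is TAB'
```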
       --dirnamereplace replace-str
       --dnr replace-str
           Use the replacement string replace-str instead of {//} for
           dirname of input line.

           See also: {//}

       --dry-run
           Print the job to run on stdout (standard output), but do not run
           the job.

           Use -v -v to include the wrapping that GNU parallel generates
           (for remote jobs, --tmux, --nice, --pipe, --pipe-part, --fifo
           and --cat). Do not count on this literally, though, as the job
           may be scheduled on another computer or the local computer if :
           is in the list.

           See also: -v

       -E eof-str
           Set the end of file string to eof-str.

           If the end of file string occurs as a line of input, the rest of
           the input is not read.  If neither -E nor -e is used, no end of
           file string is used.

       --eof[=eof-str]
       -e[eof-str]
           This option is a synonym for the -E option.

           Use -E instead, because it is POSIX compliant for xargs while
           this option is not.  If eof-str is omitted, there is no end of
           file string.  If neither -E nor -e is used, no end of file
           string is used.

       --embed
           Embed GNU parallel in a shell script.

           If you need to distribute your script to someone who does not
           want to install GNU parallel you can embed GNU parallel in your
           own shell script:

             parallel --embed > new_script

           After which you add your code at the end of new_script. This is
           tested on ash, bash, dash, ksh, sh, and zsh.

       --env var
           Copy exported environment variable var.

           This will copy var to the environment that the command is run
           in. This is especially useful for remote execution.

           In Bash var can also be a Bash function - just remember to
           export -f the function.

           The variable '_' is special. It will copy all exported
           environment variables except for the ones mentioned in
           ~/.parallel/ignored_vars.

           To copy the full environment (both exported and not exported
           variables, arrays, and functions) use env_parallel.

           See also: --record-env --session --sshlogin command env_parallel

       --eta
           Show the estimated number of seconds before finishing.

           This forces GNU parallel to read all jobs before starting to
           find the number of jobs (unless you use --total-jobs). GNU
           parallel normally only reads the next job to run.

           The estimate is based on the runtime of finished jobs, so the
           first estimate will only be shown when the first job has
           finished.

           Implies --progress.

           See also: --bar --progress --total-jobs

       --fg
           Run command in foreground.

           With --tmux and --tmuxpane GNU parallel will start tmux in the
           foreground.

           With --semaphore GNU parallel will run the command in the
           foreground (opposite --bg), and wait for completion of the
           command before exiting. The exit code will be that of the
           command.

           See also: --bg man sem

       --fifo
           Create a temporary fifo with content.

           Normally --pipe and --pipe-part will give data to the program on
           stdin (standard input). With --fifo GNU parallel will create a
           temporary fifo with the name in {}, so you can do:

             parallel --pipe --fifo wc {}

           Beware: If the fifo is never opened for reading, the job will
           block forever:

             seq 1000000 | parallel --fifo echo This will block
             seq 1000000 | parallel --fifo 'echo This will not block < {}'

           By using --fifo instead of --cat you may save I/O as --cat will
           write to a temporary file, whereas --fifo will not.

           Implies --pipe unless --pipe-part is used.

           See also: --cat --pipe --pipe-part

       --filter filter
           Only run jobs where filter is true.

           filter can contain replacement strings and Perl code. Example:

              parallel --filter '{1} < {2}+1' echo ::: {1..3} ::: {1..3}

           Outputs: 1,1 1,2 1,3 2,2 2,3 3,3

           See also: skip() --no-run-if-empty
860
861       --filter-hosts (alpha testing)
862           Remove down hosts.
863
864           For each remote host: check that login through ssh works. If not:
865           do not use this host.
866
867           For performance reasons, this check is performed only at the start
868           and every time --sshloginfile is changed. If a host goes down
869           after the first check, it will go undetected until --sshloginfile
870           is changed; --retries can be used to mitigate this.
871
872           Currently you can not put --filter-hosts in a profile, $PARALLEL,
873           /etc/parallel/config or similar. This is because GNU parallel uses
874           GNU parallel to compute this, so you will get an infinite loop.
875           This will likely be fixed in a later release.
876
877           See also: --sshloginfile --sshlogin --retries
878
879       --gnu
880           Behave like GNU parallel.
881
882           This option historically took precedence over --tollef. The
883           --tollef option is now retired, and therefore may not be used.
884           --gnu is kept for compatibility.
885
886       --group
887           Group output.
888
889           Output from each job is grouped together and is only printed when
890           the command is finished. Stdout (standard output) first followed by
891           stderr (standard error).
892
893           This takes in the order of 0.5ms CPU time per job and depends on
894           the speed of your disk for larger output. It can be disabled with
895           -u, but this means output from different commands can get mixed.
896
897           --group is the default. Can be reversed with -u.
898
899           See also: --line-buffer --ungroup --tag
900
901       --group-by val
902           Group input by value.
903
904           Combined with --pipe/--pipe-part --group-by groups lines with the
905           same value into a record.
906
907           The value can be computed from the full line or from a single
908           column.
909
910           val can be:
911
912            column number Use the value in the column numbered.
913
914            column name   Treat the first line as a header and use the value
915                          in the column named.
916
917                          (Not supported with --pipe-part).
918
919            perl expression
920                          Run the perl expression and use $_ as the value.
921
922            column number perl expression
923                          Put the value of the column in $_, run the perl
924                          expression, and use $_ as the value.
925
926            column name perl expression
927                          Put the value of the column in $_, run the perl
928                          expression, and use $_ as the value.
929
930                          (Not supported with --pipe-part).
931
932           Example:
933
934             UserID, Consumption
935             123,    1
936             123,    2
937             12-3,   1
938             221,    3
939             221,    1
940             2/21,   5
941
942           If you want to group 123, 12-3, 221, and 2/21 into 4 records and
943           pass one record at a time to wc:
944
945             tail -n +2 table.csv | \
946               parallel --pipe --colsep , --group-by 1 -kN1 wc
947
948           Make GNU parallel treat the first line as a header:
949
950             cat table.csv | \
951               parallel --pipe --colsep , --header : --group-by 1 -kN1 wc
952
953           Address column by column name:
954
955             cat table.csv | \
956               parallel --pipe --colsep , --header : --group-by UserID -kN1 wc
957
958           If 12-3 and 123 are really the same UserID, remove non-digits in
959           UserID when grouping:
960
961             cat table.csv | parallel --pipe --colsep , --header : \
962               --group-by 'UserID s/\D//g' -kN1 wc
963
964           See also: SPREADING BLOCKS OF DATA --pipe --pipe-part --bin --shard
965           --round-robin
966
967       --help
968       -h  Print a summary of the options to GNU parallel and exit.
969
970       --halt-on-error val
971       --halt val
972           When should GNU parallel terminate?
973
974           In some situations it makes no sense to run all jobs. GNU parallel
975           should simply stop as soon as a condition is met.
976
977           val defaults to never, which runs all jobs no matter what.
978
979           val can also take on the form of when,why.
980
981           when can be 'now' which means kill all running jobs and halt
982           immediately, or it can be 'soon' which means wait for all running
983           jobs to complete, but start no new jobs.
984
985           why can be 'fail=X', 'fail=Y%', 'success=X', 'success=Y%',
986           'done=X', or 'done=Y%' where X is the number of jobs that have to
987           fail, succeed, or be done before halting, and Y is the percentage
988           of jobs that have to fail, succeed, or be done before halting.
989
990           Example:
991
992            --halt now,fail=1     exit when a job has failed. Kill running
993                                  jobs.
994
995            --halt soon,fail=3    exit when 3 jobs have failed, but wait for
996                                  running jobs to complete.
997
998            --halt soon,fail=3%   exit when 3% of the jobs have failed, but
999                                  wait for running jobs to complete.
1000
1001            --halt now,success=1  exit when a job has succeeded. Kill running
1002                                  jobs.
1003
1004            --halt soon,success=3 exit when 3 jobs have succeeded, but wait
1005                                  for running jobs to complete.
1006
1007            --halt now,success=3% exit when 3% of the jobs have succeeded.
1008                                  Kill running jobs.
1009
1010            --halt now,done=1     exit when a job has finished. Kill running
1011                                  jobs.
1012
1013            --halt soon,done=3    exit when 3 jobs have finished, but wait for
1014                                  running jobs to complete.
1015
1016            --halt now,done=3%    exit when 3% of the jobs have finished. Kill
1017                                  running jobs.
1018
1019           For backwards compatibility these also work:
1020
1021           0           never
1022
1023           1           soon,fail=1
1024
1025           2           now,fail=1
1026
1027           -1          soon,success=1
1028
1029           -2          now,success=1
1030
1031           1-99%       soon,fail=1-99%
1032
1033       --header regexp
1034           Use regexp as header.
1035
1036           For normal usage the matched header (typically the first line:
1037           --header '.*\n') will be split using --colsep (which will default
1038           to '\t') and column names can be used as replacement variables:
1039           {column name}, {column name/}, {column name//}, {column name/.},
1040           {column name.}, {=column name perl expression =}, ..
1041
1042           For --pipe the matched header will be prepended to each output.
1043
1044           --header : is an alias for --header '.*\n'.
1045
1046           If regexp is a number, the header is that fixed number of lines.
1047
1048           --header 0 is special: It will make replacement strings for files
1049           given with --arg-file or ::::. It will make {foo/bar} for the file
1050           foo/bar.
1051
1052           See also: --colsep --pipe --pipe-part --arg-file
1053
1054       --hostgroups
1055       --hgrp
1056           Enable hostgroups on arguments.
1057
1058           If an argument contains '@' the string after '@' will be removed
1059           and treated as a list of hostgroups on which this job is allowed to
1060           run. If there is no --sshlogin with a corresponding group, the job
1061           will run on any hostgroup.
1062
1063           Example:
1064
1065             parallel --hostgroups \
1066               --sshlogin @grp1/myserver1 -S @grp1+grp2/myserver2 \
1067               --sshlogin @grp3/myserver3 \
1068               echo ::: my_grp1_arg@grp1 arg_for_grp2@grp2 third@grp1+grp3
1069
1070           my_grp1_arg may be run on either myserver1 or myserver2, third may
1071           be run on either myserver1 or myserver3, but arg_for_grp2 will only
1072           be run on myserver2.
1073
1074           See also: --sshlogin $PARALLEL_HOSTGROUPS $PARALLEL_ARGHOSTGROUPS
1075
1076       -I replace-str
1077           Use the replacement string replace-str instead of {}.
1078
1079           See also: {}
1080
1081       --replace [replace-str]
1082       -i [replace-str]
1083           This option is deprecated; use -I instead.
1084
1085           This option is a synonym for -Ireplace-str if replace-str is
1086           specified, and for -I {} otherwise.
1087
1088           See also: {}
1089
1090       --joblog logfile
1091       --jl logfile
1092           Logfile for executed jobs.
1093
1094           Save a list of the executed jobs to logfile in the following TAB
1095           separated format: sequence number, sshlogin, start time as seconds
1096           since epoch, run time in seconds, bytes in files transferred, bytes
1097           in files returned, exit status, signal, and command run.
1098
1099           For --pipe, bytes transferred and bytes returned are the number
1100           of bytes of input and output, respectively.
1101
1102           If logfile is prepended with '+' log lines will be appended to the
1103           logfile.
1104
1105           To convert the times into ISO-8601 strict do:
1106
1107             cat logfile | perl -a -F"\t" -ne \
1108               'chomp($F[2]=`date -d \@$F[2] +%FT%T`); print join("\t",@F)'
1109
1110           If the sshlogin is long, you can use column -t to pretty-print it:
1111
1112             cat joblog | column -t
1113
1114           See also: --resume --resume-failed
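
           A small sketch (the logfile path is illustrative): run two jobs and
           inspect the logfile. The first line of the logfile is a header row.

```shell
# Run two jobs with a joblog; the logfile gets a header line plus
# one TAB-separated line per job.
log=$(mktemp)                  # illustrative logfile path
parallel --joblog "$log" echo ::: a b
head -1 "$log"                 # header line (starts with Seq)
rm "$log"
```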
1115
1116       --jobs N
1117       -j N
1118       --max-procs N
1119       -P N
1120           Number of jobslots on each machine.
1121
1122           Run up to N jobs in parallel.  0 means as many as possible (this
1123           can take a while to determine). Default is 100% which will run one
1124           job per CPU thread on each machine.
1125
1126           Due to a bug -j 0 will also evaluate replacement strings twice up
1127           to the number of jobslots:
1128
1129             # This will not count from 1 but from number-of-jobslots
1130             seq 10000 | parallel -j0   echo '{= $_ = $foo++; =}' | head
1131             # This will count from 1
1132             seq 10000 | parallel -j100 echo '{= $_ = $foo++; =}' | head
1133
1134           If --semaphore is set, the default is 1 thus making a mutex.
1135
1136           See also: --use-cores-instead-of-threads
1137           --use-sockets-instead-of-threads
1138
1139       --jobs +N
1140       -j +N
1141       --max-procs +N
1142       -P +N
1143           Add N to the number of CPU threads.
1144
1145           Run this many jobs in parallel.
1146
1147           See also: --number-of-threads --number-of-cores --number-of-sockets
1148
1149       --jobs -N
1150       -j -N
1151       --max-procs -N
1152       -P -N
1153           Subtract N from the number of CPU threads.
1154
1155           Run this many jobs in parallel.  If the evaluated number is less
1156           than 1 then 1 will be used.
1157
1158           See also: --number-of-threads --number-of-cores --number-of-sockets
1159
1160       --jobs N%
1161       -j N%
1162       --max-procs N%
1163       -P N%
1164           Multiply the number of CPU threads by N%.
1165
1166           Run this many jobs in parallel.
1167
1168           See also: --number-of-threads --number-of-cores --number-of-sockets
1169
1170       --jobs procfile
1171       -j procfile
1172       --max-procs procfile
1173       -P procfile
1174           Read parameter from file.
1175
1176           Use the content of procfile as parameter for -j. E.g. procfile
1177           could contain the string 100% or +2 or 10. If procfile is changed
1178           when a job completes, procfile is read again and the new number of
1179           jobs is computed. If the number is lower than before, running jobs
1180           will be allowed to finish but new jobs will not be started until
1181           the wanted number of jobs has been reached.  This makes it possible
1182           to change the number of simultaneous running jobs while GNU
1183           parallel is running.
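
           A sketch (the procfile path is illustrative): while GNU parallel is
           running, rewriting the file changes the job limit on the fly.

```shell
# The procfile holds the -j parameter; it is re-read when a job
# completes, so editing it changes the concurrency on the fly.
procs=$(mktemp)                # illustrative procfile
echo 2 > "$procs"              # run at most 2 jobs at a time
seq 4 | parallel -j "$procs" echo
rm "$procs"
```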
1184
1185       --keep-order
1186       -k  Keep sequence of output same as the order of input.
1187
1188           Normally the output of a job will be printed as soon as the job
1189           completes. Try this to see the difference:
1190
1191             parallel -j4 sleep {}\; echo {} ::: 2 1 4 3
1192             parallel -j4 -k sleep {}\; echo {} ::: 2 1 4 3
1193
1194           If used with --onall or --nonall the output will be grouped by
1195           sshlogin in sorted order.
1196
1197           --keep-order cannot keep the output order when used with --pipe
1198           --round-robin. Here it instead means, that the jobslots will get
1199           the same blocks as input in the same order in every run if the
1200           input is kept the same. Run each of these twice and compare:
1201
1202             seq 10000000 | parallel --pipe --round-robin 'sleep 0.$RANDOM; wc'
1203             seq 10000000 | parallel --pipe -k --round-robin 'sleep 0.$RANDOM; wc'
1204
1205           -k only affects the order in which the output is printed - not the
1206           order in which jobs are run.
1207
1208           See also: --group --line-buffer
1209
1210       -L recsize
1211           When used with --pipe: Read records of recsize.
1212
1213           When used otherwise: Use at most recsize nonblank input lines per
1214           command line.  Trailing blanks cause an input line to be logically
1215           continued on the next input line.
1216
1217           -L 0 means read one line, but insert 0 arguments on the command
1218           line.
1219
1220           recsize can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1221
1222           Implies -X unless -m, --xargs, or --pipe is set.
1223
1224           See also: UNIT PREFIX -N --max-lines --block -X -m --xargs --pipe
1225
1226       --max-lines [recsize]
1227       -l[recsize]
1228           When used with --pipe: Read records of recsize lines.
1229
1230           When used otherwise: Synonym for the -L option.  Unlike -L, the
1231           recsize argument is optional.  If recsize is not specified, it
1232           defaults to one.  The -l option is deprecated since the POSIX
1233           standard specifies -L instead.
1234
1235           -l 0 is an alias for -l 1.
1236
1237           Implies -X unless -m, --xargs, or --pipe is set.
1238
1239           See also: UNIT PREFIX -N --block -X -m --xargs --pipe
1240
1241       --limit "command args"
1242           Dynamic job limit.
1243
1244           Before starting a new job run command with args. The exit value of
1245           command determines what GNU parallel will do:
1246
1247           0   Below limit. Start another job.
1248
1249           1   Over limit. Start no jobs.
1250
1251           2   Way over limit. Kill the youngest job.
1252
1253           You can use any shell command. There are 3 predefined commands:
1254
1255           "io n"    Limit for I/O. The amount of disk I/O will be computed as
1256                     a value 0-100, where 0 is no I/O and 100 is at least one
1257                     disk is 100% saturated.
1258
1259           "load n"  Similar to --load.
1260
1261           "mem n"   Similar to --memfree.
1262
1263           See also: --memfree --load
1264
1265       --latest-line (alpha testing)
1266       --ll (alpha testing)
1267           Print the latest line. Each job gets a single line that is updated
1268           with the latest output from the job.
1269
1270           Example:
1271
1272             slow_seq() {
1273               seq "$@" |
1274                 perl -ne '$|=1; for(split//){ print; select($a,$a,$a,0.03);}'
1275             }
1276             export -f slow_seq
1277             parallel --shuf -j99 --ll --tag --bar --color slow_seq {} ::: {1..300}
1278
1279           See also: --line-buffer
1280
1281       --line-buffer (beta testing)
1282       --lb (beta testing)
1283           Buffer output on line basis.
1284
1285           --group will keep the output together for a whole job. --ungroup
1286           allows output to mix, with half a line coming from one job and
1287           half a line coming from another job. --line-buffer fits between
1288           these two: GNU parallel will print a full line, but will allow for
1289           mixing lines of different jobs.
1290
1291           --line-buffer takes more CPU power than both --group and --ungroup,
1292           but can be much faster than --group if the CPU is not the limiting
1293           factor.
1294
1295           Normally --line-buffer does not buffer on disk, and can thus
1296           process an infinite amount of data, but it will buffer on disk when
1297           combined with: --keep-order, --results, --compress, and --files.
1298           This will make it as slow as --group and will limit output to the
1299           available disk space.
1300
1301           With --keep-order --line-buffer will output lines from the first
1302           job continuously while it is running, then lines from the second
1303           job while that is running. It will buffer full lines, but jobs will
1304           not mix. Compare:
1305
1306             parallel -j0 'echo {};sleep {};echo {}' ::: 1 3 2 4
1307             parallel -j0 --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
1308             parallel -j0 -k --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
1309
1310           See also: --group --ungroup --keep-order --tag
1311
1312       --link
1313       --xapply
1314           Link input sources.
1315
1316           Read multiple input sources like the command xapply. If multiple
1317           input sources are given, one argument will be read from each of the
1318           input sources. The arguments can be accessed in the command as {1}
1319           .. {n}, so {1} will be a line from the first input source, and {6}
1320           will refer to the line with the same line number from the 6th input
1321           source.
1322
1323           Compare these two:
1324
1325             parallel echo {1} {2} ::: 1 2 3 ::: a b c
1326             parallel --link echo {1} {2} ::: 1 2 3 ::: a b c
1327
1328           Arguments will be recycled if one input source has more arguments
1329           than the others:
1330
1331             parallel --link echo {1} {2} {3} \
1332               ::: 1 2 ::: I II III ::: a b c d e f g
1333
1334           See also: --header :::+ ::::+
1335
1336       --load max-load
1337           Only start jobs if load is less than max-load.
1338
1339           Do not start new jobs on a given computer unless the number of
1340           running processes on the computer is less than max-load. max-load
1341           uses the same syntax as --jobs, so 100% for one per CPU is a valid
1342           setting. The only difference is 0, which is interpreted as 0.01.
1343
1344           See also: --limit --jobs
1345
1346       --controlmaster
1347       -M  Use ssh's ControlMaster to make ssh connections faster.
1348
1349           Useful if jobs run remotely and are very fast to run. This is
1350           disabled for sshlogins that specify their own ssh command.
1351
1352           See also: --ssh --sshlogin
1353
1354       -m  Multiple arguments.
1355
1356           Insert as many arguments as the command line length permits. If
1357           multiple jobs are being run in parallel: distribute the arguments
1358           evenly among the jobs. Use -j1 or --xargs to avoid this.
1359
1360           If {} is not used the arguments will be appended to the line.  If
1361           {} is used multiple times each {} will be replaced with all the
1362           arguments.
1363
1364           Support for -m with --sshlogin is limited and may fail.
1365
1366           If in doubt use -X as that will most likely do what is needed.
1367
1368           See also: -X --xargs
1369
1370       --memfree size
1371           Minimum memory free when starting another job.
1372
1373           The size can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1374
1375           If the jobs take up very different amounts of RAM, GNU parallel will
1376           only start as many as there is memory for. If less than size bytes
1377           are free, no more jobs will be started. If less than 50% size bytes
1378           are free, the youngest job will be killed (as per --term-seq), and
1379           put back on the queue to be run later.
1380
1381           --retries must be set to determine how many times GNU parallel
1382           should retry a given job.
1383
1384           See also: UNIT PREFIX --term-seq --retries --memsuspend
1385
1386       --memsuspend size
1387           Suspend jobs when there is less memory available.
1388
1389           If the available memory falls below 2 * size, GNU parallel will
1390           suspend some of the running jobs. If the available memory falls
1391           below size, only one job will be running.
1392
1393           If a single job takes up at most size RAM, all jobs will complete
1394           without running out of memory. If you have swap available, you can
1395           usually lower size to around half the size of a single job - with
1396           the slight risk of swapping a little.
1397
1398           Jobs will be resumed when more RAM is available - typically when
1399           the oldest job completes.
1400
1401           --memsuspend only works on local jobs because there is no obvious
1402           way to suspend remote jobs.
1403
1404           size can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1405
1406           See also: UNIT PREFIX --memfree
1407
1408       --minversion version
1409           Print the version of GNU parallel and exit.
1410
1411           If the current version of GNU parallel is less than version the
1412           exit code is 255. Otherwise it is 0.
1413
1414           This is useful for scripts that depend on features only available
1415           from a certain version of GNU parallel:
1416
1417              parallel --minversion 20170422 &&
1418                echo halt done=50% supported from version 20170422 &&
1419                parallel --halt now,done=50% echo ::: {1..100}
1420
1421           See also: --version
1422
1423       --max-args max-args
1424       -n max-args
1425           Use at most max-args arguments per command line.
1426
1427           Fewer than max-args arguments will be used if the size (see the -s
1428           option) is exceeded, unless the -x option is given, in which case
1429           GNU parallel will exit.
1430
1431           -n 0 means read one argument, but insert 0 arguments on the command
1432           line.
1433
1434           max-args can be postfixed with K, M, G, T, P, k, m, g, t, or p (see
1435           UNIT PREFIX).
1436
1437           Implies -X unless -m is set.
1438
1439           See also: -X -m --xargs --max-replace-args
1440
1441       --max-replace-args max-args
1442       -N max-args
1443           Use at most max-args arguments per command line.
1444
1445           Like -n but also makes replacement strings {1} .. {max-args} that
1446           represent arguments 1 .. max-args. If there are too few args, {n}
1447           will be empty.
1448
1449           -N 0 means read one argument, but insert 0 arguments on the command
1450           line.
1451
1452           This will set the owner of the homedir to the user:
1453
1454             tr ':' '\n' < /etc/passwd | parallel -N7 chown {1} {6}
1455
1456           Implies -X unless -m or --pipe is set.
1457
1458           max-args can be postfixed with K, M, G, T, P, k, m, g, t, or p.
1459
1460           When used with --pipe -N is the number of records to read. This is
1461           somewhat slower than --block.
1462
1463           See also: UNIT PREFIX --pipe --block -m -X --max-args
1464
1465       --nonall
1466           --onall with no arguments.
1467
1468           Run the command on all computers given with --sshlogin but take no
1469           arguments. GNU parallel will log into --jobs number of computers in
1470           parallel and run the job on the computer. -j adjusts how many
1471           computers to log into in parallel.
1472
1473           This is useful for running the same command (e.g. uptime) on a list
1474           of servers.
1475
1476           See also: --onall --sshlogin
1477
1478       --onall
1479           Run all the jobs on all computers given with --sshlogin.
1480
1481           GNU parallel will log into --jobs number of computers in parallel
1482           and run one job at a time on the computer. The order of the jobs
1483           will not be changed, but some computers may finish before others.
1484
1485           When using --group the output will be grouped by each server, so
1486           all the output from one server will be grouped together.
1487
1488           --joblog will contain an entry for each job on each server, so
1489           there will be several job sequence 1.
1490
1491           See also: --nonall --sshlogin
1492
1493       --open-tty
1494       -o  Open terminal tty.
1495
1496           Similar to --tty but does not set --jobs or --ungroup.
1497
1498           See also: --tty
1499
1500       --output-as-files
1501       --outputasfiles
1502       --files
1503           Save output to files.
1504
1505           Instead of printing the output to stdout (standard output) the
1506           output of each job is saved in a file and the filename is then
1507           printed.
1508
1509           See also: --results
1510
1511       --pipe
1512       --spreadstdin
1513           Spread input to jobs on stdin (standard input).
1514
1515           Read a block of data from stdin (standard input) and give one block
1516           of data as input to one job.
1517
1518           The block size is determined by --block (default: 1M). The strings
1519           --recstart and --recend tell GNU parallel how a record starts
1520           and/or ends. The block read will have the final partial record
1521           removed before the block is passed on to the job. The partial
1522           record will be prepended to next block.
1523
1524           You can limit the number of records to be passed with -N, and set
1525           the record size with -L.
1526
1527           --pipe maxes out at around 1 GB/s input, and 100 MB/s output. If
1528           performance is important use --pipe-part.
1529
1530           --fifo and --cat will give stdin (standard input) on a fifo or a
1531           temporary file.
1532
1533           If data is arriving slowly, you can use --block-timeout to finish
1534           reading a block early.
1535
1536           The data can be spread between the jobs in specific ways using
1537           --round-robin, --bin, --shard, --group-by. See the section:
1538           SPREADING BLOCKS OF DATA
1539
1540           See also: --block --block-timeout --recstart --recend --fifo --cat
1541           --pipe-part -N -L --round-robin
1542
1543       --pipe-part
1544           Pipe parts of a physical file.
1545
1546           --pipe-part works similar to --pipe, but is much faster.
1547
1548           --pipe-part has a few limitations:
1549
1550           •  The file must be a normal file or a block device (technically it
1551              must be seekable) and must be given using --arg-file or ::::.
1552              The file cannot be a pipe, a fifo, or a stream as they are not
1553              seekable.
1554
1555              If using a block device with a lot of NUL bytes, remember to set
1556              --recend ''.
1557
1558           •  Record counting (-N) and line counting (-L/-l) do not work.
1559              Instead use --recstart and --recend to determine where records
1560              end.
1561
1562           See also: --pipe --recstart --recend --arg-file ::::
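
           A sketch (the file path is illustrative): --pipe-part needs a
           seekable file given with -a/--arg-file:

```shell
# Same splitting as --pipe, but the seekable file is read directly
# by each job; the per-block line counts sum to 1000.
f=$(mktemp)                 # illustrative temporary file
seq 1000 > "$f"
parallel --pipe-part --block 1k -a "$f" wc -l
rm "$f"
```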
1563
1564       --plain
1565           Ignore --profile, $PARALLEL, and ~/.parallel/config.
1566
1567           Ignore any --profile, $PARALLEL, and ~/.parallel/config to get full
1568           control on the command line (used by GNU parallel internally when
1569           called with --sshlogin).
1570
1571           See also: --profile
1572
1573       --plus
1574           Add more replacement strings.
1575
1576           Activate additional replacement strings: {+/} {+.} {+..} {+...}
1577           {..} {...} {/..} {/...} {##}. The idea being that '{+foo}' matches
1578           the opposite of '{foo}' and {} = {+/}/{/} = {.}.{+.} =
1579           {+/}/{/.}.{+.} = {..}.{+..} = {+/}/{/..}.{+..} = {...}.{+...} =
1580           {+/}/{/...}.{+...}
1581
1582           {##} is the total number of jobs to be run. It is incompatible with
1583           -X/-m/--xargs.
1584
1585           {0%} zero-padded jobslot.
1586
1587           {0#} zero-padded sequence number.
1588
1589           {choose_k} is inspired by n choose k: Given a list of n elements,
1590           choose k. k is the number of input sources and n is the number of
1591           arguments in an input source.  The content of the input sources
1592           must be the same and the arguments must be unique.
1593
1594           {uniq} skips jobs where values from two input sources are the same.
1595
1596           Shorthands for variables:
1597
1598             {slot}         $PARALLEL_JOBSLOT (see {%})
1599             {sshlogin}     $PARALLEL_SSHLOGIN
1600             {host}         $PARALLEL_SSHHOST
1601             {agrp}         $PARALLEL_ARGHOSTGROUPS
1602             {hgrp}         $PARALLEL_HOSTGROUPS
1603
1604           The following dynamic replacement strings are also activated. They
1605           are inspired by bash's parameter expansion:
1606
1607             {:-str}        str if the value is empty
1608             {:num}         remove the first num characters
1609             {:pos:len}     substring from position pos length len
1610             {#regexp}      remove prefix regexp (non-greedy)
1611             {##regexp}     remove prefix regexp (greedy)
1612             {%regexp}      remove postfix regexp (non-greedy)
1613             {%%regexp}     remove postfix regexp (greedy)
1614             {/regexp/str}  replace one regexp with str
1615             {//regexp/str} replace every regexp with str
1616             {^str}         uppercase str if found at the start
1617             {^^str}        uppercase str
1618             {,str}         lowercase str if found at the start
1619             {,,str}        lowercase str
1620
1621           See also: --rpl {}
1622
1623       --process-slot-var varname
           Set the environment variable varname to the jobslot number
           minus one.
1625
1626             seq 10 | parallel --process-slot-var=name echo '$name' {}
1627
1628       --progress
1629           Show progress of computations.
1630
1631           List the computers involved in the task with number of CPUs
1632           detected and the max number of jobs to run. After that show
1633           progress for each computer: number of running jobs, number of
1634           completed jobs, and percentage of all jobs done by this computer.
           The percentage will only be available after all jobs have been
           scheduled, as GNU parallel only reads the next job when it is
           ready to schedule it - this is to avoid wasting time and
           memory by reading everything at startup.
1639
           By sending SIGUSR2 to a running GNU parallel process you can
           toggle --progress on and off.
1642
1643           See also: --eta --bar
1644
1645       --max-line-length-allowed (alpha testing)
1646           Print maximal command line length.
1647
1648           Print the maximal number of characters allowed on the command line
1649           and exit (used by GNU parallel itself to determine the line length
1650           on remote computers).
1651
1652           See also: --show-limits
1653
1654       --number-of-cpus (obsolete)
1655           Print the number of physical CPU cores and exit.
1656
1657       --number-of-cores
1658           Print the number of physical CPU cores and exit (used by GNU
1659           parallel itself to determine the number of physical CPU cores on
1660           remote computers).
1661
1662           See also: --number-of-sockets --number-of-threads
1663           --use-cores-instead-of-threads --jobs
1664
1665       --number-of-sockets
1666           Print the number of filled CPU sockets and exit (used by GNU
1667           parallel itself to determine the number of filled CPU sockets on
1668           remote computers).
1669
1670           See also: --number-of-cores --number-of-threads
1671           --use-sockets-instead-of-threads --jobs
1672
1673       --number-of-threads
1674           Print the number of hyperthreaded CPU cores and exit (used by GNU
1675           parallel itself to determine the number of hyperthreaded CPU cores
1676           on remote computers).
1677
1678           See also: --number-of-cores --number-of-sockets --jobs
1679
1680       --no-keep-order
1681           Overrides an earlier --keep-order (e.g. if set in
1682           ~/.parallel/config).
1683
1684       --nice niceness
1685           Run the command at this niceness.
1686
1687           By default GNU parallel will run jobs at the same nice level as GNU
1688           parallel is started - both on the local machine and remote servers,
1689           so you are unlikely to ever use this option.
1690
1691           Setting --nice will override this nice level. If the nice level is
1692           smaller than the current nice level, it will only affect remote
1693           jobs (e.g. if current level is 10 then --nice 5 will cause local
1694           jobs to be run at level 10, but remote jobs run at nice level 5).
1695
1696       --interactive
1697       -p  Ask user before running a job.
1698
1699           Prompt the user about whether to run each command line and read a
1700           line from the terminal.  Only run the command line if the response
1701           starts with 'y' or 'Y'.  Implies -t.
1702
1703       --_parset type,varname
1704           Used internally by parset.
1705
1706           Generate shell code to be eval'ed which will set the variable(s)
1707           varname. type can be 'assoc' for associative array or 'var' for
1708           normal variables.
1709
1710           The only supported use is as part of parset.
1711
1712       --parens parensstring
1713           Use parensstring instead of {==}.
1714
1715           Define start and end parenthesis for {=perl expression=}. The left
1716           and the right parenthesis can be multiple characters and are
1717           assumed to be the same length. The default is {==} giving {= as the
1718           start parenthesis and =} as the end parenthesis.
1719
           Another useful setting is ,,,, which would make both
           parentheses ,,:
1722
1723             parallel --parens ,,,, echo foo is ,,s/I/O/g,, ::: FII
1724
1725           See also: --rpl {=perl expression=}
1726
1727       --profile profilename
1728       -J profilename
1729           Use profile profilename for options.
1730
1731           This is useful if you want to have multiple profiles. You could
1732           have one profile for running jobs in parallel on the local computer
1733           and a different profile for running jobs on remote computers.
1734
1735           profilename corresponds to the file ~/.parallel/profilename.
1736
1737           You can give multiple profiles by repeating --profile. If parts of
1738           the profiles conflict, the later ones will be used.
1739
1740           Default: ~/.parallel/config
1741
1742           See also: PROFILE FILES
1743
1744       --quote
1745       -q  Quote command.
1746
1747           If your command contains special characters that should not be
1748           interpreted by the shell (e.g. ; \ | *), use --quote to escape
1749           these. The command must be a simple command (see man bash) without
1750           redirections and without variable assignments.
1751
1752           Most people will not need this. Quoting is disabled by default.
1753
1754           See also: QUOTING command --shell-quote uq() Q()
1755
1756       --no-run-if-empty
1757       -r  Do not run empty input.
1758
1759           If the stdin (standard input) only contains whitespace, do not run
1760           the command.
1761
1762           If used with --pipe this is slow.
1763
1764           See also: command --pipe --interactive
1765
1766       --noswap
           Do not start jobs if the computer is swapping.
1768
1769           Do not start new jobs on a given computer if there is both swap-in
1770           and swap-out activity.
1771
1772           The swap activity is only sampled every 10 seconds as the sampling
1773           takes 1 second to do.
1774
1775           Swap activity is computed as (swap-in)*(swap-out) which in practice
1776           is a good value: swapping out is not a problem, swapping in is not
1777           a problem, but both swapping in and out usually indicates a
1778           problem.
1779
1780           --memfree and --memsuspend may give better results, so try using
1781           those first.
1782
1783           See also: --memfree --memsuspend
1784
1785       --record-env
1786           Record exported environment.
1787
           Record the currently exported environment variables in
           ~/.parallel/ignored_vars.  Variables recorded this way will be
           ignored when using --env _, so you should set the
           variables/functions you want to use after running
           --record-env.
1792
1793           See also: --env --session env_parallel
1794
1795       --recstart startstring
1796       --recend endstring
1797           Split record between endstring and startstring.
1798
1799           If --recstart is given startstring will be used to split at record
1800           start.
1801
1802           If --recend is given endstring will be used to split at record end.
1803
1804           If both --recstart and --recend are given the combined string
1805           endstringstartstring will have to match to find a split position.
1806           This is useful if either startstring or endstring match in the
1807           middle of a record.
1808
1809           If neither --recstart nor --recend are given, then --recend
1810           defaults to '\n'. To have no record separator (e.g. for binary
1811           files) use --recend "".
1812
1813           --recstart and --recend are used with --pipe.
1814
1815           Use --regexp to interpret --recstart and --recend as regular
1816           expressions. This is slow, however.
1817
1818           Use --remove-rec-sep to remove --recstart and --recend before
1819           passing the block to the job.
1820
1821           See also: --pipe --regexp --remove-rec-sep
1822
1823       --regexp
1824           Use --regexp to interpret --recstart and --recend as regular
1825           expressions. This is slow, however.
1826
1827           See also: --pipe --regexp --remove-rec-sep --recstart --recend
1828
1829       --remove-rec-sep
1830       --removerecsep
1831       --rrs
1832           Remove record separator.
1833
1834           Remove the text matched by --recstart and --recend before piping it
1835           to the command.
1836
1837           Only used with --pipe/--pipe-part.
1838
1839           See also: --pipe --regexp --pipe-part --recstart --recend
1840
1841       --results name
1842       --res name
1843           Save the output into files.
1844
1845           Simple string output dir
1846
1847           If name does not contain replacement strings and does not end in
1848           .csv/.tsv, the output will be stored in a directory tree rooted at
           name.  Within this directory tree, each command will result in
           three files: name/<ARGS>/seq, name/<ARGS>/stdout, and
           name/<ARGS>/stderr, where <ARGS> is a sequence of directories
           representing the header of the input source (if using --header
           :) or the number of the input source and corresponding values.
1854
           E.g.:
1856
1857             parallel --header : --results foo echo {a} {b} \
1858               ::: a I II ::: b III IIII
1859
1860           will generate the files:
1861
1862             foo/a/II/b/III/seq
1863             foo/a/II/b/III/stderr
1864             foo/a/II/b/III/stdout
1865             foo/a/II/b/IIII/seq
1866             foo/a/II/b/IIII/stderr
1867             foo/a/II/b/IIII/stdout
1868             foo/a/I/b/III/seq
1869             foo/a/I/b/III/stderr
1870             foo/a/I/b/III/stdout
1871             foo/a/I/b/IIII/seq
1872             foo/a/I/b/IIII/stderr
1873             foo/a/I/b/IIII/stdout
1874
1875           and
1876
1877             parallel --results foo echo {1} {2} ::: I II ::: III IIII
1878
1879           will generate the files:
1880
1881             foo/1/II/2/III/seq
1882             foo/1/II/2/III/stderr
1883             foo/1/II/2/III/stdout
1884             foo/1/II/2/IIII/seq
1885             foo/1/II/2/IIII/stderr
1886             foo/1/II/2/IIII/stdout
1887             foo/1/I/2/III/seq
1888             foo/1/I/2/III/stderr
1889             foo/1/I/2/III/stdout
1890             foo/1/I/2/IIII/seq
1891             foo/1/I/2/IIII/stderr
1892             foo/1/I/2/IIII/stdout
1893
1894           CSV file output
1895
1896           If name ends in .csv/.tsv the output will be a CSV-file named name.
1897
1898           .csv gives a comma separated value file. .tsv gives a TAB separated
1899           value file.
1900
           -.csv/-.tsv are special: the file will be written to stdout
           (standard output).
1903
1904           JSON file output
1905
1906           If name ends in .json the output will be a JSON-file named name.
1907
           -.json is special: the file will be written to stdout
           (standard output).
1910
1911           Replacement string output file
1912
1913           If name contains a replacement string and the replaced result does
1914           not end in /, then the standard output will be stored in a file
1915           named by this result. Standard error will be stored in the same
1916           file name with '.err' added, and the sequence number will be stored
1917           in the same file name with '.seq' added.
1918
1919           E.g.
1920
1921             parallel --results my_{} echo ::: foo bar baz
1922
1923           will generate the files:
1924
1925             my_bar
1926             my_bar.err
1927             my_bar.seq
1928             my_baz
1929             my_baz.err
1930             my_baz.seq
1931             my_foo
1932             my_foo.err
1933             my_foo.seq
1934
1935           Replacement string output dir
1936
1937           If name contains a replacement string and the replaced result ends
1938           in /, then output files will be stored in the resulting dir.
1939
1940           E.g.
1941
1942             parallel --results my_{}/ echo ::: foo bar baz
1943
1944           will generate the files:
1945
1946             my_bar/seq
1947             my_bar/stderr
1948             my_bar/stdout
1949             my_baz/seq
1950             my_baz/stderr
1951             my_baz/stdout
1952             my_foo/seq
1953             my_foo/stderr
1954             my_foo/stdout
1955
1956           See also: --output-as-files --tag --header --joblog
1957
1958       --resume
1959           Resumes from the last unfinished job.
1960
1961           By reading --joblog or the --results dir GNU parallel will figure
1962           out the last unfinished job and continue from there. As GNU
1963           parallel only looks at the sequence numbers in --joblog then the
1964           input, the command, and --joblog all have to remain unchanged;
1965           otherwise GNU parallel may run wrong commands.
1966
1967           See also: --joblog --results --resume-failed --retries
1968
1969       --resume-failed
1970           Retry all failed and resume from the last unfinished job.
1971
1972           By reading --joblog GNU parallel will figure out the failed jobs
1973           and run those again. After that it will resume last unfinished job
1974           and continue from there. As GNU parallel only looks at the sequence
1975           numbers in --joblog then the input, the command, and --joblog all
1976           have to remain unchanged; otherwise GNU parallel may run wrong
1977           commands.
1978
1979           See also: --joblog --resume --retry-failed --retries
1980
1981       --retry-failed
1982           Retry all failed jobs in joblog.
1983
1984           By reading --joblog GNU parallel will figure out the failed jobs
1985           and run those again.
1986
1987           --retry-failed ignores the command and arguments on the command
1988           line: It only looks at the joblog.
1989
1990           Differences between --resume, --resume-failed, --retry-failed
1991
1992           In this example exit {= $_%=2 =} will cause every other job to
1993           fail.
1994
1995             timeout -k 1 4 parallel --joblog log -j10 \
1996               'sleep {}; exit {= $_%=2 =}' ::: {10..1}
1997
1998           4 jobs completed. 2 failed:
1999
2000             Seq   [...]   Exitval Signal  Command
2001             10    [...]   1       0       sleep 1; exit 1
2002             9     [...]   0       0       sleep 2; exit 0
2003             8     [...]   1       0       sleep 3; exit 1
2004             7     [...]   0       0       sleep 4; exit 0
2005
2006           --resume does not care about the Exitval, but only looks at Seq. If
2007           the Seq is run, it will not be run again. So if needed, you can
2008           change the command for the seqs not run yet:
2009
2010             parallel --resume --joblog log -j10 \
2011               'sleep .{}; exit {= $_%=2 =}' ::: {10..1}
2012
2013             Seq   [...]   Exitval Signal  Command
2014             [... as above ...]
2015             1     [...]   0       0       sleep .10; exit 0
2016             6     [...]   1       0       sleep .5; exit 1
2017             5     [...]   0       0       sleep .6; exit 0
2018             4     [...]   1       0       sleep .7; exit 1
2019             3     [...]   0       0       sleep .8; exit 0
2020             2     [...]   1       0       sleep .9; exit 1
2021
2022           --resume-failed cares about the Exitval, but also only looks at Seq
2023           to figure out which commands to run. Again this means you can
2024           change the command, but not the arguments. It will run the failed
2025           seqs and the seqs not yet run:
2026
2027             parallel --resume-failed --joblog log -j10 \
2028               'echo {};sleep .{}; exit {= $_%=3 =}' ::: {10..1}
2029
2030             Seq   [...]   Exitval Signal  Command
2031             [... as above ...]
2032             10    [...]   1       0       echo 1;sleep .1; exit 1
2033             8     [...]   0       0       echo 3;sleep .3; exit 0
2034             6     [...]   2       0       echo 5;sleep .5; exit 2
2035             4     [...]   1       0       echo 7;sleep .7; exit 1
2036             2     [...]   0       0       echo 9;sleep .9; exit 0
2037
2038           --retry-failed cares about the Exitval, but takes the command from
2039           the joblog. It ignores any arguments or commands given on the
2040           command line:
2041
2042             parallel --retry-failed --joblog log -j10 this part is ignored
2043
2044             Seq   [...]   Exitval Signal  Command
2045             [... as above ...]
2046             10    [...]   1       0       echo 1;sleep .1; exit 1
2047             6     [...]   2       0       echo 5;sleep .5; exit 2
2048             4     [...]   1       0       echo 7;sleep .7; exit 1
2049
2050           See also: --joblog --resume --resume-failed --retries
2051
2052       --retries n
2053           Try failing jobs n times.
2054
2055           If a job fails, retry it on another computer on which it has not
2056           failed. Do this n times. If there are fewer than n computers in
2057           --sshlogin GNU parallel will re-use all the computers. This is
2058           useful if some jobs fail for no apparent reason (such as network
2059           failure).
2060
2061           n=0 means infinite.
2062
2063           See also: --term-seq --sshlogin
2064
2065       --return filename
2066           Transfer files from remote computers.
2067
2068           --return is used with --sshlogin when the arguments are files on
2069           the remote computers. When processing is done the file filename
2070           will be transferred from the remote computer using rsync and will
2071           be put relative to the default login dir. E.g.
2072
2073             echo foo/bar.txt | parallel --return {.}.out \
2074               --sshlogin server.example.com touch {.}.out
2075
2076           This will transfer the file $HOME/foo/bar.out from the computer
2077           server.example.com to the file foo/bar.out after running touch
2078           foo/bar.out on server.example.com.
2079
2080             parallel -S server --trc out/./{}.out touch {}.out ::: in/file
2081
2082           This will transfer the file in/file.out from the computer
2083           server.example.com to the files out/in/file.out after running touch
2084           in/file.out on server.
2085
2086             echo /tmp/foo/bar.txt | parallel --return {.}.out \
2087               --sshlogin server.example.com touch {.}.out
2088
2089           This will transfer the file /tmp/foo/bar.out from the computer
2090           server.example.com to the file /tmp/foo/bar.out after running touch
2091           /tmp/foo/bar.out on server.example.com.
2092
2093           Multiple files can be transferred by repeating the option multiple
2094           times:
2095
2096             echo /tmp/foo/bar.txt | parallel \
2097               --sshlogin server.example.com \
2098               --return {.}.out --return {.}.out2 touch {.}.out {.}.out2
2099
2100           --return is ignored when used with --sshlogin : or when not used
2101           with --sshlogin.
2102
2103           For details on transferring see --transferfile.
2104
2105           See also: --transfer --transferfile --sshlogin --cleanup --workdir
2106
2107       --round-robin
2108       --round
2109           Distribute chunks of standard input in a round robin fashion.
2110
           Normally --pipe will give a single block to each instance of
           the command. With --round-robin all blocks will be written, at
           random, to commands that are already running. This is useful
           if the command takes a long time to initialize.
2115
2116           --keep-order will not work with --round-robin as it is impossible
2117           to track which input block corresponds to which output.
2118
2119           --round-robin implies --pipe, except if --pipe-part is given.
2120
2121           See the section: SPREADING BLOCKS OF DATA.
2122
2123           See also: --bin --group-by --shard
2124
2125       --rpl 'tag perl expression'
2126           Define replacement string.
2127
2128           Use tag as a replacement string for perl expression. This makes it
2129           possible to define your own replacement strings. GNU parallel's 7
2130           replacement strings are implemented as:
2131
2132             --rpl '{} '
2133             --rpl '{#} 1 $_=$job->seq()'
2134             --rpl '{%} 1 $_=$job->slot()'
2135             --rpl '{/} s:.*/::'
2136             --rpl '{//} $Global::use{"File::Basename"} ||=
2137                         eval "use File::Basename; 1;"; $_ = dirname($_);'
2138             --rpl '{/.} s:.*/::; s:\.[^/.]+$::;'
2139             --rpl '{.} s:\.[^/.]+$::'
2140
2141           The --plus replacement strings are implemented as:
2142
2143             --rpl '{+/} s:/[^/]*$:: || s:.*$::'
2144             --rpl '{+.} s:.*\.:: || s:.*$::'
2145             --rpl '{+..} s:.*\.([^/.]+\.[^/.]+)$:$1: || s:.*$::'
2146             --rpl '{+...} s:.*\.([^/.]+\.[^/.]+\.[^/.]+)$:$1: || s:.*$::'
2147             --rpl '{..} s:\.[^/.]+\.[^/.]+$::'
2148             --rpl '{...} s:\.[^/.]+\.[^/.]+\.[^/.]+$::'
2149             --rpl '{/..} s:.*/::; s:\.[^/.]+\.[^/.]+$::'
2150             --rpl '{/...} s:.*/::; s:\.[^/.]+\.[^/.]+\.[^/.]+$::'
2151             --rpl '{choose_k}
2152                    for $t (2..$#arg){ if($arg[$t-1] ge $arg[$t]) { skip() } }'
2153             --rpl '{##} 1 $_=total_jobs()'
2154             --rpl '{0%} 1 $f=1+int((log($Global::max_jobs_running||1)/
2155                                     log(10))); $_=sprintf("%0${f}d",slot())'
2156             --rpl '{0#} 1 $f=1+int((log(total_jobs())/log(10)));
2157                         $_=sprintf("%0${f}d",seq())'
2158
2159             --rpl '{:-([^}]+?)} $_ ||= $$1'
2160             --rpl '{:(\d+?)} substr($_,0,$$1) = ""'
2161             --rpl '{:(\d+?):(\d+?)} $_ = substr($_,$$1,$$2);'
2162             --rpl '{#([^#}][^}]*?)} $nongreedy=::make_regexp_ungreedy($$1);
2163                                     s/^$nongreedy(.*)/$1/;'
2164             --rpl '{##([^#}][^}]*?)} s/^$$1//;'
2165             --rpl '{%([^}]+?)} $nongreedy=::make_regexp_ungreedy($$1);
2166                                s/(.*)$nongreedy$/$1/;'
2167             --rpl '{%%([^}]+?)} s/$$1$//;'
2168             --rpl '{/([^}]+?)/([^}]*?)} s/$$1/$$2/;'
2169             --rpl '{^([^}]+?)} s/^($$1)/uc($1)/e;'
2170             --rpl '{^^([^}]+?)} s/($$1)/uc($1)/eg;'
2171             --rpl '{,([^}]+?)} s/^($$1)/lc($1)/e;'
2172             --rpl '{,,([^}]+?)} s/($$1)/lc($1)/eg;'
2173
2174             --rpl '{slot} 1 $_="\${PARALLEL_JOBSLOT}";uq()'
2175             --rpl '{host} 1 $_="\${PARALLEL_SSHHOST}";uq()'
2176             --rpl '{sshlogin} 1 $_="\${PARALLEL_SSHLOGIN}";uq()'
2177             --rpl '{hgrp} 1 $_="\${PARALLEL_HOSTGROUPS}";uq()'
2178             --rpl '{agrp} 1 $_="\${PARALLEL_ARGHOSTGROUPS}";uq()'
2179
           If the user-defined replacement string starts with '{' it can
           also be used as a positional replacement string (like {2.}).
2182
2183           It is recommended to only change $_ but you have full access to all
2184           of GNU parallel's internal functions and data structures.
2185
2186           Here are a few examples:
2187
2188             Is the job sequence even or odd?
2189             --rpl '{odd} $_ = seq() % 2 ? "odd" : "even"'
2190             Pad job sequence with leading zeros to get equal width
2191             --rpl '{0#} $f=1+int("".(log(total_jobs())/log(10)));
2192               $_=sprintf("%0${f}d",seq())'
2193             Job sequence counting from 0
2194             --rpl '{#0} $_ = seq() - 1'
2195             Job slot counting from 2
2196             --rpl '{%1} $_ = slot() + 1'
2197             Remove all extensions
2198             --rpl '{:} s:(\.[^/]+)*$::'
2199
2200           You can have dynamic replacement strings by including parenthesis
2201           in the replacement string and adding a regular expression between
2202           the parenthesis. The matching string will be inserted as $$1:
2203
2204             parallel --rpl '{%(.*?)} s/$$1//' echo {%.tar.gz} ::: my.tar.gz
2205             parallel --rpl '{:%(.+?)} s:$$1(\.[^/]+)*$::' \
2206               echo {:%_file} ::: my_file.tar.gz
2207             parallel -n3 --rpl '{/:%(.*?)} s:.*/(.*)$$1(\.[^/]+)*$:$1:' \
2208               echo job {#}: {2} {2.} {3/:%_1} ::: a/b.c c/d.e f/g_1.h.i
2209
2210           You can even use multiple matches:
2211
2212             parallel --rpl '{/(.+?)/(.*?)} s/$$1/$$2/;'
2213               echo {/replacethis/withthis} {/b/C} ::: a_replacethis_b
2214
2215             parallel --rpl '{(.*?)/(.*?)} $_="$$2$_$$1"' \
2216               echo {swap/these} ::: -middle-
2217
2218           See also: {=perl expression=} --parens
2219
2220       --rsync-opts options
2221           Options to pass on to rsync.
2222
2223           Setting --rsync-opts takes precedence over setting the environment
2224           variable $PARALLEL_RSYNC_OPTS.
2225
2226       --max-chars max-chars
2227       -s max-chars
2228           Limit length of command.
2229
2230           Use at most max-chars characters per command line, including the
2231           command and initial-arguments and the terminating nulls at the ends
2232           of the argument strings.  The largest allowed value is system-
2233           dependent, and is calculated as the argument length limit for exec,
2234           less the size of your environment.  The default value is the
2235           maximum.
2236
2237           max-chars can be postfixed with K, M, G, T, P, k, m, g, t, or p
2238           (see UNIT PREFIX).
2239
2240           Implies -X unless -m or --xargs is set.
2241
2242           See also: -X -m --xargs --max-line-length-allowed --show-limits
2243
2244       --show-limits
2245           Display limits given by the operating system.
2246
2247           Display the limits on the command-line length which are imposed by
2248           the operating system and the -s option.  Pipe the input from
2249           /dev/null (and perhaps specify --no-run-if-empty) if you don't want
2250           GNU parallel to do anything.
2251
2252           See also: --max-chars --max-line-length-allowed --version
2253
2254       --semaphore
2255           Work as a counting semaphore.
2256
2257           --semaphore will cause GNU parallel to start command in the
2258           background. When the number of jobs given by --jobs is reached, GNU
2259           parallel will wait for one of these to complete before starting
2260           another command.
2261
2262           --semaphore implies --bg unless --fg is specified.
2263
2264           The command sem is an alias for parallel --semaphore.
2265
2266           See also: man sem --bg --fg --semaphore-name --semaphore-timeout
2267           --wait
2268
2269       --semaphore-name name
2270       --id name
2271           Use name as the name of the semaphore.
2272
2273           The default is the name of the controlling tty (output from tty).
2274
           The default normally works as expected when used
           interactively, but when used in a script name should be set
           explicitly.  $$ or my_task_name are often good values.
2278
2279           The semaphore is stored in ~/.parallel/semaphores/
2280
2281           Implies --semaphore.
2282
2283           See also: man sem --semaphore
2284
2285       --semaphore-timeout secs
2286       --st secs
2287           If secs > 0: If the semaphore is not released within secs seconds,
2288           take it anyway.
2289
2290           If secs < 0: If the semaphore is not released within secs seconds,
2291           exit.
2292
2293           secs is in seconds, but can be postfixed with s, m, h, or d (see
2294           the section TIME POSTFIXES).
2295
2296           Implies --semaphore.
2297
2298           See also: man sem
2299
2300       --seqreplace replace-str
2301           Use the replacement string replace-str instead of {#} for job
2302           sequence number.
2303
2304           See also: {#}
2305
2306       --session
2307           Record names in current environment in $PARALLEL_IGNORED_NAMES and
2308           exit.
2309
2310           Only used with env_parallel. Aliases, functions, and variables with
           names in $PARALLEL_IGNORED_NAMES will not be copied.  So you
           should set the variables/functions you want copied after
           running --session.
2313
2314           It is similar to --record-env, but only for this session.
2315
2316           Only supported in Ash, Bash, Dash, Ksh, Sh, and Zsh.
2317
2318           See also: --env --record-env env_parallel
2319
2320       --shard shardexpr
2321           Use shardexpr as shard key and shard input to the jobs.
2322
2323           shardexpr is [column number|column name] [perlexpression] e.g.:
2324
2325             3
2326             Address
2327             3 $_%=100
2328             Address s/\d//g
2329
           Each input line is split using --colsep. The value of the
           column is put into $_, the perl expression is executed, and
           the resulting value is hashed so that all lines with a given
           value are given to the same job slot.
2334
2335           This is similar to sharding in databases.
2336
2337           The performance is in the order of 100K rows per second. Faster if
2338           the shardcol is small (<10), slower if it is big (>100).
2339
2340           --shard requires --pipe and a fixed numeric value for --jobs.
2341
2342           See the section: SPREADING BLOCKS OF DATA.
2343
2344           See also: --bin --group-by --round-robin
2345
2346       --shebang
2347       --hashbang
2348           GNU parallel can be called as a shebang (#!) command as the first
2349           line of a script. The content of the file will be treated as the
2350           input source.
2351
2352           Like this:
2353
2354             #!/usr/bin/parallel --shebang -r wget
2355
2356             https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2357             https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2358             https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2359
2360           --shebang must be set as the first option.
2361
2362           On FreeBSD env is needed:
2363
2364             #!/usr/bin/env -S parallel --shebang -r wget
2365
2366             https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2367             https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2368             https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2369
2370           There are many limitations of shebang (#!) depending on your
2371           operating system. See details on
2372           https://www.in-ulm.de/~mascheck/various/shebang/
2373
2374           See also: --shebang-wrap
2375
2376       --shebang-wrap
2377           GNU parallel can parallelize scripts by wrapping the shebang line.
2378           If the program can be run like this:
2379
2380             cat arguments | parallel the_program
2381
2382           then the script can be changed to:
2383
2384             #!/usr/bin/parallel --shebang-wrap /original/parser --options
2385
2386           E.g.
2387
2388             #!/usr/bin/parallel --shebang-wrap /usr/bin/python
2389
2390           If the program can be run like this:
2391
2392             cat data | parallel --pipe the_program
2393
2394           then the script can be changed to:
2395
2396             #!/usr/bin/parallel --shebang-wrap --pipe /orig/parser --opts
2397
2398           E.g.
2399
2400             #!/usr/bin/parallel --shebang-wrap --pipe /usr/bin/perl -w
2401
2402           --shebang-wrap must be set as the first option.
2403
2404           See also: --shebang
2405
2406       --shell-completion shell
2407           Generate shell completion code for interactive shells.
2408
2409           Supported shells: bash zsh.
2410
2411           Use auto as shell to automatically detect the running shell.
2412
2413           Activate the completion code with:
2414
2415             zsh% eval "$(parallel --shell-completion auto)"
2416             bash$ eval "$(parallel --shell-completion auto)"
2417
2418           Or put this in `/usr/share/zsh/site-functions/_parallel`, then run
2419           `compinit` to generate `~/.zcompdump`:
2420
2421             #compdef parallel
2422
2423             (( $+functions[_comp_parallel] )) ||
2424               eval "$(parallel --shell-completion auto)" &&
2425               _comp_parallel
2426
2427       --shell-quote
2428           Does not run the command but quotes it. Useful for making quoted
2429           composed commands for GNU parallel.
2430
2431           Multiple --shell-quote will quote the string multiple times, so
2432           parallel --shell-quote | parallel --shell-quote can be written as
2433           parallel --shell-quote --shell-quote.
2434
2435           See also: --quote
2436
2437       --shuf
2438           Shuffle jobs.
2439
2440           With multiple input sources it is hard to randomize jobs.  --shuf
2441           will generate all jobs and shuffle them before running them. This
2442           is useful for getting a quick preview of the results before
2443           running the full batch.
2444
2445           Combined with --halt soon,done=1% you can run a random 1% sample of
2446           all jobs:
2447
2448             parallel --shuf --halt soon,done=1% echo ::: {1..100} ::: {1..100}
2449
2450           See also: --halt
2451
2452       --skip-first-line
2453           Do not use the first line of input (used by GNU parallel itself
2454           when called with --shebang).
2455
2456       --sql DBURL (obsolete)
2457           Use --sql-master instead.
2458
2459       --sql-master DBURL
2460           Submit jobs via SQL server. DBURL must point to a table, which will
2461           contain the same information as --joblog, the values from the input
2462           sources (stored in columns V1 .. Vn), and the output (stored in
2463           columns Stdout and Stderr).
2464
2465           If DBURL is prepended with '+' GNU parallel assumes the table is
2466           already made with the correct columns and appends the jobs to it.
2467
2468           If DBURL is not prepended with '+' the table will be dropped and
2469           created with the correct number of V-columns.
2470
2471           --sqlmaster does not run any jobs, but it creates the values for
2472           the jobs to be run. One or more --sqlworker must be run to actually
2473           execute the jobs.
2474
2475           If --wait is set, GNU parallel will wait for the jobs to complete.
2476
2477           The format of a DBURL is:
2478
2479             [sql:]vendor://[[user][:pwd]@][host][:port]/[db]/table
2480
2481           E.g.
2482
2483             sql:mysql://hr:hr@localhost:3306/hrdb/jobs
2484             mysql://scott:tiger@my.example.com/pardb/paralleljobs
2485             sql:oracle://scott:tiger@ora.example.com/xe/parjob
2486             postgresql://scott:tiger@pg.example.com/pgdb/parjob
2487             pg:///parjob
2488             sqlite3:///%2Ftmp%2Fpardb.sqlite/parjob
2489             csv:///%2Ftmp%2Fpardb/parjob
2490
2491           Notice how / in the path of sqlite and CSV must be encoded as %2F,
2492           except the last / in CSV which must be a literal /.
2493
2494           It can also be an alias from ~/.sql/aliases:
2495
2496             :myalias mysql:///mydb/paralleljobs
2497
2498           See also: --sql-and-worker --sql-worker --joblog
2499
2500       --sql-and-worker DBURL
2501           Shorthand for: --sql-master DBURL --sql-worker DBURL.
2502
2503           See also: --sql-master --sql-worker
2504
2505       --sql-worker DBURL
2506           Execute jobs via SQL server. Read the input sources variables from
2507           the table pointed to by DBURL. The command on the command line
2508           should be the same as given by --sqlmaster.
2509
2510           If you have more than one --sqlworker, jobs may be run more than
2511           once.
2512
2513           If --sqlworker runs on the local machine, the hostname in the SQL
2514           table will not be ':' but instead the hostname of the machine.
2515
2516           See also: --sql-master --sql-and-worker
2517
2518       --ssh sshcommand
2519           GNU parallel defaults to using ssh for remote access. This can be
2520           overridden with --ssh. It can also be set on a per server basis
2521           with --sshlogin.
2522
2523           See also: --sshlogin
2524
2525       --ssh-delay duration
2526           Delay starting next ssh by duration.
2527
2528           GNU parallel will not start another ssh for the next duration.
2529
2530           duration is in seconds, but can be postfixed with s, m, h, or d.
2531
2532           See also: TIME POSTFIXES --sshlogin --delay
2533
2534       --sshlogin
2535       [@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]
2536       (alpha testing)
2537       --sshlogin @hostgroup (alpha testing)
2538       -S
2539       [@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]
2540       (alpha testing)
2541       -S @hostgroup (alpha testing)
2542           Distribute jobs to remote computers.
2543
2544           The jobs will be run on a list of remote computers.
2545
2546           If hostgroups is given, the sshlogin will be added to that
2547           hostgroup. Multiple hostgroups are separated by '+'. The sshlogin
2548           will always be added to a hostgroup named the same as sshlogin.
2549
2550           If only the @hostgroup is given, only the sshlogins in that
2551           hostgroup will be used. Multiple @hostgroup can be given.
2552
2553           GNU parallel will determine the number of CPUs on the remote
2554           computers and run the number of jobs as specified by -j.  If the
2555           number ncpus is given, GNU parallel will use this number as the
2556           number of CPUs on the host. Normally ncpus will not be needed.
2557
2558           An sshlogin is of the form:
2559
2560             [sshcommand [options]] [username[:password]@]hostname
2561
2562           If password is given, sshpass will be used. Otherwise the sshlogin
2563           must not require a password (ssh-agent and ssh-copy-id may help
2564           with that).
2565
2566           If the hostname is an IPv6 address, the port can be given separated
2567           with p or #. If the address is enclosed in [] you can also use :.
2568           E.g. ::1p2222 ::1#2222 [::1]:2222
2569
2570           The sshlogin ':' is special: it means 'no ssh' and will therefore
2571           run on the local computer.
2572
2573           The sshlogin '..' is special: it reads sshlogins from
2574           ~/.parallel/sshloginfile or $XDG_CONFIG_HOME/parallel/sshloginfile
2575
2576           The sshlogin '-' is special, too: it reads sshlogins from stdin
2577           (standard input).
2578
2579           To specify more sshlogins separate the sshlogins by comma, newline
2580           (in the same string), or repeat the options multiple times.
2581
2582           GNU parallel splits on , (comma) so if your sshlogin contains ,
2583           (comma) you need to replace it with \, or ,,
2584
2585           For examples: see --sshloginfile.
2586
2587           The remote host must have GNU parallel installed.
2588
2589           --sshlogin is known to cause problems with -m and -X.
2590
2591           See also: --basefile --transferfile --return --cleanup --trc
2592           --sshloginfile --workdir --filter-hosts --ssh
2593
2594       --sshloginfile filename
2595       --slf filename
2596           File with sshlogins. The file consists of sshlogins on separate
2597           lines. Empty lines and lines starting with '#' are ignored.
2598           Example:
2599
2600             server.example.com
2601             username@server2.example.com
2602             8/my-8-cpu-server.example.com
2603             2/my_other_username@my-dualcore.example.net
2604             # This server has SSH running on port 2222
2605             ssh -p 2222 server.example.net
2606             4/ssh -p 2222 quadserver.example.net
2607             # Use a different ssh program
2608             myssh -p 2222 -l myusername hexacpu.example.net
2609             # Use a different ssh program with default number of CPUs
2610             //usr/local/bin/myssh -p 2222 -l myusername hexacpu
2611             # Use a different ssh program with 6 CPUs
2612             6//usr/local/bin/myssh -p 2222 -l myusername hexacpu
2613             # Assume 16 CPUs on the local computer
2614             16/:
2615             # Put server1 in hostgroup1
2616             @hostgroup1/server1
2617             # Put myusername@server2 in hostgroup1+hostgroup2
2618             @hostgroup1+hostgroup2/myusername@server2
2619             # Force 4 CPUs and put 'ssh -p 2222 server3' in hostgroup1
2620             @hostgroup1/4/ssh -p 2222 server3
2621
2622           When using a different ssh program the last argument must be the
2623           hostname.
2624
2625           Multiple --sshloginfile are allowed.
2626
2627           GNU parallel will first look for the file in the current dir; if
2628           that fails it looks for the file in ~/.parallel.
2629
2630           The sshloginfile '..' is special: it reads sshlogins from
2631           ~/.parallel/sshloginfile
2632
2633           The sshloginfile '.' is special: it reads sshlogins from
2634           /etc/parallel/sshloginfile
2635
2636           The sshloginfile '-' is special, too: it reads sshlogins from stdin
2637           (standard input).
2638
2639           If the sshloginfile is changed it will be re-read when a job
2640           finishes, though at most once per second. This makes it possible to
2641           add and remove hosts while running.
2642
2643           This can be used to have a daemon that updates the sshloginfile to
2644           only contain servers that are up:
2645
2646               cp original.slf tmp2.slf
2647               while true; do
2648                 nice parallel --nonall -j0 -k --slf original.slf \
2649                   --tag echo | perl -pe 's/\t$//' > tmp.slf
2650                 if ! diff -q tmp.slf tmp2.slf >/dev/null; then
2651                   mv tmp.slf tmp2.slf
2652                 fi
2653                 sleep 10
2654               done &
2655               parallel --slf tmp2.slf ...
2656
2657           See also: --filter-hosts
2658
2659       --slotreplace replace-str
2660           Use the replacement string replace-str instead of {%} for job slot
2661           number.
2662
2663           See also: {%}
2664
2665       --silent
2666           Silent.
2667
2668           The job to be run will not be printed. This is the default.  Can be
2669           reversed with -v.
2670
2671           See also: -v
2672
2673       --template file=repl
2674       --tmpl file=repl
2675           Replace replacement strings in file and save it in repl.
2676
2677           All replacement strings in the contents of file will be replaced.
2678           All replacement strings in the name repl will be replaced.
2679
2680           With --cleanup the new file will be removed when the job is done.
2681
2682           If my.tmpl contains this:
2683
2684             Xval: {x}
2685             Yval: {y}
2686             FixedValue: 9
2687             # x with 2 decimals
2688             DecimalX: {=x $_=sprintf("%.2f",$_) =}
2689             TenX: {=x $_=$_*10 =}
2690             RandomVal: {=1 $_=rand() =}
2691
2692           it can be used like this:
2693
2694             myprog() { echo Using "$@"; cat "$@"; }
2695             export -f myprog
2696             parallel --cleanup --header : --tmpl my.tmpl={#}.t myprog {#}.t \
2697               ::: x 1.234 2.345 3.45678 ::: y 1 2 3
2698
2699           See also: {} --cleanup
2700
2701       --tty
2702           Open terminal tty.
2703
2704           If GNU parallel is used for starting a program that accesses the
2705           tty (such as an interactive program) then this option may be
2706           needed. It will default to starting only one job at a time (i.e.
2707           -j1), not buffer the output (i.e. -u), and it will open a tty for
2708           the job.
2709
2710           You can of course override -j1 and -u.
2711
2712           Using --tty unfortunately means that GNU parallel cannot kill the
2713           jobs (with --timeout, --memfree, or --halt). This is due to GNU
2714           parallel giving each child its own process group, which is then
2715           killed. Process groups are dependent on the tty.
2716
2717           See also: --ungroup --open-tty
2718
2719       --tag (alpha testing)
2720           Tag lines with arguments.
2721
2722           Each output line will be prepended with the arguments and TAB (\t).
2723           When combined with --onall or --nonall the lines will be prepended
2724           with the sshlogin instead.
2725
2726           --tag is ignored when using -u.
2727
2728           See also: --tagstring --ctag
2729
2730       --tagstring str (alpha testing)
2731           Tag lines with a string.
2732
2733           Each output line will be prepended with str and TAB (\t). str can
2734           contain replacement strings such as {}.
2735
2736           --tagstring is ignored when using -u, --onall, and --nonall.
2737
2738           See also: --tag --ctagstring
2739
2740       --tee
2741           Pipe all data to all jobs.
2742
2743           Used with --pipe/--pipe-part and :::.
2744
2745             seq 1000 | parallel --pipe --tee -v wc {} ::: -w -l -c
2746
2747           How many numbers in 1..1000 contain 0..9, and how many bytes do
2748           they fill:
2749
2750             seq 1000 | parallel --pipe --tee --tag \
2751               'grep {1} | wc {2}' ::: {0..9} ::: -l -c
2752
2753           How many words contain a..z and how many bytes do they fill?
2754
2755             parallel -a /usr/share/dict/words --pipe-part --tee --tag \
2756               'grep {1} | wc {2}' ::: {a..z} ::: -l -c
2757
2758           See also: ::: --pipe --pipe-part
2759
2760       --term-seq sequence
2761           Termination sequence.
2762
2763           When a job is killed due to --timeout, --memfree, --halt, or
2764           abnormal termination of GNU parallel, sequence determines how the
2765           job is killed. The default is:
2766
2767               TERM,200,TERM,100,TERM,50,KILL,25
2768
2769           which sends a TERM signal, waits 200 ms, sends another TERM signal,
2770           waits 100 ms, sends another TERM signal, waits 50 ms, sends a KILL
2771           signal, waits 25 ms, and exits. GNU parallel detects if a process
2772           dies before the waiting time is up.
2773
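           The sequence is simply alternating signal names and waits in
           milliseconds. A sketch of how such a value splits into
           (signal, wait) pairs, for illustration only:

```shell
# Sketch: split a --term-seq value into (signal, wait-in-ms) pairs.
seq="TERM,200,TERM,100,TERM,50,KILL,25"
echo "$seq" | tr ',' '\n' | paste - - | while read sig ms; do
  echo "send SIG$sig, then wait ${ms} ms"
done
```
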
2774           See also: --halt --timeout --memfree
2775
2776       --total-jobs jobs (alpha testing)
2777       --total jobs (alpha testing)
2778           Provide the total number of jobs for computing the ETA, which is
2779           also used for --bar.
2780
2781           Without --total-jobs GNU parallel will read all jobs before
2782           starting any job. --total-jobs is useful if the input is generated
2783           slowly.
2784
2785           See also: --bar --eta
2786
2787       --tmpdir dirname
2788           Directory for temporary files.
2789
2790           GNU parallel normally buffers output into temporary files in /tmp.
2791           By setting --tmpdir you can use a different dir for the files.
2792           Setting --tmpdir is equivalent to setting $TMPDIR.
2793
2794           See also: --compress $TMPDIR $PARALLEL_REMOTE_TMPDIR
2795
2796       --tmux (Long beta testing)
2797           Use tmux for output. Start a tmux session and run each job in a
2798           window in that session. No other output will be produced.
2799
2800           See also: --tmuxpane
2801
2802       --tmuxpane (Long beta testing)
2803           Use tmux for output but put output into panes in the first window.
2804           Useful if you want to monitor the progress of less than 100
2805           concurrent jobs.
2806
2807           See also: --tmux
2808
2809       --timeout duration
2810           Time out for command. If the command runs for longer than duration
2811           seconds it will get killed as per --term-seq.
2812
2813           If duration is followed by a % then the timeout will dynamically
2814           be computed as a percentage of the median runtime of successful
2815           jobs. Only values > 100% make sense.
2816
2817           duration is in seconds, but can be postfixed with s, m, h, or d.
2818
2819           See also: TIME POSTFIXES --term-seq --retries
2820
2821       --verbose
2822       -t  Print the job to be run on stderr (standard error).
2823
2824           See also: -v --interactive
2825
2826       --transfer
2827           Transfer files to remote computers.
2828
2829           Shorthand for: --transferfile {}.
2830
2831           See also: --transferfile.
2832
2833       --transferfile filename
2834       --tf filename
2835           Transfer filename to remote computers.
2836
2837           --transferfile is used with --sshlogin to transfer files to the
2838           remote computers. The files will be transferred using rsync and
2839           will be put relative to the work dir.
2840
2841           The filename will normally contain a replacement string.
2842
2843           If the path contains /./ the remaining path will be relative to the
2844           work dir (for details: see rsync). If the work dir is /home/user,
2845           the files will be transferred as follows:
2846
2847             /tmp/foo/bar   => /tmp/foo/bar
2848             tmp/foo/bar    => /home/user/tmp/foo/bar
2849             /tmp/./foo/bar => /home/user/foo/bar
2850             tmp/./foo/bar  => /home/user/foo/bar
2851
2852           Examples
2853
2854           This will transfer the file foo/bar.txt to the computer
2855           server.example.com to the file $HOME/foo/bar.txt before running wc
2856           foo/bar.txt on server.example.com:
2857
2858             echo foo/bar.txt | parallel --transferfile {} \
2859               --sshlogin server.example.com wc
2860
2861           This will transfer the file /tmp/foo/bar.txt to the computer
2862           server.example.com to the file /tmp/foo/bar.txt before running wc
2863           /tmp/foo/bar.txt on server.example.com:
2864
2865             echo /tmp/foo/bar.txt | parallel --transferfile {} \
2866               --sshlogin server.example.com wc
2867
2868           This will transfer the file /tmp/foo/bar.txt to the computer
2869           server.example.com to the file foo/bar.txt before running wc
2870           ./foo/bar.txt on server.example.com:
2871
2872             echo /tmp/./foo/bar.txt | parallel --transferfile {} \
2873               --sshlogin server.example.com wc {= s:.*/\./:./: =}
2874
2875           --transferfile is often used with --return and --cleanup. A
2876           shorthand for --transferfile {} is --transfer.
2877
2878           --transferfile is ignored when used with --sshlogin : or when not
2879           used with --sshlogin.
2880
2881           See also: --workdir --sshlogin --basefile --return --cleanup
2882
2883       --trc filename
2884           Transfer, Return, Cleanup. Shorthand for: --transfer --return
2885           filename --cleanup
2886
2887           See also: --transfer --return --cleanup
2888
2889       --trim <n|l|r|lr|rl>
2890           Trim white space in input.
2891
2892           n   No trim. Input is not modified. This is the default.
2893
2894           l   Left trim. Remove white space from start of input. E.g. " a bc
2895               " -> "a bc ".
2896
2897           r   Right trim. Remove white space from end of input. E.g. " a bc "
2898               -> " a bc".
2899
2900           lr
2901           rl  Both trim. Remove white space from both start and end of input.
2902               E.g. " a bc " -> "a bc". This is the default if --colsep is
2903               used.
2904
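           For comparison, the same trims can be reproduced with plain sed
           (an illustration only; GNU parallel does the trimming itself):

```shell
# Sketch: l, r, and lr trims of " a bc " done with sed.
printf '%s' ' a bc ' | sed 's/^[[:space:]]*//'                    # l
printf '%s' ' a bc ' | sed 's/[[:space:]]*$//'                    # r
printf '%s' ' a bc ' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'  # lr
```
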
2905           See also: --no-run-if-empty {} --colsep
2906
2907       --ungroup
2908       -u  Ungroup output.
2909
2910           Output is printed as soon as possible and bypasses GNU parallel's
2911           internal processing. This may cause output from different commands
2912           to be mixed, and should thus only be used if you do not care about
2913           the output. Compare these:
2914
2915             seq 4 | parallel -j0 \
2916               'sleep {};echo -n start{};sleep {};echo {}end'
2917             seq 4 | parallel -u -j0 \
2918               'sleep {};echo -n start{};sleep {};echo {}end'
2919
2920           It also disables --tag. GNU parallel outputs faster with -u.
2921           Compare the speeds of these:
2922
2923             parallel seq ::: 300000000 >/dev/null
2924             parallel -u seq ::: 300000000 >/dev/null
2925             parallel --line-buffer seq ::: 300000000 >/dev/null
2926
2927           Can be reversed with --group.
2928
2929           See also: --line-buffer --group
2930
2931       --extensionreplace replace-str
2932       --er replace-str
2933           Use the replacement string replace-str instead of {.} for input
2934           line without extension.
2935
2936           See also: {.}
2937
2938       --use-sockets-instead-of-threads
2939           See also: --use-cores-instead-of-threads
2940
2941       --use-cores-instead-of-threads
2942       --use-cpus-instead-of-cores (obsolete)
2943           Determine how GNU parallel counts the number of CPUs.
2944
2945           GNU parallel uses this number when the number of jobslots (--jobs)
2946           is computed relative to the number of CPUs (e.g. 100% or +1).
2947
2948           CPUs can be counted in three different ways:
2949
2950           sockets The number of filled CPU sockets (i.e. the number of
2951                   physical chips).
2952
2953           cores   The number of physical cores (i.e. the number of physical
2954                   compute cores).
2955
2956           threads The number of hyperthreaded cores (i.e. the number of
2957                   virtual cores - with some of them possibly being
2958                   hyperthreaded)
2959
2960           Normally the number of CPUs is computed as the number of CPU
2961           threads. With --use-sockets-instead-of-threads or
2962           --use-cores-instead-of-threads you can force it to be computed as
2963           the number of filled sockets or number of cores instead.
2964
2965           Most users will not need these options.
2966
2967           --use-cpus-instead-of-cores is a (misleading) alias for
2968           --use-sockets-instead-of-threads and is kept for backwards
2969           compatibility.
2970
2971           See also: --number-of-threads --number-of-cores --number-of-sockets
2972
2973       -v  Verbose.
2974
2975           Print the job to be run on stdout (standard output). Can be
2976           reversed with --silent.
2977
2978           Use -v -v to print the wrapping ssh command when running remotely.
2979
2980           See also: -t
2981
2982       --version
2983       -V  Print the version of GNU parallel and exit.
2984
2985       --workdir mydir
2986       --wd mydir
2987           Jobs will be run in the dir mydir. The default is the current dir
2988           for the local machine, and the login dir for remote computers.
2989
2990           Files transferred using --transferfile and --return will be
2991           relative to mydir on remote computers.
2992
2993           The special mydir value ... will create working dirs under
2994           ~/.parallel/tmp/. If --cleanup is given these dirs will be removed.
2995
2996           The special mydir value . uses the current working dir.  If the
2997           current working dir is beneath your home dir, the value . is
2998           treated as the relative path to your home dir. This means that if
2999           your home dir is different on remote computers (e.g. if your login
3000           is different) the relative path will still be relative to your home
3001           dir.
3002
3003           To see the difference try:
3004
3005             parallel -S server pwd ::: ""
3006             parallel --wd . -S server pwd ::: ""
3007             parallel --wd ... -S server pwd ::: ""
3008
3009           mydir can contain GNU parallel's replacement strings.
3010
3011       --wait
3012           Wait for all commands to complete.
3013
3014           Used with --semaphore or --sqlmaster.
3015
3016           See also: man sem
3017
3018       -X  Multiple arguments with context replace. Insert as many arguments
3019           as the command line length permits. If multiple jobs are being run
3020           in parallel: distribute the arguments evenly among the jobs. Use
3021           -j1 to avoid this.
3022
3023           If {} is not used the arguments will be appended to the line.  If
3024           {} is used as part of a word (like pic{}.jpg) then the whole word
3025           will be repeated. If {} is used multiple times each {} will be
3026           replaced with the arguments.
3027
3028           Normally -X will do the right thing, whereas -m can give unexpected
3029           results if {} is used as part of a word.
3030
3031           Support for -X with --sshlogin is limited and may fail.
3032
3033           See also: -m
3034
3035       --exit
3036       -x  Exit if the size (see the -s option) is exceeded.
3037
3038       --xargs
3039           Multiple arguments. Insert as many arguments as the command line
3040           length permits.
3041
3042           If {} is not used the arguments will be appended to the line.  If
3043           {} is used multiple times each {} will be replaced with all the
3044           arguments.
3045
3046           Support for --xargs with --sshlogin is limited and may fail.
3047
3048           See also: -X
3049

EXAMPLES

3051       See: man parallel_examples
3052

SPREADING BLOCKS OF DATA

3054       --round-robin, --pipe-part, --shard, --bin and --group-by are all
3055       specialized versions of --pipe.
3056
3057       In the following n is the number of jobslots given by --jobs. A record
3058       starts with --recstart and ends with --recend. It is typically a full
3059       line. A chunk is a number of full records that is approximately the
3060       size of a block. A block can contain half records, a chunk cannot.
3061
3062       --pipe starts one job per chunk. It reads blocks from stdin (standard
3063       input). It finds a record end near a block border and passes a chunk to
3064       the program.
3065
3066       --pipe-part starts one job per chunk - just like normal --pipe. It
3067       first finds record endings near all block borders in the file and then
3068       starts the jobs. By using --block -1 it will set the block size to
3069       size-of-file/n. Used this way it will start n jobs in total.
3070
3071       --round-robin starts n jobs in total. It reads a block and passes a
3072       chunk to whichever job is ready to read. It does not parse the content
3073       except for identifying where a record ends to make sure it only passes
3074       full records.
3075
3076       --shard starts n jobs in total. It parses each line to read the value
3077       in the given column. Based on this value the line is passed to one of
3078       the n jobs. All lines having this value will be given to the same
3079       jobslot.
3080
3081       --bin works like --shard but the value of the column is the jobslot
3082       number it will be passed to. If the value is bigger than n, then n will
3083       be subtracted from the value until the value is smaller than or equal
3084       to n.
3085
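       The repeated subtraction is equivalent to a 1-based modulo. A sketch
       in shell (the function name bin_slot is made up for the example):

```shell
# Sketch: --bin's wrap-around of a column value v onto n jobslots.
bin_slot() { v=$1; n=$2; echo $(( (v - 1) % n + 1 )); }
bin_slot 3 4   # within range: slot 3
bin_slot 7 4   # 7-4=3: slot 3
bin_slot 4 4   # exactly n: slot 4
```
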
3086       --group-by starts one job per chunk. Record borders are not given by
3087       --recend/--recstart. Instead a record is defined by a number of lines
3088       having the same value in a given column. So the value of a given column
3089       changes at a chunk border. With --pipe every line is parsed, with
3090       --pipe-part only a few lines are parsed to find the chunk border.
3091
3092       --group-by can be combined with --round-robin or --pipe-part.
3093

TIME POSTFIXES

3095       Arguments that give a duration are given in seconds, but can be
3096       expressed as floats postfixed with s, m, h, or d which would multiply
3097       the float by 1, 60, 60*60, or 60*60*24. Thus these are equivalent:
3098       100000 and 1d3.5h16.6m4s.
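
       The equivalence can be checked by expanding each postfix by hand:

```shell
# Check: 1d + 3.5h + 16.6m + 4s expands to 100000 seconds
# (d=86400, h=3600, m=60, s=1).
awk 'BEGIN { printf "%.0f\n", 1*86400 + 3.5*3600 + 16.6*60 + 4 }'
```
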
3099

UNIT PREFIX

3101       Many numerical arguments in GNU parallel can be postfixed with K, M, G,
3102       T, P, k, m, g, t, or p which would multiply the number with 1024,
3103       1048576, 1073741824, 1099511627776, 1125899906842624, 1000, 1000000,
3104       1000000000, 1000000000000, or 1000000000000000, respectively.
3105
3106       You can even give it as a math expression. E.g. 1000000 can be written
3107       as 1M-12*2.024*2k.
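
       Expanding the prefixes in that expression by hand confirms it:

```shell
# Check: 1M - 12*2.024*2k = 1048576 - 48576 = 1000000
# (M = 1024^2, k = 1000).
awk 'BEGIN { printf "%.0f\n", 1024^2 - 12*2.024*2*1000 }'
```
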
3108

QUOTING

3110       GNU parallel is very liberal in quoting. You only need to quote
3111       characters that have special meaning in shell:
3112
3113         ( ) $ ` ' " < > ; | \
3114
3115       and depending on context these need to be quoted, too:
3116
3117         ~ & # ! ? space * {
3118
3119       Therefore most people will never need more quoting than putting '\' in
3120       front of the special characters.
3121
3122       Often you can simply put \' around every ':
3123
3124         perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
3125
3126       can be quoted:
3127
3128         parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\' ::: file
3129
3130       However, when you want to use a shell variable you need to quote the
3131       $-sign. Here is an example using $PARALLEL_SEQ. This variable is set by
3132       GNU parallel itself, so the evaluation of the $ must be done by the sub
3133       shell started by GNU parallel:
3134
3135         seq 10 | parallel -N2 echo seq:\$PARALLEL_SEQ arg1:{1} arg2:{2}
3136
3137       If the variable is set before GNU parallel starts you can do this:
3138
3139         VAR=this_is_set_before_starting
3140         echo test | parallel echo {} $VAR
3141
3142       Prints: test this_is_set_before_starting
3143
3144       It is a little more tricky if the variable contains more than one space
3145       in a row:
3146
3147         VAR="two  spaces  between  each  word"
3148         echo test | parallel echo {} \'"$VAR"\'
3149
3150       Prints: test two  spaces  between  each  word
3151
3152       If the variable should not be evaluated by the shell starting GNU
3153       parallel but be evaluated by the sub shell started by GNU parallel,
3154       then you need to quote it:
3155
3156         echo test | parallel VAR=this_is_set_after_starting \; echo {} \$VAR
3157
3158       Prints: test this_is_set_after_starting
3159
3160       It is a little more tricky if the variable contains space:
3161
3162         echo test |\
3163           parallel VAR='"two  spaces  between  each  word"' echo {} \'"$VAR"\'
3164
3165       Prints: test two  spaces  between  each  word
3166
3167       $$ is the shell variable containing the process id of the shell. This
3168       will print the process id of the shell running GNU parallel:
3169
3170         seq 10 | parallel echo $$
3171
3172       And this will print the process ids of the sub shells started by GNU
3173       parallel.
3174
3175         seq 10 | parallel echo \$\$
3176
3177       If the special characters should not be evaluated by the sub shell then
3178       you need to protect them against evaluation from both the shell starting
3179       GNU parallel and the sub shell:
3180
3181         echo test | parallel echo {} \\\$VAR
3182
3183       Prints: test $VAR
3184
3185       GNU parallel can protect against evaluation by the sub shell by using
3186       -q:
3187
3188         echo test | parallel -q echo {} \$VAR
3189
3190       Prints: test $VAR
3191
3192       This is particularly useful if you have lots of quoting. If you want to
3193       run a perl script like this:
3194
3195         perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
3196
3197       It needs to be quoted like one of these:
3198
3199         ls | parallel perl -ne '/^\\S+\\s+\\S+\$/\ and\ print\ \$ARGV,\"\\n\"'
3200         ls | parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\'
3201
3202       Notice how spaces, \'s, "'s, and $'s need to be quoted. GNU parallel
3203       can do the quoting by using option -q:
3204
3205         ls | parallel -q  perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"'
3206
3207       However, this means you cannot make the sub shell interpret special
3208       characters. For example because of -q this WILL NOT WORK:
3209
3210         ls *.gz | parallel -q "zcat {} >{.}"
3211         ls *.gz | parallel -q "zcat {} | bzip2 >{.}.bz2"
3212
3213       because > and | need to be interpreted by the sub shell.
3214
3215       If you get errors like:
3216
3217         sh: -c: line 0: syntax error near unexpected token
3218         sh: Syntax error: Unterminated quoted string
3219         sh: -c: line 0: unexpected EOF while looking for matching `''
3220         sh: -c: line 1: syntax error: unexpected end of file
3221         zsh:1: no matches found:
3222
3223       then you might try using -q.
3224
3225       If you are using bash process substitution like <(cat foo) then you may
3226       try -q and prepend the command with bash -c:
3227
3228         ls | parallel -q bash -c 'wc -c <(echo {})'
3229
3230       Or for substituting output:
3231
3232         ls | parallel -q bash -c \
3233           'tar c {} | tee >(gzip >{}.tar.gz) | bzip2 >{}.tar.bz2'
3234
3235       Conclusion: If this is confusing consider avoiding having to deal with
3236       quoting by writing a small script or a function (remember to export -f
3237       the function) and have GNU parallel call that.
3238

LIST RUNNING JOBS

3240       If you want a list of the jobs currently running you can run:
3241
3242         killall -USR1 parallel
3243
3244       GNU parallel will then print the currently running jobs on stderr
3245       (standard error).
3246

COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS

3248       If you regret starting a lot of jobs you can simply break GNU parallel,
3249       but if you want to make sure you do not have half-completed jobs you
3250       should send the signal SIGHUP to GNU parallel:
3251
3252         killall -HUP parallel
3253
3254       This will tell GNU parallel to not start any new jobs, but wait until
3255       the currently running jobs are finished before exiting.
3256

ENVIRONMENT VARIABLES

3258       $PARALLEL_HOME
3259                Dir where GNU parallel stores config files, semaphores, and
3260                caches information between invocations. If set to a non-
3261                existent dir, the dir will be created.
3262
3263                Default: $HOME/.parallel.
3264
3265       $PARALLEL_ARGHOSTGROUPS
3266                When using --hostgroups GNU parallel sets this to the
3267                hostgroups of the job.
3268
3269                Remember to quote the $, so it gets evaluated by the correct
3270                shell. Or use --plus and {agrp}.
3271
3272       $PARALLEL_HOSTGROUPS
3273                When using --hostgroups GNU parallel sets this to the
3274                hostgroups of the sshlogin that the job is run on.
3275
3276                Remember to quote the $, so it gets evaluated by the correct
3277                shell. Or use --plus and {hgrp}.
3278
3279       $PARALLEL_JOBSLOT
3280                Set by GNU parallel and can be used in jobs run by GNU
3281                parallel.  Remember to quote the $, so it gets evaluated by
3282                the correct shell. Or use --plus and {slot}.
3283
3284                $PARALLEL_JOBSLOT is the jobslot of the job. It is equal to
3285                {%} unless the job is being retried. See {%} for details.
3286
3287       $PARALLEL_PID
3288                Set by GNU parallel and can be used in jobs run by GNU
3289                parallel.  Remember to quote the $, so it gets evaluated by
3290                the correct shell.
3291
3292                This makes it possible for the jobs to communicate directly to
3293                GNU parallel.
3294
3295                Example: If each of the jobs tests a solution and one of
3296                the jobs finds the solution, that job can tell GNU parallel
3297                not to start more jobs by running: kill -HUP $PARALLEL_PID.
3298                This only works on the local computer.
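                A runnable sketch of this pattern (it assumes GNU parallel is
                installed; the test [ {} -eq 42 ] is a made-up stand-in for a
                real solution check):

```shell
# Stop launching new jobs as soon as one job finds the answer.
# The command is in single quotes so $PARALLEL_PID is expanded
# by the sub shell started by GNU parallel, not the outer shell:
seq 1000 | parallel 'if [ {} -eq 42 ]; then kill -HUP $PARALLEL_PID; fi'
```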
3299
3300       $PARALLEL_RSYNC_OPTS
3301                Options to pass on to rsync. Defaults to: -rlDzR.
3302
3303       $PARALLEL_SHELL
3304                Use this shell for the commands run by GNU parallel:
3305
3306                • $PARALLEL_SHELL. If undefined use:
3307
3308                • The shell that started GNU parallel. If that cannot be
3309                  determined:
3310
3311                • $SHELL. If undefined use:
3312
3313                • /bin/sh
3314
3315       $PARALLEL_SSH
3316                GNU parallel defaults to using the ssh command for remote
3317                access. This can be overridden with $PARALLEL_SSH, which again
3318                can be overridden with --ssh. It can also be set on a per
3319                server basis (see --sshlogin).
3320
3321       $PARALLEL_SSHHOST
3322                Set by GNU parallel and can be used in jobs run by GNU
3323                parallel.  Remember to quote the $, so it gets evaluated by
3324                the correct shell. Or use --plus and {host}.
3325
3326                $PARALLEL_SSHHOST is the host part of an sshlogin line. E.g.
3327
3328                  4//usr/bin/specialssh user@host
3329
3330                becomes:
3331
3332                  host
3333
3334       $PARALLEL_SSHLOGIN
3335                Set by GNU parallel and can be used in jobs run by GNU
3336                parallel.  Remember to quote the $, so it gets evaluated by
3337                the correct shell. Or use --plus and {sshlogin}.
3338
3339                The value is the sshlogin line with the number of threads
3340                E.g.
3341
3342                  4//usr/bin/specialssh user@host
3343
3344                becomes:
3345
3346                  /usr/bin/specialssh user@host
3347
3348       $PARALLEL_SEQ
3349                Set by GNU parallel and can be used in jobs run by GNU
3350                parallel.  Remember to quote the $, so it gets evaluated by
3351                the correct shell.
3352
3353                $PARALLEL_SEQ is the sequence number of the job running.
3354
3355                Example:
3356
3357                  seq 10 | parallel -N2 \
3358                    echo seq:'$'PARALLEL_SEQ arg1:{1} arg2:{2}
3359
3360                {#} is a shorthand for $PARALLEL_SEQ.
3361
3362       $PARALLEL_TMUX
3363                Path to tmux. If unset the tmux in $PATH is used.
3364
3365       $TMPDIR  Directory for temporary files.
3366
3367                See also: --tmpdir
3368
3369       $PARALLEL_REMOTE_TMPDIR
3370                Directory for temporary files on remote servers.
3371
3372                See also: --tmpdir
3373
3374       $PARALLEL
3375                The environment variable $PARALLEL will be used as default
3376                options for GNU parallel. If the variable contains special
3377                shell characters (e.g. $, *, or space) then these need to
3378                be escaped with \.
3379
3380                Example:
3381
3382                  cat list | parallel -j1 -k -v ls
3383                  cat list | parallel -j1 -k -v -S"myssh user@server" ls
3384
3385                can be written as:
3386
3387                  cat list | PARALLEL="-kvj1" parallel ls
3388                  cat list | PARALLEL='-kvj1 -S myssh\ user@server' \
3389                    parallel echo
3390
3391                Notice the \ after 'myssh' is needed because 'myssh' and
3392                'user@server' must be one argument.
3393
3394                See also: --profile
3395

DEFAULT PROFILE (CONFIG FILE)

3397       The global configuration file /etc/parallel/config, followed by user
3398       configuration file ~/.parallel/config (formerly known as .parallelrc)
3399       will be read in turn if they exist.  Lines starting with '#' will be
3400       ignored. The format can follow that of the environment variable
3401       $PARALLEL, but it is often easier to simply put each option on its own
3402       line.
3403
3404       Options on the command line take precedence, followed by the
3405       environment variable $PARALLEL, user configuration file
3406       ~/.parallel/config, and finally the global configuration file
3407       /etc/parallel/config.
3408
3409       Note that no file that is read for options, nor the environment
3410       variable $PARALLEL, may contain retired options such as --tollef.
3411

PROFILE FILES

3413       If --profile is set, GNU parallel will read the profile from that file
3414       rather than the global or user configuration files. You can have
3415       multiple --profiles.
3416
3417       Profiles are searched for in ~/.parallel. If the name starts with / it
3418       is seen as an absolute path. If the name starts with ./ it is seen as a
3419       relative path from current dir.
3420
3421       Example: Profile for running a command on every sshlogin in
3422       ~/.ssh/sshlogins and prepend the output with the sshlogin:
3423
3424         echo --tag -S .. --nonall > ~/.parallel/nonall_profile
3425         parallel -J nonall_profile uptime
3426
3427       Example: Profile for running every command with -j-1 and nice
3428
3429         echo -j-1 nice > ~/.parallel/nice_profile
3430         parallel -J nice_profile bzip2 -9 ::: *
3431
3432       Example: Profile for running a perl script before every command:
3433
3434         echo "perl -e '\$a=\$\$; print \$a,\" \",'\$PARALLEL_SEQ',\" \";';" \
3435           > ~/.parallel/pre_perl
3436         parallel -J pre_perl echo ::: *
3437
3438       Note how the $ and " need to be quoted using \.
3439
3440       Example: Profile for running distributed jobs with nice on the remote
3441       computers:
3442
3443         echo -S .. nice > ~/.parallel/dist
3444         parallel -J dist --trc {.}.bz2 bzip2 -9 ::: *
3445

EXIT STATUS

3447       Exit status depends on --halt-on-error if one of these is used:
3448       success=X, success=Y%, fail=Y%.
3449
3450       0     All jobs ran without error. If success=X is used: X jobs ran
3451             without error. If success=Y% is used: Y% of the jobs ran without
3452             error.
3453
3454       1-100 Some of the jobs failed. The exit status gives the number of
3455             failed jobs. If Y% is used the exit status is the percentage of
3456             jobs that failed.
3457
3458       101   More than 100 jobs failed.
3459
3460       255   Other error.
3461
3462       -1 (In joblog and SQL table)
3463             Killed by Ctrl-C, timeout, not enough memory or similar.
3464
3465       -2 (In joblog and SQL table)
3466             skip() was called in {= =}.
3467
3468       -1000 (In SQL table)
3469             Job is ready to run (set by --sqlmaster).
3470
3471       -1220 (In SQL table)
3472             Job is taken by worker (set by --sqlworker).
3473
3474       If fail=1 is used, the exit status will be the exit status of the
3475       failing job.
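       The 0/1-100 rules can be observed directly (a sketch that assumes GNU
       parallel is installed):

```shell
# Four jobs; those given a non-zero argument fail, so the exit
# status of parallel is the number of failed jobs:
parallel 'exit {}' ::: 0 1 1 0
echo $?        # 2 jobs failed, so the exit status is 2
```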
3476

DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES

3478       See: man parallel_alternatives
3479

BUGS

3481   Quoting of newline
3482       Because of the way newline is quoted this will not work:
3483
3484         echo 1,2,3 | parallel -vkd, "echo 'a{}b'"
3485
3486       However, these will all work:
3487
3488         echo 1,2,3 | parallel -vkd, echo a{}b
3489         echo 1,2,3 | parallel -vkd, "echo 'a'{}'b'"
3490         echo 1,2,3 | parallel -vkd, "echo 'a'"{}"'b'"
3491
3492   Speed
3493       Startup
3494
3495       GNU parallel is slow at starting up - around 250 ms the first time and
3496       150 ms after that.
3497
3498       Job startup
3499
3500       Starting a job on the local machine takes around 3-10 ms. This can be a
3501       big overhead if the job takes very few ms to run. Often you can group
3502       small jobs together using -X which will make the overhead less
3503       significant. Or you can run multiple GNU parallels as described in
3504       EXAMPLE: Speeding up fast jobs.
3505
3506       SSH
3507
3508       When using multiple computers GNU parallel opens ssh connections to
3509       them to figure out how many connections can be used reliably
3510       simultaneously (namely sshd's MaxStartups). This test is done for each
3511       host in serial, so if your --sshloginfile contains many hosts it may be
3512       slow.
3513
3514       If your jobs are short you may see that there are fewer jobs running on
3515       the remote systems than expected. This is due to time spent logging in
3516       and out. -M may help here.
3517
3518       Disk access
3519
3520       A single disk can normally read data faster if it reads one file at a
3521       time instead of reading a lot of files in parallel, as this will avoid
3522       disk seeks. However, newer disk systems with multiple drives can read
3523       faster if reading from multiple files in parallel.
3524
3525       If the jobs are of the form read-all-compute-all-write-all, so
3526       everything is read before anything is written, it may be faster to
3527       force only one disk access at a time:
3528
3529         sem --id diskio cat file | compute | sem --id diskio cat > file
3530
3531       If the jobs are of the form read-compute-write, so writing starts
3532       before all reading is done, it may be faster to force only one reader
3533       and writer at a time:
3534
3535         sem --id read cat file | compute | sem --id write cat > file
3536
3537       If the jobs are of the form read-compute-read-compute, it may be faster
3538       to run more jobs in parallel than the system has CPUs, as some of the
3539       jobs will be stuck waiting for disk access.
3540
3541   --nice limits command length
3542       The current implementation of --nice is too pessimistic in the max
3543       allowed command length. It only uses a little more than half of what it
3544       could. This affects -X and -m. If this becomes a real problem for you,
3545       file a bug-report.
3546
3547   Aliases and functions do not work
3548       If you get:
3549
3550         Can't exec "command": No such file or directory
3551
3552       or:
3553
3554         open3: exec of by command failed
3555
3556       or:
3557
3558         /bin/bash: command: command not found
3559
3560       it may be because command is not known, but it could also be because
3561       command is an alias or a function. If it is a function you need to
3562       export -f the function first or use env_parallel. An alias will only
3563       work if you use env_parallel.
3564
3565   Database with MySQL fails randomly
3566       The --sql* options may fail randomly with MySQL. This problem does not
3567       exist with PostgreSQL.
3568

REPORTING BUGS

3570       Report bugs to <parallel@gnu.org> or
3571       https://savannah.gnu.org/bugs/?func=additem&group=parallel
3572
3573       When you write your report, please keep in mind that you must give the
3574       reader enough information to be able to run exactly what you run. So
3575       you need to include all data and programs that you use to show the
3576       problem.
3577
3578       See a perfect bug report on
3579       https://lists.gnu.org/archive/html/bug-parallel/2015-01/msg00000.html
3580
3581       Your bug report should always include:
3582
3583       • The error message you get (if any). If the error message is not from
3584         GNU parallel you need to show why you think GNU parallel caused this.
3585
3586       • The complete output of parallel --version. If you are not running the
3587         latest released version (see https://ftp.gnu.org/gnu/parallel/) you
3588         should specify why you believe the problem is not fixed in that
3589         version.
3590
3591       • A minimal, complete, and verifiable example (See description on
3592         https://stackoverflow.com/help/mcve).
3593
3594         It should be a complete example that others can run which shows the
3595         problem including all files needed to run the example. This should
3596         preferably be small and simple, so try to remove as many options as
3597         possible.
3598
3599         A combination of yes, seq, cat, echo, wc, and sleep can reproduce
3600         most errors.
3601
3602         If your example requires large files, see if you can make them with
3603         something like seq 100000000 > bigfile or yes | head -n 1000000000 >
3604         file. If you need multiple columns: paste <(seq 1000) <(seq 1000
3605         1999)
3606
3607         If your example requires remote execution, see if you can use
3608         localhost - maybe using another login.
3609
3610         If you have access to a different system (maybe a VirtualBox on your
3611         own machine), test if your MCVE shows the problem on that system. If
3612         it does not, read below.
3613
3614       • The output of your example. If your problem is not easily reproduced
3615         by others, the output might help them figure out the problem.
3616
3617       • Whether you have watched the intro videos
3618         (https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1), walked
3619         through the tutorial (man parallel_tutorial), and read the examples
3620         (man parallel_examples).
3621
3622   Bug dependent on environment
3623       If you suspect the error is dependent on your environment or
3624       distribution, please see if you can reproduce the error on one of these
3625       VirtualBox images:
3626       https://sourceforge.net/projects/virtualboximage/files/
3627       https://www.osboxes.org/virtualbox-images/
3628
3629       Specifying the name of your distribution is not enough as you may have
3630       installed software that is not in the VirtualBox images.
3631
3632       If you cannot reproduce the error on any of the VirtualBox images
3633       above, see if you can build a VirtualBox image on which you can
3634       reproduce the error. If not, you should assume the debugging will be
3635       done through you. That will put more burden on you, and it is extra
3636       important that you give any information that helps. In general the
3637       problem will be fixed faster and with much less work for you if you can
3638       reproduce the error on a VirtualBox - even if you have to build a
3639       VirtualBox image.
3640
3641   In summary
3642       Your report must include:
3643
3644       • parallel --version
3645
3646       • output + error message
3647
3648       • full example including all files
3649
3650       • VirtualBox image, if you cannot reproduce it on other systems
3651

AUTHOR

3653       When using GNU parallel for a publication please cite:
3654
3655       O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
3656       The USENIX Magazine, February 2011:42-47.
3657
3658       This helps fund further development; and it won't cost you a cent.
3659       If you pay 10000 EUR you should feel free to use GNU Parallel without
3660       citing.
3661
3662       Copyright (C) 2007-10-18 Ole Tange, http://ole.tange.dk
3663
3664       Copyright (C) 2008-2010 Ole Tange, http://ole.tange.dk
3665
3666       Copyright (C) 2010-2022 Ole Tange, http://ole.tange.dk and Free
3667       Software Foundation, Inc.
3668
3669       Parts of the manual concerning xargs compatibility are inspired by the
3670       manual of xargs from GNU findutils 4.4.2.
3671

LICENSE

3673       This program is free software; you can redistribute it and/or modify it
3674       under the terms of the GNU General Public License as published by the
3675       Free Software Foundation; either version 3 of the License, or at your
3676       option any later version.
3677
3678       This program is distributed in the hope that it will be useful, but
3679       WITHOUT ANY WARRANTY; without even the implied warranty of
3680       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
3681       General Public License for more details.
3682
3683       You should have received a copy of the GNU General Public License along
3684       with this program.  If not, see <https://www.gnu.org/licenses/>.
3685
3686   Documentation license I
3687       Permission is granted to copy, distribute and/or modify this
3688       documentation under the terms of the GNU Free Documentation License,
3689       Version 1.3 or any later version published by the Free Software
3690       Foundation; with no Invariant Sections, with no Front-Cover Texts, and
3691       with no Back-Cover Texts.  A copy of the license is included in the
3692       file LICENSES/GFDL-1.3-or-later.txt.
3693
3694   Documentation license II
3695       You are free:
3696
3697       to Share to copy, distribute and transmit the work
3698
3699       to Remix to adapt the work
3700
3701       Under the following conditions:
3702
3703       Attribution
3704                You must attribute the work in the manner specified by the
3705                author or licensor (but not in any way that suggests that they
3706                endorse you or your use of the work).
3707
3708       Share Alike
3709                If you alter, transform, or build upon this work, you may
3710                distribute the resulting work only under the same, similar or
3711                a compatible license.
3712
3713       With the understanding that:
3714
3715       Waiver   Any of the above conditions can be waived if you get
3716                permission from the copyright holder.
3717
3718       Public Domain
3719                Where the work or any of its elements is in the public domain
3720                under applicable law, that status is in no way affected by the
3721                license.
3722
3723       Other Rights
3724                In no way are any of the following rights affected by the
3725                license:
3726
3727                • Your fair dealing or fair use rights, or other applicable
3728                  copyright exceptions and limitations;
3729
3730                • The author's moral rights;
3731
3732                • Rights other persons may have either in the work itself or
3733                  in how the work is used, such as publicity or privacy
3734                  rights.
3735
3736       Notice   For any reuse or distribution, you must make clear to others
3737                the license terms of this work.
3738
3739       A copy of the full license is included in the file
3740       LICENCES/CC-BY-SA-4.0.txt.
3741

DEPENDENCIES

3743       GNU parallel uses Perl, and the Perl modules Getopt::Long, IPC::Open3,
3744       Symbol, IO::File, POSIX, and File::Temp.
3745
3746       For --csv it uses the Perl module Text::CSV.
3747
3748       For remote usage it uses rsync with ssh.
3749

SEE ALSO

3751       parallel_tutorial(1), env_parallel(1), parset(1), parsort(1),
3752       parallel_alternatives(1), parallel_design(7), niceload(1), sql(1),
3753       ssh(1), ssh-agent(1), sshpass(1), ssh-copy-id(1), rsync(1)
3754
3755
3756
375720221122                          2022-11-22                       PARALLEL(1)