PARALLEL_DESIGN(7)                 parallel                 PARALLEL_DESIGN(7)



Design of GNU Parallel

       This document describes design decisions made in the development of
       GNU parallel and the reasoning behind them. It will give an overview
       of why some of the code looks the way it does, and will help new
       maintainers understand the code better.

   One file program
       GNU parallel is a Perl script in a single file. It is object
       oriented, but contrary to normal Perl scripts each class is not in
       its own file. This is due to user experience: The goal is that in a
       pinch the user will be able to get GNU parallel working simply by
       copying a single file: No need to mess around with environment
       variables like PERL5LIB.

   Old Perl style
       GNU parallel uses some old, deprecated constructs. This is due to a
       goal of being able to run on old installations. Currently the target
       is CentOS 3.9 and Perl 5.8.0.

   Scalability up and down
       The smallest system GNU parallel is tested on is a 32 MB ASUS
       WL500gP. The largest is a 2 TB 128-core machine. It scales up to
       around 100 machines - depending on the duration of each job.

   Exponentially back off
       GNU parallel busy waits. This is because the reason why a job is not
       started may be that the load average is too high (when using
       --load), and in that case it makes no sense to wait for a running
       job to finish. Instead the load average must simply be checked
       again. Load average is not the only reason: --timeout has a similar
       problem.

       To avoid burning too much CPU, GNU parallel sleeps exponentially
       longer and longer if nothing happens, maxing out at 1 second.

   Shell compatibility
       It is a goal to have GNU parallel work equally well in any shell.
       However, in practice GNU parallel is developed in bash, and thus
       testing in other shells is limited to reported bugs.

       When an incompatibility is found there is often no easy fix: Fixing
       the problem in csh often breaks it in bash. In these cases the fix
       is often to use a small Perl script and call that.

   env_parallel
       env_parallel is a dummy shell script that will run if env_parallel
       is not an alias or a function, and it tells the user how to activate
       the alias/function for the supported shells.

       The alias or function will copy the current environment and run the
       command with GNU parallel in the copy of the environment.

       The problem is that you cannot access all of the current environment
       inside Perl. E.g. aliases, functions and unexported shell variables.

       The idea is therefore to take the environment and put it in
       $PARALLEL_ENV, which GNU parallel prepends to every command.

       The only way to have access to the environment is directly from the
       shell, so the program must be written as a shell script that will be
       sourced, and it has to deal with the dialect of the relevant shell.

       env_parallel.*

       These are the files that implement the alias or function
       env_parallel for a given shell. It could be argued that these
       should be put in some obscure place under /usr/lib, but by putting
       them in your path it becomes trivial to find the path to them and
       source them:

         source `which env_parallel.foo`

       The beauty is that they can be put anywhere in the path without the
       user having to know the location. So if the user's path includes
       /afs/bin/i386_fc5 or /usr/pkg/parallel/bin or
       /usr/local/parallel/20161222/sunos5.6/bin the files can be put in
       the dir that makes most sense for the sysadmin.

       env_parallel.bash / env_parallel.zsh / env_parallel.ksh /
       env_parallel.pdksh

       env_parallel.(bash|ksh|pdksh|zsh) sets the function env_parallel. It
       uses alias and typeset to dump the configuration (with a few
       exceptions) into $PARALLEL_ENV before running GNU parallel.

       After GNU parallel is finished, $PARALLEL_ENV is deleted.

       env_parallel.csh

       env_parallel.csh has two purposes: If env_parallel is not an alias,
       it makes env_parallel into an alias that sets $PARALLEL with the
       arguments and calls env_parallel.csh.

       If env_parallel is an alias, then env_parallel.csh uses $PARALLEL as
       the arguments for GNU parallel.

       It exports the environment by writing a variable definition to a
       file for each variable. The definitions of aliases are appended to
       this file. Finally the file is put into $PARALLEL_ENV.

       GNU parallel is then run and $PARALLEL_ENV is deleted.

       env_parallel.fish

       First all function definitions are generated using a loop and
       functions.

       Dumping the scalar variable definitions is harder.

       fish can represent non-printable characters in (at least) 2 ways. To
       avoid problems all scalars are converted to \XX quoting.

       Then commands to generate the definitions are made and separated by
       NUL.

       This is then piped into a Perl script that quotes all values. List
       elements will be appended using two spaces.

       Finally \n is converted into \1 because fish variables cannot
       contain \n. GNU parallel will later convert all \1 from
       $PARALLEL_ENV into \n.

       This is then all saved in $PARALLEL_ENV.

       GNU parallel is called, and $PARALLEL_ENV is deleted.

   parset
       parset is a shell function. This is the reason why parset can set
       variables: It runs in the shell that is calling it.

       It is also the reason why parset does not work when data is piped
       into it: ... | parset ... makes parset start in a subshell, and any
       changes in the environment can therefore not make it back to the
       calling shell.
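
       A minimal illustration of the subshell problem (setval is a
       hypothetical stand-in for parset; the behavior shown is that of
       shells which run the last pipeline element in a subshell, such as
       bash with default settings):

```shell
# Hypothetical demo of why piping into a shell function cannot set
# variables in the calling shell: the function runs in a subshell.
val=unset
setval() { read -r val; }

printf 'hello\n' | setval          # setval runs in a subshell
piped=$val                         # still "unset"

setval <<'EOF'
hello
EOF
redirected=$val                    # "hello": redirection, no subshell

printf 'piped=%s redirected=%s\n' "$piped" "$redirected"
```

       This is exactly why ... | parset ... cannot work, while parset
       reading from a redirection can.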

   Job slots
       The easiest way to explain what GNU parallel does is to assume that
       there are a number of job slots, and when a slot becomes available a
       job from the queue will be run in that slot. But originally GNU
       parallel did not model job slots in the code. Job slots were added
       to make it possible to use {%} as a replacement string.

       While the job sequence number can be computed in advance, the job
       slot can only be computed the moment a slot becomes available. So it
       has been implemented as a stack with lazy evaluation: Draw one from
       an empty stack and the stack is extended by one. When a job is done,
       push the available job slot back on the stack.
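
       The lazy stack can be sketched in a few lines of shell (pop_slot
       and push_slot are illustrative names, not code from GNU parallel):

```shell
# Illustrative sketch of the lazy job slot stack: popping from an
# empty stack extends it by one; a finished job pushes its slot back.
slots=""        # space-separated stack of free slots
highest=0       # highest slot number handed out so far

pop_slot() {
  if [ -z "$slots" ]; then
    highest=$((highest + 1))
    slot=$highest                  # empty stack: extend by one
  else
    slot=${slots%% *}              # take the top of the stack
    slots=${slots#"$slot"}; slots=${slots# }
  fi
}
push_slot() { slots="$1 $slots"; } # a finished job returns its slot

pop_slot; a=$slot                  # job A gets slot 1
pop_slot; b=$slot                  # job B gets slot 2
push_slot "$a"                     # job A finishes
pop_slot; c=$slot                  # job C reuses slot 1
echo "A=$a B=$b C=$c"
```

       Job C reuses slot 1 rather than extending the stack to 3, which is
       the lazy-evaluation behavior described above.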

       This implementation also means that if you re-run the same jobs, you
       cannot assume jobs will get the same slots. And if you use remote
       executions, you cannot assume that a given job slot will remain on
       the same remote server. This goes double since the number of job
       slots can be adjusted on the fly (by giving --jobs a file name).

   Rsync protocol version
       rsync 3.1.x uses protocol 31, which is unsupported by version 2.5.7.
       That means that you cannot push a file to a remote system using
       rsync protocol 31 if the remote system uses 2.5.7. rsync does not
       automatically downgrade to protocol 30.

       GNU parallel does not require protocol 31, so if the rsync version
       is >= 3.1.0 then --protocol 30 is added to force newer rsyncs to
       talk to version 2.5.7.
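
       The version test itself is simple. A sketch in shell
       (needs_protocol30 is a hypothetical helper; GNU sort -V is assumed
       available for version comparison):

```shell
# Hypothetical version check: add --protocol 30 when the local
# rsync is >= 3.1.0, so it can still talk to an old rsync 2.5.7.
needs_protocol30() {
  # True if $1 sorts at or after 3.1.0 in version order
  [ "$(printf '%s\n3.1.0\n' "$1" | sort -V | head -n 1)" = "3.1.0" ]
}
new=$(needs_protocol30 3.1.2 && echo yes || echo no)
old=$(needs_protocol30 2.5.7 && echo yes || echo no)
printf 'rsync 3.1.2 needs flag: %s\nrsync 2.5.7 needs flag: %s\n' \
  "$new" "$old"
```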

   Compression
       GNU parallel buffers output in temporary files. --compress
       compresses the buffered data. This is a bit tricky because there
       should be no files to clean up if GNU parallel is killed by a power
       outage.

       GNU parallel first selects a compression program. If the user has
       not selected one, the first of these that is in $PATH is used: pzstd
       lbzip2 pbzip2 zstd pixz lz4 pigz lzop plzip lzip gzip lrz pxz bzip2
       lzma xz clzip. They are sorted by speed on a 128-core machine.

       Schematically the setup is as follows:

         command started by parallel | compress > tmpfile
         cattail tmpfile | uncompress | parallel which reads the output

       The setup is duplicated for both standard output (stdout) and
       standard error (stderr).

       GNU parallel pipes output from the command run into the compression
       program which saves to a tmpfile. GNU parallel records the pid of
       the compress program. At the same time a small perl script (called
       cattail above) is started: It basically does cat followed by tail
       -f, but it also removes the tmpfile as soon as the first byte is
       read, and it continuously checks if the pid of the compression
       program is dead. If the compress program is dead, cattail reads the
       rest of tmpfile and exits.

       As most compression programs write out a header when they start, the
       tmpfile in practice is removed by cattail after around 40 ms.
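
       The crash-safety relies on the classic open-but-deleted file trick,
       which can be demonstrated in isolation:

```shell
# Demonstrates the open-but-deleted tmpfile trick used by cattail:
# once a reader holds an open fd, the name can be removed, data
# still flows, and a crash leaves no file behind.
tmp=$(mktemp)
printf 'buffered output\n' > "$tmp"
exec 3< "$tmp"       # reader opens the file
rm "$tmp"            # remove the name; the data lives on via fd 3
out=$(cat <&3)
exec 3<&-
echo "$out"
```

       After the rm the kernel keeps the data alive only as long as some
       process holds the file descriptor, so nothing needs cleaning up on
       a crash.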

   Wrapping
       The command given by the user can be wrapped in multiple templates.
       Templates can be wrapped in other templates.

       $COMMAND       the command to run.

       $INPUT         the input to run.

       $SHELL         the shell that started GNU Parallel.

       $SSHLOGIN      the sshlogin.

       $WORKDIR       the working dir.

       $FILE          the file to read parts from.

       $STARTPOS      the first byte position to read from $FILE.

       $LENGTH        the number of bytes to read from $FILE.

       --shellquote   echo Double quoted $INPUT

       --nice pri     Remote: See The remote system wrapper.

                      Local: setpriority(0,0,$nice)

       --cat
                        cat > {}; $COMMAND {};
                        perl -e '$bash = shift;
                          $csh = shift;
                          for(@ARGV) { unlink;rmdir; }
                          if($bash =~ s/h//) { exit $bash;  }
                          exit $csh;' "$?h" "$status" {};

                      {} is set to $PARALLEL_TMP which is a tmpfile. The
                      Perl script saves the exit value, unlinks the
                      tmpfile, and returns the exit value - no matter if
                      the shell is bash/ksh/zsh (using $?) or *csh/fish
                      (using $status).

       --fifo
                        perl -e '($s,$c,$f) = @ARGV;
                          # mkfifo $PARALLEL_TMP
                          system "mkfifo", $f;
                          # spawn $shell -c $command &
                          $pid = fork || exec $s, "-c", $c;
                          open($o,">",$f) || die $!;
                          # cat > $PARALLEL_TMP
                          while(sysread(STDIN,$buf,131072)){
                             syswrite $o, $buf;
                          }
                          close $o;
                          # waitpid to get the exit code from $command
                          waitpid $pid,0;
                          # Cleanup
                          unlink $f;
                          exit $?/256;' $SHELL -c $COMMAND $PARALLEL_TMP

                      This is an elaborate way of: making a fifo with
                      mkfifo {}; running $COMMAND in the background using
                      $SHELL; copying STDIN to {}; waiting for the
                      background job to complete; removing {} and exiting
                      with the exit code from $COMMAND.

                      It is made this way to be compatible with *csh/fish.

       --pipepart
                        < $FILE perl -e 'while(@ARGV) {
                            sysseek(STDIN,shift,0) || die;
                            $left = shift;
                            while($read = sysread(STDIN,$buf, ($left > 131072 ? 131072 : $left))){
                              $left -= $read;
                              syswrite(STDOUT,$buf);
                            }
                          }' $STARTPOS $LENGTH

                      This will read $LENGTH bytes from $FILE starting at
                      $STARTPOS and send them to STDOUT.

       --sshlogin $SSHLOGIN
                        ssh $SSHLOGIN "$COMMAND"

       --transfer
                        ssh $SSHLOGIN mkdir -p ./$WORKDIR;
                        rsync --protocol 30 -rlDzR -essh ./{} $SSHLOGIN:./$WORKDIR;
                        ssh $SSHLOGIN "$COMMAND"

                      Read about --protocol 30 in the section Rsync
                      protocol version.

       --transferfile file
                      <<todo>>

       --basefile     <<todo>>

       --return file
                        $COMMAND; _EXIT_status=$?; mkdir -p $WORKDIR;
                        rsync --protocol 30 --rsync-path=cd\ ./$WORKDIR\;\ rsync \
                          -rlDzR -essh $SSHLOGIN:./$FILE ./$WORKDIR; exit $_EXIT_status;

                      The --rsync-path=cd ... is needed because old
                      versions of rsync do not support --no-implied-dirs.

                      The $_EXIT_status trick is to postpone the exit
                      value. This makes it incompatible with *csh and
                      should be fixed in the future. Maybe a wrapping 'sh
                      -c' is enough?

       --cleanup      $RETURN is the wrapper from --return

                        $COMMAND; _EXIT_status=$?; $RETURN;
                        ssh $SSHLOGIN \(rm\ -f\ ./$WORKDIR/{}\;\ rmdir\ ./$WORKDIR\ \>\&/dev/null\;\);
                        exit $_EXIT_status;

                      $_EXIT_status: see --return above.

       --pipe
                        perl -e 'if(sysread(STDIN, $buf, 1)) {
                              open($fh, "|-", "@ARGV") || die;
                              syswrite($fh, $buf);
                              # Align up to 128k block
                              if($read = sysread(STDIN, $buf, 131071)) {
                                  syswrite($fh, $buf);
                              }
                              while($read = sysread(STDIN, $buf, 131072)) {
                                  syswrite($fh, $buf);
                              }
                              close $fh;
                              exit ($?&127 ? 128+($?&127) : 1+$?>>8)
                          }' $SHELL -c $COMMAND

                      This small wrapper makes sure that $COMMAND will
                      never be run if there is no data.

       --tmux         <<TODO Fixup>> mkfifo /tmp/tmx3cMEV &&
                        sh -c 'tmux -S /tmp/tmsaKpv1 new-session -s p334310 -d
                      "sleep .2" >/dev/null 2>&1'; tmux -S /tmp/tmsaKpv1 new-
                      window -t p334310 -n wc\ 10 \(wc\ 10\)\;\ perl\ -e\
                      \'while\(\$t++\<3\)\{\ print\ \$ARGV\[0\],\"\\n\"\ \}\'\
                      \$\?h/\$status\ \>\>\ /tmp/tmx3cMEV\&echo\ wc\\\ 10\;\
                      echo\ \Job\ finished\ at:\ \`date\`\;sleep\ 10; exec
                      perl -e '$/="/";$_=<>;$c=<>;unlink $ARGV; /(\d+)h/ and
                      exit($1);exit$c' /tmp/tmx3cMEV

                      mkfifo tmpfile.tmx; tmux -S <tmpfile.tms> new-session
                      -s pPID -d 'sleep .2' >&/dev/null; tmux -S
                      <tmpfile.tms> new-window -t pPID -n <<shell quoted
                      input>> \(<<shell quoted input>>\)\;\ perl\ -e\
                      \'while\(\$t++\<3\)\{\ print\ \$ARGV\[0\],\"\\n\"\
                      \}\'\ \$\?h/\$status\ \>\>\ tmpfile.tmx\&echo\
                      <<shell double quoted input>>\;echo\ \Job\ finished\
                      at:\ \`date\`\;sleep\ 10; exec perl -e
                      '$/="/";$_=<>;$c=<>;unlink $ARGV; /(\d+)h/ and
                      exit($1);exit$c' tmpfile.tmx

                      First a FIFO is made (.tmx). It is used for
                      communicating the exit value. Next a new tmux session
                      is made. This may fail if there is already a session,
                      so the output is ignored. If all job slots finish at
                      the same time, then tmux will close the session. A
                      temporary socket is made (.tms) to avoid a race
                      condition in tmux. It is cleaned up when GNU parallel
                      finishes.

                      The input is used as the name of the window in tmux.
                      When the job inside tmux finishes, the exit value is
                      printed to the FIFO (.tmx). This FIFO is opened by
                      perl outside tmux, and perl then removes the FIFO.
                      Perl blocks until the first value is read from the
                      FIFO, and this value is used as the exit value.

                      To make it compatible with csh and bash the exit
                      value is printed as: $?h/$status and this is parsed
                      by perl.

                      There is a bug that makes it necessary to print the
                      exit value 3 times.

                      Another bug in tmux requires the length of the tmux
                      title and command to avoid certain ranges. When
                      inside these ranges, 75 '\ ' are added to the title
                      to force it outside the ranges.

                      You can map the bad ranges using:

                        perl -e 'sub r { int(rand(shift)).($_[0] && "\t".r(@_)) } print map { r(@ARGV)."\n" } 1..10000' 1600 1500 90 |
                          perl -ane '$F[0]+$F[1]+$F[2] < 2037 and print ' |
                          parallel --colsep '\t' --tagstring '{1}\t{2}\t{3}' tmux -S /tmp/p{%}-'{=3 $_="O"x$_ =}' \
                            new-session -d -n '{=1 $_="O"x$_ =}' true'\ {=2 $_="O"x$_ =};echo $?;rm -f /tmp/p{%}-O*'

                        perl -e 'sub r { int(rand(shift)).($_[0] && "\t".r(@_)) } print map { r(@ARGV)."\n" } 1..10000' 17000 17000 90 |
                          parallel --colsep '\t' --tagstring '{1}\t{2}\t{3}' \
                        tmux -S /tmp/p{%}-'{=3 $_="O"x$_ =}' new-session -d -n '{=1 $_="O"x$_ =}' true'\ {=2 $_="O"x$_ =};echo $?;rm /tmp/p{%}-O*' \
                        > value.csv 2>/dev/null

                        R -e 'a<-read.table("value.csv");X11();plot(a[,1],a[,2],col=a[,4]+5,cex=0.1);Sys.sleep(1000)'

                      For tmux 1.8, 17000 can be lowered to 2100.

                      The interesting areas are title 0..1000 with (title +
                      whole command) in 996..1127 and 9331..9636.

       The ordering of the wrapping is important:

       ·    $PARALLEL_ENV which is set in env_parallel.* must be prepended
            to the command first, as the command may contain exported
            variables or functions.

       ·    --nice/--cat/--fifo should be done on the remote machine.

       ·    --pipepart/--pipe should be done on the local machine inside
            --tmux.

   Convenience options --nice --basefile --transfer --return --cleanup
   --tmux --group --compress --cat --fifo --workdir
       These are all convenience options that make it easier to do a task.
       But more importantly: They are tested to work on corner cases, too.
       Take --nice as an example:

         nice parallel command ...

       will work just fine. But when run remotely, you need to move the
       nice command so it is being run on the server:

         parallel -S server nice command ...

       And this will again work just fine, as long as you are running a
       single command. When you are running a composed command you need
       nice to apply to the whole command, and it gets harder still:

         parallel -S server -q nice bash -c 'command1 ...; command2 | command3'

       It is not impossible, but by using --nice GNU parallel will do the
       right thing for you. Similarly when transferring files: It starts to
       get hard when the file names contain space, :, `, *, or other
       special characters.

       To run the commands in a tmux session you basically just need to
       quote the command. For simple commands that is easy, but when
       commands contain special characters, it gets much harder to get
       right.

       --compress not only compresses standard output (stdout) but also
       standard error (stderr); and it does so into files that are open but
       deleted, so a crash will not leave these files around.

       --cat and --fifo are easy to do by hand, until you want to clean up
       the tmpfile and keep the exit code of the command.

       The real killer comes when you try to combine several of these:
       Doing that correctly for all corner cases is next to impossible to
       do by hand.

   Shell shock
       The shell shock bug in bash did not affect GNU parallel, but the
       solutions did. bash first introduced functions in variables named:
       BASH_FUNC_myfunc() and later changed that to BASH_FUNC_myfunc%%.
       When transferring functions GNU parallel reads off the function and
       changes that into a function definition, which is copied to the
       remote system and executed before the actual command is executed.
       Therefore GNU parallel needs to know how to read the function.

       From version 20150122 GNU parallel tries both the ()-version and the
       %%-version, and the function definition works on both pre- and
       post-shellshock versions of bash.
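
       How bash stores exported functions can be inspected directly; on a
       patched (post-shellshock) bash the variable is named
       BASH_FUNC_myfunc%% (this assumes a reasonably recent bash is
       installed):

```shell
# Count environment variables whose name starts with BASH_FUNC_myfunc.
# On a patched bash the exported function shows up as
# BASH_FUNC_myfunc%%=() { ... }; exactly one variable appears.
n=$(bash -c 'myfunc() { echo hi; }
             export -f myfunc
             env | grep -c "^BASH_FUNC_myfunc"')
echo "$n"
```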

   The remote system wrapper
       The remote system wrapper does some initialization before starting
       the command on the remote system.

       Ctrl-C and standard error (stderr)

       If the user presses Ctrl-C the user expects jobs to stop. This works
       out of the box if the jobs are run locally. Unfortunately it is not
       so simple if the jobs are run remotely.

       If remote jobs are run in a tty using ssh -tt, then Ctrl-C works,
       but all output to standard error (stderr) is sent to standard output
       (stdout). This is not what the user expects.

       If remote jobs are run without a tty using ssh (without -tt), then
       output to standard error (stderr) is kept on stderr, but Ctrl-C does
       not kill remote jobs. This is not what the user expects.

       So what is needed is a way to have both. It seems the reason why
       Ctrl-C does not kill the remote jobs is that the shell does not
       propagate the hang-up signal from sshd. But when sshd dies, the
       parent of the login shell becomes init (process id 1). So by
       exec'ing a Perl wrapper to monitor the parent pid and kill the
       child if the parent pid becomes 1, Ctrl-C works and stderr is kept
       on stderr.

       To be able to kill all (grand)*children a new process group is
       started.

       --nice

       niceing the remote process is done by setpriority(0,0,$nice). A few
       old systems do not implement this and --nice is unsupported on
       those.

       Setting $PARALLEL_TMP

       $PARALLEL_TMP is used by --fifo and --cat and must point to a
       non-existent file in $TMPDIR. This file name is computed on the
       remote system.

       The wrapper

       The wrapper looks like this:

         $shell = $PARALLEL_SHELL || $SHELL;
         $tmpdir = $TMPDIR;
         $nice = $opt::nice;
         # Set $PARALLEL_TMP to a non-existent file name in $TMPDIR
         do {
             $ENV{PARALLEL_TMP} = $tmpdir."/par".
               join"", map { (0..9,"a".."z","A".."Z")[rand(62)] } (1..5);
         } while(-e $ENV{PARALLEL_TMP});
         $SIG{CHLD} = sub { $done = 1; };
         $pid = fork;
         unless($pid) {
             # Make own process group to be able to kill HUP it later
             setpgrp;
             eval { setpriority(0,0,$nice) };
             exec $shell, "-c", ($bashfunc."@ARGV");
             die "exec: $!\n";
         }
         do {
             # Parent is not init (ppid=1), so sshd is alive
             # Exponential sleep up to 1 sec
             $s = $s < 1 ? 0.001 + $s * 1.03 : $s;
             select(undef, undef, undef, $s);
         } until ($done || getppid == 1);
         # Kill HUP the process group if job not done
         kill(SIGHUP, -${pid}) unless $done;
         wait;
         exit ($?&127 ? 128+($?&127) : 1+$?>>8)

   Transferring of variables and functions
       Transferring of variables and functions given by --env is done by
       running a Perl script remotely that calls the actual command. The
       Perl script sets $ENV{variable} to the correct value before exec'ing
       a shell that runs the function definition followed by the actual
       command.

       The function env_parallel copies the full current environment into
       the environment variable PARALLEL_ENV. This variable is picked up by
       GNU parallel and used to create the Perl script mentioned above.

   Base64 encoded bzip2
       csh limits words of commands to 1024 chars. This is often too little
       when GNU parallel encodes environment variables and wraps the
       command with different templates. All of these are combined and
       quoted into one single word, which often is longer than 1024 chars.

       When the line to run is > 1000 chars, GNU parallel therefore encodes
       the line to run. The encoding bzip2s the line to run, converts this
       to base64, splits the base64 into 1000-char blocks (so csh does not
       fail), and prepends it with this Perl script that decodes,
       decompresses and evals the line.

           @GNU_Parallel=("use","IPC::Open3;","use","MIME::Base64");
           eval "@GNU_Parallel";

           $SIG{CHLD}="IGNORE";
           # Search for bzip2. Not found => use default path
           my $zip = (grep { -x $_ } "/usr/local/bin/bzip2")[0] || "bzip2";
           # $in = stdin on $zip, $out = stdout from $zip
           my($in, $out,$eval);
           open3($in,$out,">&STDERR",$zip,"-dc");
           if(my $perlpid = fork) {
               close $in;
               $eval = join "", <$out>;
               close $out;
           } else {
               close $out;
               # Pipe decoded base64 into 'bzip2 -dc'
               print $in (decode_base64(join"",@ARGV));
               close $in;
               exit;
           }
           wait;
           eval $eval;

       Perl and bzip2 must be installed on the remote system, but a small
       test showed that bzip2 is installed by default on all platforms that
       run GNU parallel, so this is not a big problem.

       The added bonus of this is that much bigger environments can now be
       transferred, as they will be below bash's limit of 131072 chars.
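
       The round trip can be reproduced with the command line tools (the
       1000-char splitting is skipped here; bzip2 and GNU base64 are
       assumed installed):

```shell
# Round trip of the encoding: bzip2 the line, base64 it, then decode
# and decompress to recover the original line.
line='echo hello from a rather long command line'
enc=$(printf '%s' "$line" | bzip2 | base64 | tr -d '\n')
dec=$(printf '%s' "$enc" | base64 -d | bzip2 -dc)
echo "$dec"
```

       For short lines the bzip2 header makes the encoded form longer than
       the input; the encoding only pays off for the long, heavily quoted
       command lines described above.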

   Which shell to use
       Different shells behave differently. A command that works in tcsh
       may not work in bash. It is therefore important that the correct
       shell is used when GNU parallel executes commands.

       GNU parallel tries hard to use the right shell. If GNU parallel is
       called from tcsh it will use tcsh. If it is called from bash it will
       use bash. It does this by looking at the (grand)*parent process: If
       the (grand)*parent process is a shell, use this shell; otherwise
       look at the parent of this (grand)*parent. If none of the
       (grand)*parents are shells, then $SHELL is used.

       This will do the right thing if called from:

       · an interactive shell

       · a shell script

       · a Perl script in `` or using system if called as a single string.

       While these cover most cases, there are situations where it will
       fail:

       · When run using exec.

       · When run as the last command using -c from another shell (because
         some shells use exec):

           zsh% bash -c "parallel 'echo {} is not run in bash; \
                set | grep BASH_VERSION' ::: This"

         You can work around that by appending '&& true':

           zsh% bash -c "parallel 'echo {} is run in bash; \
                set | grep BASH_VERSION' ::: This && true"

       · When run in a Perl script using system with parallel as the first
         string:

           #!/usr/bin/perl

           system("parallel",'setenv a {}; echo $a',":::",2);

         Here it depends on which shell is used to call the Perl script. If
         the Perl script is called from tcsh it will work just fine, but if
         it is called from bash it will fail, because the command setenv is
         not known to bash.

       If GNU parallel guesses wrong in these situations, set the shell
       using $PARALLEL_SHELL.


   Quoting
       Quoting depends on the shell. For most shells \ is used for all
       special chars and ' is used for newline. Whether a char is special
       depends on the shell and the context. Luckily quoting a bit too many
       chars does not break things.

       It is fast, but has the distinct disadvantage that if a string needs
       to be quoted multiple times, the \'s double every time - increasing
       the string length exponentially.

       For tcsh/csh newline is quoted as \ followed by newline.

       For rc everything is quoted using '.
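
       The exponential growth from repeated backslash quoting is easy to
       see; for the demonstration the sed expression quotes just space and
       backslash:

```shell
# Quote a string twice with backslash quoting: every round adds a
# backslash before each special char, so backslashes pile up fast.
s='a b'
s=$(printf '%s' "$s" | sed 's/[ \\]/\\&/g'); once=$s
s=$(printf '%s' "$s" | sed 's/[ \\]/\\&/g'); twice=$s
printf 'once:  %s\ntwice: %s\n' "$once" "$twice"
```

       One round yields a\ b; a second round quotes both the backslash and
       the space, giving a\\\ b, so each round roughly doubles the number
       of backslashes.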
638
639   --pipepart vs. --pipe
640       While --pipe and --pipepart look much the same to the user, they are
641       implemented very differently.
642
643       With --pipe GNU parallel reads the blocks from standard input (stdin),
644       which is then given to the command on standard input (stdin); so every
645       block is being processed by GNU parallel itself. This is the reason why
646       --pipe maxes out at around 500 MB/sec.
647
648       --pipepart, on the other hand, first identifies at which byte positions
649       blocks start and how long they are. It does that by seeking into the
650       file by the size of a block and then reading until it meets end of a
651       block. The seeking explains why GNU parallel does not know the line
652       number and why -L/-l and -N do not work.
653
       With a reasonable block and file size this seeking is more than 1000
       times faster than reading the full file. The byte positions are then
       given to a small script that reads from position X to Y and sends
       output to standard output (stdout). This small script is prepended to
       the command and the full command is executed just as if GNU parallel
       had been in its normal mode. The script looks like this:

         < file perl -e 'while(@ARGV) {
            sysseek(STDIN,shift,0) || die;
            $left = shift;
            while($read = sysread(STDIN,$buf, ($left > 32768 ? 32768 : $left))){
              $left -= $read; syswrite(STDOUT,$buf);
            }
         }' startbyte length_in_bytes

       It delivers 1 GB/s per core.

       Instead of the script, dd was tried, but many versions of dd do not
       support reading from one byte position to another and might deliver
       partial data. See this for a surprising example:

         yes | dd bs=1024k count=10 | wc

   --block-size adjustment
       Every time GNU parallel detects a record bigger than --block-size it
       increases the block size by 30%. A small --block-size gives very poor
       performance; by exponentially increasing the block size performance
       will not suffer.

       GNU parallel will waste CPU power if --block-size does not contain a
       full record, because it tries to find a full record and will fail to
       do so. The recommendation is therefore to use a --block-size > 2
       records, so you always get at least one full record when you read
       one block.

       If you use -N then --block-size should be big enough to contain N+1
       records.

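       The effect of the 30% rule can be sketched like this (the numbers
       are illustrative):

```python
# Grow the block size by 30% until it can hold the oversized record.
blocksize = 1_000_000          # initial --block-size (illustrative)
record_len = 10_000_000        # a record 10x bigger than the block
steps = 0
while blocksize < record_len:
    blocksize = int(blocksize * 1.3)
    steps += 1
print(steps, blocksize)   # only 9 steps to fit a 10x larger record
```

       Because the growth is exponential, even a badly chosen initial
       --block-size is corrected after a handful of adjustments.
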
   Automatic --block-size computation
       With --pipepart GNU parallel can compute the --block-size
       automatically. A --block-size of -1 will use a block size so that
       each jobslot will receive approximately 1 block. --block -2 will
       pass 2 blocks to each jobslot and -n will pass n blocks to each
       jobslot.

       This can be done because --pipepart reads from files, and we can
       compute the total size of the input.

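       A sketch of the computation (the function name and the rounding are
       illustrative, not GNU parallel's actual code):

```python
def auto_block_size(total_bytes, jobslots, n):
    # --block-size -n: each jobslot should receive about n blocks,
    # so divide the total input size by jobslots * n.
    return total_bytes // (jobslots * n)

# 10 GB of input, 8 jobslots, 2 blocks per jobslot:
print(auto_block_size(10_000_000_000, 8, 2))   # 625000000
```
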
   --jobs and --onall
       When running the same commands on many servers what should --jobs
       signify? Is it the number of servers to run on in parallel? Is it
       the number of jobs run in parallel on each server?

       GNU parallel lets --jobs represent the number of servers to run on
       in parallel. This is to make it possible to run a sequence of
       commands (that cannot be parallelized) on each server, but run the
       same sequence on multiple servers.

   --shuf
       When using --shuf to shuffle the jobs, all jobs are read, then they
       are shuffled, and finally executed. When using SQL this makes the
       --sqlmaster the part that shuffles the jobs. The --sqlworkers simply
       execute the jobs in Seq order.

   Buffering on disk
       GNU parallel buffers output, because if output is not buffered you
       have to be ridiculously careful about sizes to avoid mixing of
       outputs (see the excellent example on
       https://catern.com/posts/pipes.html).

       GNU parallel buffers on disk in $TMPDIR using files that are removed
       as soon as they are created, but which are kept open. So even if GNU
       parallel is killed by a power outage, there will be no files to
       clean up afterwards. Another advantage is that the file system is
       aware that these files will be lost in case of a crash, so it does
       not need to sync them to disk.

       It gives the odd situation that a disk can be fully used, but there
       are no visible files on it.

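       The create-remove-keep-open trick can be sketched like this (a
       minimal Python illustration of the same POSIX behaviour; GNU
       parallel does this from Perl):

```python
import os, tempfile

# Create a buffer file, remove it immediately, keep the handle open.
fd, path = tempfile.mkstemp()
os.unlink(path)                  # no visible file from this point on
buf = os.fdopen(fd, "w+b")
buf.write(b"job output")         # still occupies disk space while open
buf.seek(0)
print(buf.read())                # the data is still readable
buf.close()                      # space is reclaimed automatically here
```

       The kernel only frees the blocks when the last open handle is
       closed, so the buffer works normally while leaving nothing to clean
       up after a crash.
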
       Partly buffering in memory

       When using the output formats SQL and CSV, GNU parallel has to read
       the whole output of a job into memory. When run normally it will
       only read the output from a single job. But when using --linebuffer
       every line printed will also be buffered in memory - for all jobs
       currently running.

       If memory is tight, then do not use the output format SQL/CSV with
       --linebuffer.

       Comparing to buffering in memory

       gargs is a parallelizing tool that buffers in memory. It is
       therefore a useful way of comparing the advantages and disadvantages
       of buffering in memory to buffering on disk.

       On a system with 6 GB RAM free and 6 GB free swap these were tested
       with different sizes:

         echo /dev/zero | gargs "head -c $size {}" >/dev/null
         echo /dev/zero | parallel "head -c $size {}" >/dev/null

       The results are here:

         JobRuntime      Command
              0.344      parallel_test 1M
              0.362      parallel_test 10M
              0.640      parallel_test 100M
              9.818      parallel_test 1000M
             23.888      parallel_test 2000M
             30.217      parallel_test 2500M
             30.963      parallel_test 2750M
             34.648      parallel_test 3000M
             43.302      parallel_test 4000M
             55.167      parallel_test 5000M
             67.493      parallel_test 6000M
            178.654      parallel_test 7000M
            204.138      parallel_test 8000M
            230.052      parallel_test 9000M
            255.639      parallel_test 10000M
            757.981      parallel_test 30000M
              0.537      gargs_test 1M
              0.292      gargs_test 10M
              0.398      gargs_test 100M
              3.456      gargs_test 1000M
              8.577      gargs_test 2000M
             22.705      gargs_test 2500M
            123.076      gargs_test 2750M
             89.866      gargs_test 3000M
            291.798      gargs_test 4000M

       GNU parallel is pretty much limited by the speed of the disk: Up to
       6 GB data is written to disk but cached, so reading is fast. Above 6
       GB data is both written to and read from disk. When the 30000M job
       is running, the disk system is slow, but usable: If you are not
       using the disk, you hardly notice it.

       gargs has a speed advantage up until 2500M, where it hits a wall.
       Then the system starts swapping like crazy and is completely
       unusable. At 5000M it runs out of memory.

       You can make GNU parallel behave similarly to gargs if you point
       $TMPDIR to a tmpfs-filesystem: It will be faster for small outputs,
       but may kill your system for larger outputs and cause you to lose
       output.

   Disk full
       GNU parallel buffers on disk. If the disk is full, data may be lost.
       To check if the disk is full GNU parallel writes an 8193-byte file
       every second. If this file is written successfully, it is removed
       immediately. If it is not written successfully, the disk is full.
       The size 8193 was chosen because 8192 gave wrong results on some
       file systems, whereas 8193 did the correct thing on all tested
       filesystems.

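       The probe can be sketched like this (the probe file name is
       illustrative; GNU parallel's actual check is in Perl):

```python
import os, tempfile

def disk_full(tmpdir=tempfile.gettempdir()):
    # Try to write 8193 bytes; a failed write means the disk is full.
    probe = os.path.join(tmpdir, "disk-probe-%d" % os.getpid())
    try:
        with open(probe, "wb") as f:
            f.write(b"\0" * 8193)
            f.flush()
        return False
    except OSError:
        return True
    finally:
        try:
            os.unlink(probe)      # remove the probe immediately
        except OSError:
            pass

print(disk_full())   # False unless the filesystem is actually full
```
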
   Memory usage
       Normally GNU parallel will use around 17 MB RAM constantly - no
       matter how many jobs or how much output there is. There are a few
       things that cause the memory usage to rise:

       ·  Multiple input sources. GNU parallel reads an input source only
          once. This is by design, as an input source can be a stream (e.g.
          FIFO, pipe, standard input (stdin)) which cannot be rewound and
          read again. When reading a single input source, the memory is
          freed as soon as the job is done - thus keeping the memory usage
          constant.

          But when reading multiple input sources GNU parallel keeps the
          already read values for generating all combinations with other
          input sources.

       ·  Computing the number of jobs. --bar, --eta, and --halt xx% use
          total_jobs() to compute the total number of jobs. It does this by
          generating the data structures for all jobs. All these job data
          structures will be stored in memory and take up around 400
          bytes/job.

       ·  Buffering a full line. --linebuffer will read a full line per
          running job. A very long output line (say 1 GB without \n) will
          increase RAM usage temporarily: From when the beginning of the
          line is read till the line is printed.

       ·  Buffering the full output of a single job. This happens when
          using --results *.csv/*.tsv or --sql*. Here GNU parallel will
          read the whole output of a single job and save it as csv/tsv or
          SQL.

   Perl replacement strings, {= =}, and --rpl
       The shorthands for replacement strings make a command look more
       cryptic. Different users will need different replacement strings.
       Instead of inventing more shorthands you get more flexible
       replacement strings if they can be programmed by the user.

       The language Perl was chosen because GNU parallel is written in Perl
       and it was easy and reasonably fast to run the code given by the
       user.

       If a user needs the same programmed replacement string again and
       again, the user may want to make his own shorthand for it. This is
       what --rpl is for. It works so well, that even GNU parallel's own
       shorthands are implemented using --rpl.

       In Perl code the bigrams {= and =} rarely exist. They look like a
       matching pair and can be entered on all keyboards. This made them
       good candidates for enclosing the Perl expression in the replacement
       strings. The candidate ,, was rejected because two commas do not
       look like a matching pair. --parens was made, so that users can
       still use ,, and ,, if they like: --parens ,,,,

       Internally, however, the {= and =} are replaced by \257< and \257>.
       This is to make the regular expressions simpler: You only need to
       look one character ahead, and never have to look behind.

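       The internal rewriting can be sketched like this (a simplified
       illustration of why a one-character delimiter helps; not GNU
       parallel's actual parser):

```python
import re

# {= =} is rewritten to \257< \257> before parsing, so the regular
# expression only needs single-character delimiters.
cmd = "echo {= s/\\.gz$// =}"
internal = cmd.replace("{=", "\257<").replace("=}", "\257>")
expr = re.search("\257<(.*?)\257>", internal).group(1)
print(expr.strip())   # the embedded Perl expression: s/\.gz$//
```

       Because \257 is a single byte that rarely occurs in commands, the
       parser never needs lookbehind to find the delimiters.
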
   Test suite
       GNU parallel uses its own testing framework. This is mostly due to
       historical reasons. It deals reasonably well with tests that are
       dependent on how long a given test runs (e.g. more than 10 secs is a
       pass, but less is a fail). It parallelizes most tests, but it is
       easy to force a test to run as the single test (which may be
       important for timing issues). It deals reasonably well with tests
       that fail intermittently. It detects which tests failed and pushes
       these to the top, so when running the test suite again, the tests
       that failed most recently are run first.

       If GNU parallel should adopt a real testing framework then those
       elements would be important.

       Since many tests are dependent on the hardware they run on, these
       tests break when run on different hardware than what they were
       written for.

       When most bugs are fixed a test is added, so the bug will not
       reappear. It is, however, sometimes hard to create the environment
       in which the bug shows up - especially if the bug only shows up
       sometimes. One of the harder problems was to make a machine start
       swapping without forcing it to its knees.

   Median run time
       Using a percentage for --timeout causes GNU parallel to compute the
       median run time of a job. The median is a better indicator of the
       expected run time than the average, because there will often be
       outliers taking way longer than the normal run time.

       To avoid keeping all run times in memory, an implementation of the
       remedian was made (Rousseeuw et al.).

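       The remedian keeps only a few small buffers of medians-of-medians,
       giving an approximate median in bounded memory. A minimal sketch
       (the base and weighting follow the published algorithm; this is not
       GNU parallel's Perl code):

```python
import statistics

class Remedian:
    """Approximate running median in bounded memory (the remedian of
    Rousseeuw & Bassett): medians of medians in small buffers."""

    def __init__(self, base=11):
        self.base = base
        self.buffers = [[]]

    def add(self, x, level=0):
        if level == len(self.buffers):
            self.buffers.append([])
        buf = self.buffers[level]
        buf.append(x)
        if len(buf) == self.base:
            # Buffer full: push its median one level up, start over.
            m = statistics.median(buf)
            del buf[:]
            self.add(m, level + 1)

    def median(self):
        # Weight each value by how many originals it summarizes.
        weighted = []
        for level, buf in enumerate(self.buffers):
            weighted += [v for v in buf for _ in range(self.base ** level)]
        return statistics.median(weighted)

r = Remedian()
for i in range(1, 1001):
    r.add(i)
print(r.median())   # approximates the exact median (500.5)
```

       Memory use stays at a handful of 11-element buffers no matter how
       many run times are recorded.
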
   Error messages and warnings
       Error messages like: ERROR, Not found, and 42 are not very helpful.
       GNU parallel strives to inform the user:

       · What went wrong?

       · Why did it go wrong?

       · What can be done about it?

       Unfortunately it is not always possible to predict the root cause of
       the error.

   Computation of load
       Contrary to the obvious, --load does not use the load average. This
       is due to the load average rising too slowly. Instead it uses ps to
       list the number of threads in running or blocked state (state D, O
       or R). This gives an instant load.

       As remote calculation of load can be slow, a process is spawned to
       run ps and put the result in a file, which is then used next time.

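       The idea can be sketched like this (the exact ps invocation and the
       states counted vary per platform; this simplified illustration
       counts processes rather than threads and is not GNU parallel's
       code):

```python
import subprocess

def instant_load():
    # List the state of every process and count those that are
    # running (R), blocked (D) or on a run queue (O, Solaris).
    out = subprocess.run(["ps", "-eo", "state="],
                         capture_output=True, text=True).stdout
    return sum(line.strip().startswith(("D", "O", "R"))
               for line in out.splitlines())

print(instant_load())
```

       Unlike the load average, this count reacts immediately when jobs
       start or stop.
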
   Killing jobs
       GNU parallel kills jobs. This can be due to --memfree, --halt, or
       when GNU parallel meets a condition from which it cannot recover.
       Every job is started as its own process group. This way any
       (grand)*children will get killed, too. The process group is killed
       using the sequence specified by --termseq.

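       Starting each job in its own process group and signalling the group
       can be sketched like this (a Python illustration of the POSIX
       mechanism; GNU parallel does this from Perl and escalates the
       signals according to --termseq):

```python
import os, signal, subprocess, time

# Start a shell with children in a new process group (= new session).
p = subprocess.Popen(["sh", "-c", "sleep 100 & sleep 100"],
                     start_new_session=True)
time.sleep(0.2)                      # give the children time to start
os.killpg(p.pid, signal.SIGTERM)     # signal the whole group at once
p.wait()
print(p.returncode)                  # negative: terminated by a signal
```

       Because the group is signalled rather than the single child,
       backgrounded grandchildren die too instead of being orphaned.
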
   SQL interface
       GNU parallel uses the DBURL from GNU sql to give database software,
       username, password, host, port, database, and table in a single
       string.

       The DBURL must point to a table name. The table will be dropped and
       created. The reason for not reusing an existing table is that the
       user may have added more input sources which would require more
       columns in the table. By prepending '+' to the DBURL the table will
       not be dropped.

       The table columns are similar to joblog with the addition of V1 ..
       Vn which are values from the input sources, and Stdout and Stderr
       which are the output from standard output and standard error,
       respectively.

       The Signal column has been renamed to _Signal due to Signal being a
       reserved word in MySQL.

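       The pieces packed into a DBURL can be illustrated by splitting one
       (the values below are placeholders, and the parsing is a simplified
       stand-in for GNU sql's):

```python
from urllib.parse import urlsplit

# vendor://user:password@host:port/database/table
dburl = "mysql://user:pass@db.example.com:3306/mydb/mytable"
u = urlsplit(dburl)
database, table = u.path.strip("/").split("/")
print(u.scheme, u.username, u.password, u.hostname, u.port,
      database, table)
```
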
   Logo
       The logo is inspired by the Cafe Wall illusion. The font is DejaVu
       Sans.

   Citation notice
       Funding a free software project is hard. GNU parallel is no
       exception. On top of that it seems the less visible a project is,
       the harder it is to get funding. And the nature of GNU parallel is
       that it will never be seen by "the guy with the checkbook", but only
       by the people doing the actual work.

       This problem has been covered by others - though no solution has
       been found:
       https://www.slideshare.net/NadiaEghbal/consider-the-maintainer
       https://www.numfocus.org/blog/why-is-numpy-only-now-getting-funded/

       Before implementing the citation notice it was discussed with the
       users:
       https://lists.gnu.org/archive/html/parallel/2013-11/msg00006.html

       There is no doubt that this is not an ideal solution, but no one has
       so far come up with an ideal solution - neither for funding GNU
       parallel nor other free software.

       If you believe you have the perfect solution, you should try it out,
       and if it works, you should post it on the email list. Ideas that
       will cost work and which have not been tested are, however, unlikely
       to be prioritized.


Ideas for new design

   Multiple processes working together
       Open3 is slow. Printing is slow. It would be good if they did not
       tie up resources, but were run in separate threads.

   --rrs on remote using a perl wrapper
       A rough sketch (not working code):

         ... | perl -pe '$/ = $recend . $recstart;
                substr($_, 0, length $recstart) = ""
                  if $. == 1 and substr($_, 0, length $recstart) eq $recstart;
                substr($_, -length $recend) = ""
                  if eof and substr($_, -length $recend) eq $recend'

       It ought to be possible to write a filter that removes the record
       separators on the fly instead of inside GNU parallel. This could
       then use more CPUs.

       Will that require 2x record size memory?

       Will that require 2x block size memory?


Historical decisions

       These decisions were relevant for earlier versions of GNU parallel,
       but not the current version. They are kept here as a historical
       record.

   --tollef
       You can read about the history of GNU parallel on
       https://www.gnu.org/software/parallel/history.html

       --tollef was included to make GNU parallel switch compatible with
       the parallel from moreutils (which is made by Tollef Fog Heen). This
       was done so that users of that parallel could easily port their use
       to GNU parallel: Simply set PARALLEL="--tollef" and that would be
       it.

       But several distributions chose to make --tollef global (by putting
       it into /etc/parallel/config) without making the users aware of
       this, and that caused much confusion when people tried out the
       examples from GNU parallel's man page and these did not work. The
       users became frustrated because the distribution did not make it
       clear to them that it had made --tollef global.

       So to lessen the frustration and the resulting support burden,
       --tollef was obsoleted 20130222 and removed one year later.

   Transferring of variables and functions
       Until 20150122 variables and functions were transferred by looking
       at $SHELL to see whether the shell was a *csh shell. If so the
       variables would be set using setenv. Otherwise they would be set
       using =. This caused the content of the variable to be repeated:

         echo $SHELL | grep "/t\{0,1\}csh" > /dev/null && setenv VAR foo ||
           export VAR=foo

20180222                          2018-03-13                PARALLEL_DESIGN(7)