CLUSH(1)                   ClusterShell User Manual                   CLUSH(1)

NAME

       clush - execute shell commands on a cluster

SYNOPSIS

       clush -a | -g group | -w nodes [OPTIONS]

       clush -a | -g group | -w nodes [OPTIONS] command

       clush -a | -g group | -w nodes [OPTIONS] --copy file | dir
              [ file | dir ...] [ --dest path ]

       clush -a | -g group | -w nodes [OPTIONS] --rcopy file | dir
              [ file | dir ...] [ --dest path ]

DESCRIPTION

       clush is a program for executing commands in parallel on a cluster and
       for gathering their results. clush can execute commands interactively
       or be used within shell scripts and other applications. It is a
       partial front-end to the ClusterShell library that provides a light,
       unified and robust parallel command execution framework; traditional
       shell scripts can thus benefit from some of the library features. By
       default, clush uses the Ssh worker of ClusterShell, which only
       requires ssh(1) (the OpenSSH client).

INVOCATION

       clush can be started non-interactively to run a shell command, or can
       be invoked as an interactive shell. To start an interactive clush
       session, invoke the clush command without providing command.

       Non-interactive mode
              When clush is started non-interactively, the command is
              executed on the specified remote hosts in parallel. If option
              -b or --dshbak is specified, clush waits for command completion
              and then displays gathered output results.

              The -w option allows you to specify remote hosts by using
              ClusterShell NodeSet syntax, including the node groups @group
              special syntax and the Extended Patterns syntax to benefit from
              NodeSet basic arithmetic (like @Agroup\&@Bgroup). See EXTENDED
              PATTERNS in nodeset(1) and also groups.conf(5) for more
              information.

              Unless the option --nostdin (or -n) is specified, clush detects
              when its standard input is connected to a terminal (as
              determined by isatty(3)). If actually connected to a terminal,
              clush listens to standard input while commands are running:
              pressing the Enter key displays the status of current nodes. If
              standard input is not connected to a terminal, and unless the
              option --nostdin is specified, clush binds the standard input
              of the remote commands to its own standard input, allowing
              scripting methods like:
                 # echo foo | clush -w node[40-42] -b cat
                 ---------------
                 node[40-42]
                 ---------------
                 foo

              Please see other examples in the EXAMPLES section below.

       Interactive session
              If a command is not specified, and its standard input is
              connected to a terminal, clush runs interactively. In this
              mode, clush uses the GNU readline library to read command
              lines. Readline provides commands for searching through the
              command history for lines containing a specified string. For
              instance, type Control-R to search in the history for the next
              entry matching the search string typed so far. clush also
              recognizes special single-character prefixes that allow the
              user to see and modify the current nodeset (the nodes where the
              commands are executed).

              Single-character interactive commands are:

                     clush> ?
                            show current nodeset

                     clush> @<NODESET>
                            set current nodeset

                     clush> +<NODESET>
                            add nodes to current nodeset

                     clush> -<NODESET>
                            remove nodes from current nodeset

                     clush> !COMMAND
                            execute COMMAND on the local system

                     clush> =
                            toggle the output format (gathered or standard
                            mode)

              To leave an interactive session, type quit or Control-D.

       Local execution ( --worker=exec or -R exec )
              Instead of running the provided command on remote nodes, clush
              can use the dedicated exec worker to launch the command
              locally, once per node. Special parameters can be used on the
              command line to build a different command for each node: %h or
              %host is replaced by the node name, and %n or %rank by the
              remote rank [0-N] (to get a literal %, use %%).

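       The %h/%n substitution performed by the exec worker can be illustrated
       without clush itself. The following plain-shell sketch is an
       illustration only, not ClusterShell code, and the node names and
       command template are made-up examples:

```shell
# Illustration only: mimic the exec worker's %h/%n substitution
# (clush also accepts the %host and %rank spellings).
template='ping -c1 %h (rank %n)'
rank=0
for host in node1 node2 node3; do
    cmd=${template//%h/$host}   # %h -> node name
    cmd=${cmd//%n/$rank}        # %n -> remote rank
    echo "$cmd"                 # the command the exec worker would run
    rank=$((rank + 1))
done
```

       With --worker=exec, clush performs this kind of expansion internally,
       so a single command template yields a distinct local command per node.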
       File copying mode ( --copy )
              When clush is started with the -c or --copy option, it will
              attempt to copy the specified files and/or directories to the
              provided target cluster nodes. If the --dest option is
              specified, the copied files are placed there.

       Reverse file copying mode ( --rcopy )
              When clush is started with the --rcopy option, it will attempt
              to retrieve the specified files and/or directories from the
              provided cluster nodes. If the --dest option is specified, it
              must be a directory path where the files will be stored with
              their hostname appended. If the destination path is not
              specified, the basename directory of the first file or
              directory is used as the local destination.

OPTIONS

       --version
              show clush version number and exit

       -s GROUPSOURCE, --groupsource=GROUPSOURCE
              optional groups.conf(5) group source to use

       -n, --nostdin
              do not watch for possible input from stdin; this should be
              used when clush is run in the background (or in scripts).

       --groupsconf=FILE
              use alternate config file for groups.conf(5)

       --conf=FILE
              use alternate config file for clush.conf(5)

       -O <KEY=VALUE>, --option=<KEY=VALUE>
              override any key=value clush.conf(5) options (repeat as needed)

       Selecting target nodes:

              -w NODES
                     nodes where to run the command

              -x NODES
                     exclude nodes from the node list

              -a, --all
                     run command on all nodes

              -g GROUP, --group=GROUP
                     run command on a group of nodes

              -X GROUP
                     exclude nodes from this group

              --hostfile=FILE, --machinefile=FILE
                     path to a file containing a list of single hosts, node
                     sets or node groups, separated by spaces and lines (may
                     be specified multiple times, one per file)

              --topology=FILE
                     topology configuration file to use for tree mode

              --pick=N
                     pick N node(s) at random in nodeset

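       As a sketch of --hostfile input (the file name and node names below
       are arbitrary examples), a hosts file may mix single hosts, node sets
       and node groups, separated by spaces and newlines:

```shell
# Hypothetical hosts file mixing single hosts, node sets and a node group.
cat > /tmp/clush_hosts.example <<'EOF'
login1
node[10-19] node[40-42]
@compute
EOF

# clush would then be invoked as (not executed here):
#   clush --hostfile=/tmp/clush_hosts.example -b uname -r
cat /tmp/clush_hosts.example
```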
       Output behaviour:

              -q, --quiet
                     be quiet, print essential output only

              -v, --verbose
                     be verbose, print informative messages

              -d, --debug
                     output more messages for debugging purposes

              -G, --groupbase
                     do not display group source prefix

              -L     disable header block and order output by nodes; if
                     -b/-B is not specified, clush will wait for all commands
                     to finish and then display aggregated output of commands
                     with same return codes, ordered by node name;
                     alternatively, when used in conjunction with -b/-B (eg.
                     -bL), clush will enable a "live gathering" of results by
                     line, so that each line is displayed as soon as possible
                     (eg. when all nodes have sent that line)

              -N     disable labeling of command line

              -P, --progress
                     show progress during command execution; if writing is
                     performed to standard input, the live progress indicator
                     will display the global bandwidth of data written to the
                     target nodes

              -b, --dshbak
                     display gathered results in a dshbak-like way (note: it
                     will only try to aggregate the output of commands with
                     same return codes)

              -B     like -b but including standard error

              -r, --regroup
                     fold nodeset using node groups

              -S, --maxrc
                     return the largest of command return codes

              --color=WHENCOLOR
                     clush can use the NO_COLOR, CLICOLOR and CLICOLOR_FORCE
                     environment variables. NO_COLOR takes precedence over
                     CLICOLOR_FORCE, which takes precedence over CLICOLOR.
                     When the --color option is used, these environment
                     variables are not taken into account. --color tells
                     whether to use ANSI colors to surround node or nodeset
                     prefix/header with escape sequences to display them in
                     color on the terminal. WHENCOLOR is never, always or
                     auto (which uses color if standard output/error refer to
                     a terminal). Colors are set to ESC[34m (blue foreground
                     text) for stdout and ESC[31m (red foreground text) for
                     stderr, and cannot be modified.

              --diff show diff between common outputs (the best reference
                     output is determined by focusing on the largest nodeset
                     with the smallest command return code)

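       The environment-variable precedence documented for --color can be
       sketched as follows. This is an illustration of the documented order
       only, not clush's actual implementation, and it is simplified to
       check only whether each variable is set:

```shell
# Simplified sketch of the documented precedence:
# NO_COLOR beats CLICOLOR_FORCE, which beats CLICOLOR.
# (--color itself, when given, overrides all three; not modeled here.)
want_color() {
    if [ -n "${NO_COLOR}" ]; then
        echo "never"        # NO_COLOR set: do not colorize
    elif [ -n "${CLICOLOR_FORCE}" ]; then
        echo "always"       # force colors even when output is not a tty
    elif [ -n "${CLICOLOR}" ]; then
        echo "auto"         # colorize only if output is a terminal
    else
        echo "auto"
    fi
}

NO_COLOR=1 CLICOLOR_FORCE=1 want_color   # NO_COLOR wins: prints "never"
```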
       File copying:

              -c, --copy
                     copy local file or directory to remote nodes

              --rcopy
                     copy file or directory from remote nodes

              --dest=DEST_PATH
                     destination file or directory on the nodes (optional:
                     use the first source directory path when not specified)

              -p     preserve modification times and modes

       Connection options:

              -f FANOUT, --fanout=FANOUT
                     do not execute more than FANOUT commands at the same
                     time, useful to limit resource usage. In tree mode, the
                     same fanout value is used on the head node and on each
                     gateway (the fanout value is propagated). That is, if
                     the fanout is 16, each gateway will initiate up to 16
                     connections to their target nodes at the same time. The
                     default fanout value is defined in clush.conf(5).

              -l USER, --user=USER
                     execute remote command as user

              -o OPTIONS, --options=OPTIONS
                     can be used to give ssh options, eg. -o "-p 2022 -i
                     ~/.ssh/myidrsa"; these options are added first to ssh
                     and override default ones

              -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
                     limit time to connect to a node

              -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
                     limit time for command to run on the node

              -R WORKER, --worker=WORKER
                     worker name to use for connection (exec, ssh, rsh,
                     pdsh, or the name of a Python worker module), default
                     is ssh

              --remote=REMOTE
                     whether to enable remote execution: in tree mode, 'yes'
                     forces connections to the leaf nodes for execution, 'no'
                     establishes connections up to the leaf parent nodes for
                     execution (default is 'yes')

       For a short explanation of these options, see -h, --help.

EXIT STATUS

       By default, an exit status of zero indicates success of the clush
       command but gives no information about the remote commands' exit
       status. However, when the -S option is specified, the exit status of
       clush is the largest value of the remote commands' return codes.

       For failed remote commands whose exit status is non-zero, and unless
       the combination of options -qS is specified, clush displays messages
       similar to:

       clush: node[40-42]: exited with exit code 1
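       The -S semantics above can be sketched in plain shell. This is an
       illustration only; the per-node exit codes below are made-up values:

```shell
# Illustration only: with -S, clush exits with the largest of the
# remote commands' return codes. Emulated here with hypothetical codes.
codes="0 1 0 77"
max=0
for rc in $codes; do
    if [ "$rc" -gt "$max" ]; then
        max=$rc
    fi
done
echo "clush -S would exit with status $max"   # prints: ...status 77
```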

EXAMPLES

   Remote parallel execution
       # clush -w node[3-5,62] uname -r
              Run command uname -r in parallel on nodes: node3, node4, node5
              and node62

   Local parallel execution
       # clush -w node[1-3] --worker=exec ping -c1 %host
              Run locally, in parallel, a ping command for nodes: node1,
              node2 and node3. You may also use -R exec as the shorter and
              pdsh-compatible option.

   Display features
       # clush -w node[3-5,62] -b uname -r
              Run command uname -r on node[3-5,62] and display gathered
              output results (integrated dshbak-like).

       # clush -w node[3-5,62] -bL uname -r
              Line mode: run command uname -r on node[3-5,62] and display
              gathered output results without default header block.

       # ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b
       xargs ls -l
              Search some files on node32 in /etc/yum.repos.d and use clush
              to list the matching ones on node[40-42], and use -b to
              display gathered results.

       # clush -w node[3-5,62] --diff dmidecode -s bios-version
              Run this Linux command to get BIOS version on node[3-5,62] and
              show version differences (if any).

   All nodes
       # clush -a uname -r
              Run command uname -r on all cluster nodes, see groups.conf(5)
              to setup all cluster nodes (all: field).

       # clush -a -x node[5,7] uname -r
              Run command uname -r on all cluster nodes except on nodes
              node5 and node7.

       # clush -a --diff cat /some/file
              Run command cat /some/file on all cluster nodes and show
              differences (if any), line by line, between common outputs.

   Node groups
       # clush -w @oss modprobe lustre
              Run command modprobe lustre on nodes from node group named
              oss, see groups.conf(5) to setup node groups (map: field).

       # clush -g oss modprobe lustre
              Same as previous example but using -g to avoid the @ group
              prefix.

       # clush -w @mds,@oss modprobe lustre
              You may specify several node groups by separating them with
              commas (please see EXTENDED PATTERNS in nodeset(1) and also
              groups.conf(5) for more information).

   Copy files
       # clush -w node[3-5,62] --copy /etc/motd
              Copy local file /etc/motd to remote nodes node[3-5,62].

       # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
              Copy local file /etc/motd to remote nodes node[3-5,62] at
              path /tmp/motd2.

       # clush -w node[3-5,62] -c /usr/share/doc/clustershell
              Recursively copy local directory /usr/share/doc/clustershell
              to the same path on remote nodes node[3-5,62].

       # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
              Copy /etc/motd from remote nodes node[3-5,62] to the local
              /tmp directory, each file having its remote hostname appended,
              eg. /tmp/motd.node3.

FILES

       /etc/clustershell/clush.conf
              System-wide clush configuration file.

       $XDG_CONFIG_HOME/clustershell/clush.conf
              User configuration file for clush. If $XDG_CONFIG_HOME is not
              defined, $HOME/.config/clustershell/clush.conf is used instead.

       $HOME/.local/etc/clustershell/clush.conf
              Local user configuration file for clush (default installation
              for pip --user)

       ~/.clush.conf
              Deprecated per-user clush configuration file.

       ~/.clush_history
              File in which interactive clush command history is saved.

SEE ALSO

       clubak(1), cluset(1), nodeset(1), readline(3), clush.conf(5),
       groups.conf(5).

       http://clustershell.readthedocs.org/

BUG REPORTS

       Use the following URL to submit a bug report or feedback:
              https://github.com/cea-hpc/clustershell/issues

AUTHOR

       Stephane Thiell <sthiell@stanford.edu>

COPYING

       GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)

1.8.4                             2021-11-03                          CLUSH(1)