CLUSH(1)                   ClusterShell User Manual                   CLUSH(1)

NAME
       clush - execute shell commands on a cluster

SYNOPSIS
       clush -a | -g group | -w nodes [OPTIONS]

       clush -a | -g group | -w nodes [OPTIONS] command

       clush -a | -g group | -w nodes [OPTIONS] --copy file | dir [ file |
       dir ...] [ --dest path ]

       clush -a | -g group | -w nodes [OPTIONS] --rcopy file | dir [ file |
       dir ...] [ --dest path ]

DESCRIPTION
       clush is a program for executing commands in parallel on a cluster
       and for gathering their results. clush can execute commands
       interactively or be used within shell scripts and other applications.
       It is a partial front-end to the ClusterShell library that provides a
       light, unified and robust parallel command execution framework. Thus,
       it allows traditional shell scripts to benefit from some of the
       library's features. By default, clush uses the Ssh worker of
       ClusterShell, which only requires ssh(1) (OpenSSH SSH client).

INVOCATION
       clush can be started non-interactively to run a shell command, or can
       be invoked as an interactive shell. To start a clush interactive
       session, invoke the clush command without providing command.

       Non-interactive mode
              When clush is started non-interactively, the command is
              executed on the specified remote hosts in parallel. If option
              -b or --dshbak is specified, clush waits for command
              completion and then displays gathered output results.

              The -w option allows you to specify remote hosts by using
              ClusterShell NodeSet syntax, including the node groups @group
              special syntax and the Extended Patterns syntax to benefit
              from NodeSet basic arithmetic (like @Agroup&@Bgroup). See
              EXTENDED PATTERNS in nodeset(1) and also groups.conf(5) for
              more information.

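The Extended Patterns arithmetic mentioned above behaves like set operations on node names; a rough Python illustration with hypothetical group contents (clush resolves real groups via groups.conf(5), not like this):

```python
# Hypothetical node groups; in practice these come from a group source
# configured in groups.conf(5).
groups = {
    "Agroup": {"node1", "node2", "node3"},
    "Bgroup": {"node2", "node3", "node4"},
}

# @Agroup&@Bgroup selects nodes present in both groups (intersection).
both = groups["Agroup"] & groups["Bgroup"]
print(sorted(both))  # ['node2', 'node3']
```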
              Unless the option --nostdin (or -n) is specified, clush
              detects when its standard input is connected to a terminal
              (as determined by isatty(3)). If actually connected to a
              terminal, clush listens to standard input while commands are
              running, waiting for an Enter key press. Doing so will
              display the status of current nodes. If standard input is not
              connected to a terminal, clush binds the standard input of
              the remote commands to its own standard input, allowing
              scripting methods like:

                     # echo foo | clush -w node[40-42] -b cat
                     ---------------
                     node[40-42]
                     ---------------
                     foo

              Please see further examples in the EXAMPLES section below.

       Interactive session
              If a command is not specified, and its standard input is
              connected to a terminal, clush runs interactively. In this
              mode, clush uses the GNU readline library to read command
              lines. Readline provides commands for searching through the
              command history for lines containing a specified string. For
              instance, type Control-R to search in the history for the
              next entry matching the search string typed so far. clush
              also recognizes special single-character prefixes that allow
              the user to see and modify the current nodeset (the nodes
              where the commands are executed).

              Single-character interactive commands are:

              clush> ?
                     show current nodeset

              clush> @<NODESET>
                     set current nodeset

              clush> +<NODESET>
                     add nodes to current nodeset

              clush> -<NODESET>
                     remove nodes from current nodeset

              clush> !COMMAND
                     execute COMMAND on the local system

              clush> =
                     toggle the output format (gathered or standard mode)

              To leave an interactive session, type quit or Control-D.

       Local execution ( --worker=exec or -R exec )
              Instead of running the provided command on remote nodes, clush
              can use the dedicated exec worker to launch the command
              locally, once per node. Parameters can be used in the command
              line to build a different command for each node: %h or %host
              is replaced by the node name and %n or %rank by the remote
              rank [0-N] (to get a literal %, use %%).

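The placeholder rules documented above can be sketched in Python (a minimal illustration of the stated %h/%host/%n/%rank/%% substitutions, not ClusterShell's actual implementation):

```python
import re

def expand(command, host, rank):
    """Expand exec-worker style placeholders for one node:
    %h/%host -> node name, %n/%rank -> rank, %% -> literal %."""
    def repl(match):
        token = match.group(0)
        if token == "%%":
            return "%"
        if token in ("%h", "%host"):
            return host
        return str(rank)  # %n or %rank
    # Longer tokens listed first so %host is not matched as %h + "ost".
    return re.sub(r"%%|%host|%rank|%h|%n", repl, command)

# One command line per node, as with: clush -w node[1-3] -R exec ping -c1 %host
for rank, host in enumerate(["node1", "node2", "node3"]):
    print(expand("ping -c1 %host", host, rank))
```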
       File copying mode ( --copy )
              When clush is started with the -c or --copy option, it will
              attempt to copy specified files and/or directories to the
              provided cluster nodes. The --dest option can be used to
              specify a single path where all the file(s) should be copied
              to on the target nodes. In the absence of --dest, clush will
              attempt to copy each file or directory found in the command
              line to the same location on the target nodes.

       Reverse file copying mode ( --rcopy )
              When clush is started with the --rcopy option, it will attempt
              to retrieve specified files and/or directories from the
              provided cluster nodes. If the --dest option is specified, it
              must be a directory path where the files will be stored with
              their hostname appended. If the destination path is not
              specified, it will take each file or directory's parent
              directory as the local destination.

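The destination naming rule described above can be sketched as follows (an illustration of the documented behavior only; `dest=None` models an unspecified --dest):

```python
import os

def rcopy_dest(src, host, dest=None):
    """Local destination for a file fetched from one node (--rcopy rule):
    store under dest (or, if dest is None, under the source's parent
    directory) with the remote hostname appended."""
    directory = dest if dest is not None else os.path.dirname(src)
    return os.path.join(directory, os.path.basename(src) + "." + host)

print(rcopy_dest("/etc/motd", "node3", "/tmp"))  # /tmp/motd.node3
print(rcopy_dest("/etc/motd", "node4"))          # /etc/motd.node4
```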
OPTIONS
       --version
              show clush version number and exit

       -s GROUPSOURCE, --groupsource=GROUPSOURCE
              optional groups.conf(5) group source to use

       -n, --nostdin
              do not watch for possible input from stdin; this should be
              used when clush is run in the background (or in scripts)

       --groupsconf=FILE
              use alternate config file for groups.conf(5)

       --conf=FILE
              use alternate config file for clush.conf(5)

       -O <KEY=VALUE>, --option=<KEY=VALUE>
              override any key=value clush.conf(5) options (repeat as
              needed)

       Selecting target nodes:

       -w NODES
              nodes where to run the command

       -x NODES
              exclude nodes from the node list

       -a, --all
              run command on all nodes

       -g GROUP, --group=GROUP
              run command on a group of nodes

       -X GROUP
              exclude nodes from this group

       --hostfile=FILE, --machinefile=FILE
              path to a file containing a list of single hosts, node sets
              or node groups, separated by spaces and lines (may be
              specified multiple times, one per file)

       --topology=FILE
              topology configuration file to use for tree mode

       --pick=N
              pick N node(s) at random in nodeset

       Output behaviour:

       -q, --quiet
              be quiet, print essential output only

       -v, --verbose
              be verbose, print informative messages

       -d, --debug
              output more messages for debugging purposes

       -G, --groupbase
              do not display group source prefix

       -L     disable header block and order output by nodes; if -b/-B is
              not specified, clush will wait for all commands to finish and
              then display aggregated output of commands with same return
              codes, ordered by node name; alternatively, when used in
              conjunction with -b/-B (eg. -bL), clush will enable a "live
              gathering" of results by line, so that the next line is
              displayed as soon as possible (eg. when all nodes have sent
              the line)

       -N     disable labeling of command line

       -P, --progress
              show progress during command execution; if writing is
              performed to standard input, the live progress indicator will
              display the global bandwidth of data written to the target
              nodes

       -b, --dshbak
              display gathered results in a dshbak-like way (note: it will
              only try to aggregate the output of commands with same return
              codes)

       -B     like -b but including standard error

       -r, --regroup
              fold nodeset using node groups

       -S, --maxrc
              return the largest of command return codes

       --color=WHENCOLOR
              clush can use the NO_COLOR, CLICOLOR and CLICOLOR_FORCE
              environment variables. NO_COLOR takes precedence over
              CLICOLOR_FORCE, which takes precedence over CLICOLOR. When
              the --color option is used, these environment variables are
              not taken into account. --color tells whether to use ANSI
              colors to surround node or nodeset prefix/header with escape
              sequences to display them in color on the terminal. WHENCOLOR
              is never, always or auto (which uses color if standard
              output/error refer to a terminal). Colors are set to ESC[34m
              (blue foreground text) for stdout and ESC[31m (red foreground
              text) for stderr, and cannot be modified.

       --diff show diff between common outputs (find the best reference
              output by focusing on the largest nodeset and also the
              smallest command return code)

       --outdir=OUTDIR
              output directory for stdout files (OPTIONAL)

       --errdir=ERRDIR
              output directory for stderr files (OPTIONAL)

       File copying:

       -c, --copy
              copy local file or directory to remote nodes

       --rcopy
              copy file or directory from remote nodes

       --dest=DEST_PATH
              destination file or directory on the nodes (optional: use the
              first source directory path when not specified)

       -p     preserve modification times and modes

       Connection options:

       -f FANOUT, --fanout=FANOUT
              do not execute more than FANOUT commands at the same time,
              useful to limit resource usage. In tree mode, the same fanout
              value is used on the head node and on each gateway (the
              fanout value is propagated). That is, if the fanout is 16,
              each gateway will initiate up to 16 connections to their
              target nodes at the same time. The default fanout value is
              defined in clush.conf(5).

       -l USER, --user=USER
              execute remote command as user

       -o OPTIONS, --options=OPTIONS
              can be used to give ssh options, eg. -o "-p 2022 -i
              ~/.ssh/myidrsa"; these options are added first to ssh and
              override default ones

       -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
              limit time to connect to a node

       -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
              limit time for command to run on the node

       -m MODE, --mode=MODE
              run mode; define MODEs in <confdir>/*.conf

       -R WORKER, --worker=WORKER
              worker name to use for connection (exec, ssh, rsh, pdsh, or
              the name of a Python worker module), default is ssh

       --remote=REMOTE
              whether to enable remote execution: in tree mode, 'yes'
              forces connections to the leaf nodes for execution, 'no'
              establishes connections up to the leaf parent nodes for
              execution (default is 'yes')

       For a short explanation of these options, see -h, --help.

EXIT STATUS
       By default, an exit status of zero indicates success of the clush
       command but gives no information about the remote commands' exit
       status. However, when the -S option is specified, the exit status of
       clush is the largest value of the remote commands' return codes.

       For failed remote commands whose exit status is non-zero, and unless
       the combination of options -qS is specified, clush displays messages
       similar to:

              clush: node[40-42]: exited with exit code 1

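The -S/--maxrc semantics above amount to taking the maximum over the per-node return codes; a minimal sketch with hypothetical return codes (not real clush output):

```python
# Hypothetical per-node return codes gathered after a run.
rc_by_node = {"node40": 0, "node41": 1, "node42": 0}

# Default behavior: clush itself exits 0 regardless of remote codes.
default_rc = 0

# With -S/--maxrc: exit with the largest remote return code.
maxrc = max(rc_by_node.values())
print(maxrc)  # 1
```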
EXAMPLES
       Remote parallel execution
              # clush -w node[3-5,62] uname -r
              Run command uname -r in parallel on nodes: node3, node4,
              node5 and node62

       Local parallel execution
              # clush -w node[1-3] --worker=exec ping -c1 %host
              Run locally, in parallel, a ping command for nodes: node1,
              node2 and node3. You may also use -R exec as the shorter and
              pdsh-compatible option.

       Display features
              # clush -w node[3-5,62] -b uname -r
              Run command uname -r on node[3-5,62] and display gathered
              output results (integrated dshbak-like).

              # clush -w node[3-5,62] -bL uname -r
              Line mode: run command uname -r on node[3-5,62] and display
              gathered output results without the default header block.

              # ssh node32 find /etc/yum.repos.d -type f | clush -w
              node[40-42] -b xargs ls -l
              Search some files on node32 in /etc/yum.repos.d and use clush
              to list the matching ones on node[40-42], and use -b to
              display gathered results.

              # clush -w node[3-5,62] --diff dmidecode -s bios-version
              Run this Linux command to get the BIOS version on
              node[3-5,62] and show version differences (if any).

       All nodes
              # clush -a uname -r
              Run command uname -r on all cluster nodes, see groups.conf(5)
              to set up all cluster nodes (all: field).

              # clush -a -x node[5,7] uname -r
              Run command uname -r on all cluster nodes except nodes node5
              and node7.

              # clush -a --diff cat /some/file
              Run command cat /some/file on all cluster nodes and show
              differences (if any), line by line, between common outputs.

       Node groups
              # clush -w @oss modprobe lustre
              Run command modprobe lustre on nodes from the node group
              named oss, see groups.conf(5) to set up node groups (map:
              field).

              # clush -g oss modprobe lustre
              Same as previous example but using -g to avoid the @ group
              prefix.

              # clush -w @mds,@oss modprobe lustre
              You may specify several node groups by separating them with
              commas (please see EXTENDED PATTERNS in nodeset(1) and also
              groups.conf(5) for more information).

       Copy files
              # clush -w node[3-5,62] --copy /etc/motd
              Copy local file /etc/motd to remote nodes node[3-5,62].

              # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
              Copy local file /etc/motd to remote nodes node[3-5,62] at
              path /tmp/motd2.

              # clush -w node[3-5,62] -c /usr/share/doc/clustershell
              Recursively copy local directory /usr/share/doc/clustershell
              to the same path on remote nodes node[3-5,62].

              # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
              Copy /etc/motd from remote nodes node[3-5,62] to the local
              /tmp directory, each file having its remote hostname
              appended, eg. /tmp/motd.node3.

FILES
       $CLUSTERSHELL_CFGDIR/clush.conf
              Global clush configuration file. If $CLUSTERSHELL_CFGDIR is
              not defined, /etc/clustershell/clush.conf is used instead.

       $XDG_CONFIG_HOME/clustershell/clush.conf
              User configuration file for clush. If $XDG_CONFIG_HOME is not
              defined, $HOME/.config/clustershell/clush.conf is used
              instead.

       $HOME/.local/etc/clustershell/clush.conf
              Local user configuration file for clush (default installation
              for pip --user)

       ~/.clush.conf
              Deprecated per-user clush configuration file.

       ~/.clush_history
              File in which interactive clush command history is saved.

SEE ALSO
       clubak(1), cluset(1), nodeset(1), readline(3), clush.conf(5),
       groups.conf(5).

       http://clustershell.readthedocs.org/

BUG REPORTS
       Use the following URL to submit a bug report or feedback:
              https://github.com/cea-hpc/clustershell/issues

AUTHOR
       Stephane Thiell <sthiell@stanford.edu>

COPYING
       GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)

1.9.2                             2023-09-29                          CLUSH(1)