CLUSH(1)                   ClusterShell User Manual                   CLUSH(1)

NAME
       clush - execute shell commands on a cluster

SYNOPSIS
       clush -a | -g group | -w nodes [OPTIONS]

       clush -a | -g group | -w nodes [OPTIONS] command

       clush -a | -g group | -w nodes [OPTIONS] --copy file | dir [ file |
       dir ...] [ --dest path ]

       clush -a | -g group | -w nodes [OPTIONS] --rcopy file | dir [ file |
       dir ...] [ --dest path ]

DESCRIPTION
       clush is a program for executing commands in parallel on a cluster
       and for gathering their results. clush executes commands
       interactively or can be used within shell scripts and other
       applications. It is a partial front-end to the ClusterShell library
       that provides a light, unified and robust parallel command execution
       framework. Thus, it allows traditional shell scripts to benefit from
       some of the library features. By default, clush uses the Ssh worker
       of ClusterShell, which only requires ssh(1) (OpenSSH SSH client).

INVOCATION
       clush can be started non-interactively to run a shell command, or can
       be invoked as an interactive shell. To start a clush interactive
       session, invoke the clush command without providing a command.

   Non-interactive mode
       When clush is started non-interactively, the command is executed on
       the specified remote hosts in parallel. If option -b or --dshbak is
       specified, clush waits for command completion and then displays
       gathered output results.

       The -w option allows you to specify remote hosts by using
       ClusterShell NodeSet syntax, including the node groups @group special
       syntax and the Extended Patterns syntax to benefit from NodeSet basic
       arithmetic (like @Agroup&@Bgroup). See EXTENDED PATTERNS in
       nodeset(1) and also groups.conf(5) for more information.

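       For instance, assuming node groups @rack1 and @gpu are defined in
       your group source (illustrative names), the following command targets
       only the GPU nodes of rack 1; note that & must be escaped from the
       shell:

              # clush -w @rack1\&@gpu -b uname -r
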
       Unless the option --nostdin (or -n) is specified, clush detects when
       its standard input is connected to a terminal (as determined by
       isatty(3)). If it is connected to a terminal, clush listens to
       standard input while commands are running, waiting for an Enter key
       press. Doing so will display the status of current nodes. If standard
       input is not connected to a terminal, and unless the option --nostdin
       is specified, clush binds the standard input of the remote commands
       to its own standard input, allowing scripting methods like:
              # echo foo | clush -w node[40-42] -b cat
              ---------------
              node[40-42]
              ---------------
              foo

       Please see some other great examples in the EXAMPLES section below.

   Interactive session
       If a command is not specified, and its standard input is connected to
       a terminal, clush runs interactively. In this mode, clush uses the
       GNU readline library to read command lines. Readline provides
       commands for searching through the command history for lines
       containing a specified string. For instance, type Control-R to search
       in the history for the next entry matching the search string typed so
       far. clush also recognizes special single-character prefixes that
       allow the user to see and modify the current nodeset (the nodes where
       the commands are executed).

       Single-character interactive commands are:

              clush> ?
                     show current nodeset

              clush> @<NODESET>
                     set current nodeset

              clush> +<NODESET>
                     add nodes to current nodeset

              clush> -<NODESET>
                     remove nodes from current nodeset

              clush> !COMMAND
                     execute COMMAND on the local system

              clush> =
                     toggle the output format (gathered or standard mode)

       To leave an interactive session, type quit or Control-D.

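       As a purely illustrative session (node names are placeholders), the
       following commands set the nodeset to node[11-14], add node18, then
       remove node12; the final ? displays the resulting nodeset,
       node[11,13-14,18]:

              clush> @node[11-14]
              clush> +node18
              clush> -node12
              clush> ?
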
   Local execution ( --worker=exec or -R exec )
       Instead of running the provided command on remote nodes, clush can
       use the dedicated exec worker to launch the command locally, once for
       each node. Some parameters can be used in the command line to build a
       different command for each node: %h or %host is replaced by the node
       name and %n or %rank by the remote rank [0-N] (to get a literal %,
       use %%).

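       As an illustration (node names are placeholders, output order may
       vary), the following runs a local echo for each node, substituting
       the node name and its rank; each line is labeled with the
       corresponding node name:

              # clush -w node[1-3] -R exec echo %host is rank %n
              node1: node1 is rank 0
              node2: node2 is rank 1
              node3: node3 is rank 2
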
   File copying mode ( --copy )
       When clush is started with the -c or --copy option, it will attempt
       to copy the specified file and/or dir to the provided target cluster
       nodes. If the --dest option is specified, it will put the copied
       files there.

   Reverse file copying mode ( --rcopy )
       When clush is started with the --rcopy option, it will attempt to
       retrieve the specified file and/or dir from the provided cluster
       nodes. If the --dest option is specified, it must be a directory path
       where the files will be stored with their hostname appended. If the
       destination path is not specified, the directory of the first file or
       dir basename is used as the local destination.

OPTIONS
       --version
              show clush version number and exit

       -s GROUPSOURCE, --groupsource=GROUPSOURCE
              optional groups.conf(5) group source to use

       -n, --nostdin
              do not watch for possible input from stdin; this should be
              used when clush is run in the background (or in scripts).

       --sudo enable sudo password prompt: a prompt will ask for your sudo
              password and sudo will be used to run your commands on the
              target nodes. The password must be the same on all target
              nodes. The actual sudo command used by clush can be changed in
              clush.conf(5) or on the command line using -O
              sudo_command="...". The configured sudo_command must be able
              to read a password on stdin followed by a newline (which is
              what sudo -S does).

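              For example (illustrative node names), the following prompts
              once for the sudo password, then runs the command as root on
              both nodes:

                     # clush -w node[1-2] --sudo -b id -u
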
       --groupsconf=FILE
              use alternate config file for groups.conf(5)

       --conf=FILE
              use alternate config file for clush.conf(5)

       -O <KEY=VALUE>, --option=<KEY=VALUE>
              override any key=value clush.conf(5) options (repeat as
              needed)

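              For example (illustrative node names), clush.conf(5) settings
              such as fanout can be overridden for a single run:

                     # clush -w node[1-16] -O fanout=4 -b uname -r
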
       Selecting target nodes:

              -w NODES
                     nodes where to run the command

              -x NODES
                     exclude nodes from the node list

              -a, --all
                     run command on all nodes

              -g GROUP, --group=GROUP
                     run command on a group of nodes

              -X GROUP
                     exclude nodes from this group

              --hostfile=FILE, --machinefile=FILE
                     path to a file containing a list of single hosts, node
                     sets or node groups, separated by spaces or newlines
                     (may be specified multiple times, one per file)

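                     For example (illustrative path and names), a hostfile
                     may mix single hosts, node sets and node groups:

                            # cat /tmp/hosts.txt
                            login1
                            node[2-4] @gpu
                            # clush --hostfile /tmp/hosts.txt -b uname -r
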
              --topology=FILE
                     topology configuration file to use for tree mode

              --pick=N
                     pick N node(s) at random in nodeset

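                     For example (illustrative node names), the following
                     runs a command on two nodes chosen at random among
                     node[1-9]:

                            # clush -w node[1-9] --pick=2 -b uname -r
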
       Output behaviour:

              -q, --quiet
                     be quiet, print essential output only

              -v, --verbose
                     be verbose, print informative messages

              -d, --debug
                     output more messages for debugging purpose

              -G, --groupbase
                     do not display group source prefix

              -L     disable header block and order output by nodes; if
                     -b/-B is not specified, clush will wait for all
                     commands to finish and then display aggregated output
                     of commands with same return codes, ordered by node
                     name; alternatively, when used in conjunction with
                     -b/-B (eg. -bL), clush will enable a "live gathering"
                     of results by line, so that the next line is displayed
                     as soon as possible (eg. when all nodes have sent the
                     line)

              -N     disable labeling of command line

              -P, --progress
                     show progress during command execution; if writing is
                     performed to standard input, the live progress
                     indicator will display the global bandwidth of data
                     written to the target nodes

              -b, --dshbak
                     display gathered results in a dshbak-like way (note: it
                     will only try to aggregate the output of commands with
                     same return codes)

              -B     like -b but including standard error

              -r, --regroup
                     fold nodeset using node groups

              -S, --maxrc
                     return the largest of command return codes

              --color=WHENCOLOR
                     clush can use NO_COLOR, CLICOLOR and CLICOLOR_FORCE
                     environment variables. NO_COLOR takes precedence over
                     CLICOLOR_FORCE, which takes precedence over CLICOLOR.
                     When the --color option is used, these environment
                     variables are not taken into account. --color tells
                     whether to use ANSI colors to surround node or nodeset
                     prefix/header with escape sequences to display them in
                     color on the terminal. WHENCOLOR is never, always or
                     auto (which uses color if standard output/error refer
                     to a terminal). Colors are set to ESC[34m (blue
                     foreground text) for stdout and ESC[31m (red foreground
                     text) for stderr, and cannot be modified.

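                     For example (illustrative node names), colored headers
                     can be forced when piping gathered output through a
                     pager that passes ANSI escape sequences through:

                            # clush -w node[1-4] --color=always -b uname -r | less -R
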
              --diff show diff between common outputs (the best reference
                     output is found by focusing on the largest nodeset with
                     the smallest command return code)

              --outdir=OUTDIR
                     output directory for stdout files (OPTIONAL)

              --errdir=ERRDIR
                     output directory for stderr files (OPTIONAL)

       File copying:

              -c, --copy
                     copy local file or directory to remote nodes

              --rcopy
                     copy file or directory from remote nodes

              --dest=DEST_PATH
                     destination file or directory on the nodes (optional:
                     use the first source directory path when not specified)

              -p     preserve modification times and modes

       Connection options:

              -f FANOUT, --fanout=FANOUT
                     do not execute more than FANOUT commands at the same
                     time, useful to limit resource usage. In tree mode, the
                     same fanout value is used on the head node and on each
                     gateway (the fanout value is propagated). That is, if
                     the fanout is 16, each gateway will initiate up to 16
                     connections to their target nodes at the same time.
                     Default fanout value is defined in clush.conf(5).

              -l USER, --user=USER
                     execute remote command as user

              -o OPTIONS, --options=OPTIONS
                     can be used to give ssh options, eg. -o "-p 2022 -i
                     ~/.ssh/myidrsa"; these options are added first to ssh
                     and override default ones

              -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
                     limit time to connect to a node

              -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
                     limit time for command to run on the node

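                     For example (illustrative node names), the following
                     limits concurrency to 4 connections and bounds both
                     connect time and command time, in seconds:

                            # clush -w node[1-9] -f 4 -t 5 -u 30 -b uptime
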
              -m MODE, --mode=MODE
                     run mode; define MODEs in <confdir>/*.conf

              -R WORKER, --worker=WORKER
                     worker name to use for connection (exec, ssh, rsh,
                     pdsh, or the name of a Python worker module), default
                     is ssh

              --remote=REMOTE
                     whether to enable remote execution: in tree mode, 'yes'
                     forces connections to the leaf nodes for execution,
                     'no' establishes connections up to the leaf parent
                     nodes for execution (default is 'yes')

       For a short explanation of these options, see -h, --help.

EXIT STATUS
       By default, an exit status of zero indicates success of the clush
       command but gives no information about the remote commands' exit
       status. However, when the -S option is specified, the exit status of
       clush is the largest value of the remote command return codes.

       For failed remote commands whose exit status is non-zero, and unless
       the combination of options -qS is specified, clush displays messages
       similar to:

              clush: node[40-42]: exited with exit code 1

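       For example (illustrative; assuming /scratch is missing on at least
       one node), -S combined with -q makes clush usable as a silent
       cluster-wide check in scripts:

              # clush -qS -w node[40-42] test -d /scratch; echo $?
              1
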
EXAMPLES
   Remote parallel execution
       # clush -w node[3-5,62] uname -r
              Run command uname -r in parallel on nodes: node3, node4, node5
              and node62

   Local parallel execution
       # clush -w node[1-3] --worker=exec ping -c1 %host
              Run locally, in parallel, a ping command for nodes: node1,
              node2 and node3. You may also use -R exec as the shorter and
              pdsh-compatible option.

   Display features
       # clush -w node[3-5,62] -b uname -r
              Run command uname -r on node[3-5,62] and display gathered
              output results (integrated dshbak-like).

       # clush -w node[3-5,62] -bL uname -r
              Line mode: run command uname -r on node[3-5,62] and display
              gathered output results without the default header block.

       # ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b
       xargs ls -l
              Search some files on node32 in /etc/yum.repos.d and use clush
              to list the matching ones on node[40-42], and use -b to
              display gathered results.

       # clush -w node[3-5,62] --diff dmidecode -s bios-version
              Run this Linux command to get BIOS version on node[3-5,62]
              and show version differences (if any).

   All nodes
       # clush -a uname -r
              Run command uname -r on all cluster nodes, see groups.conf(5)
              to set up all cluster nodes (all: field).

       # clush -a -x node[5,7] uname -r
              Run command uname -r on all cluster nodes except on nodes
              node5 and node7.

       # clush -a --diff cat /some/file
              Run command cat /some/file on all cluster nodes and show
              differences (if any), line by line, between common outputs.

   Node groups
       # clush -w @oss modprobe lustre
              Run command modprobe lustre on nodes from node group named
              oss, see groups.conf(5) to set up node groups (map: field).

       # clush -g oss modprobe lustre
              Same as previous example but using -g to avoid the @ group
              prefix.

       # clush -w @mds,@oss modprobe lustre
              You may specify several node groups by separating them with
              commas (please see EXTENDED PATTERNS in nodeset(1) and also
              groups.conf(5) for more information).

   Copy files
       # clush -w node[3-5,62] --copy /etc/motd
              Copy local file /etc/motd to remote nodes node[3-5,62].

       # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
              Copy local file /etc/motd to remote nodes node[3-5,62] at path
              /tmp/motd2.

       # clush -w node[3-5,62] -c /usr/share/doc/clustershell
              Recursively copy local directory /usr/share/doc/clustershell
              to the same path on remote nodes node[3-5,62].

       # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
              Copy /etc/motd from remote nodes node[3-5,62] to local /tmp
              directory, each file having its remote hostname appended, eg.
              /tmp/motd.node3.

FILES
       $CLUSTERSHELL_CFGDIR/clush.conf
              Global clush configuration file. If $CLUSTERSHELL_CFGDIR is
              not defined, /etc/clustershell/clush.conf is used instead.

       $XDG_CONFIG_HOME/clustershell/clush.conf
              User configuration file for clush. If $XDG_CONFIG_HOME is not
              defined, $HOME/.config/clustershell/clush.conf is used
              instead.

       $HOME/.local/etc/clustershell/clush.conf
              Local user configuration file for clush (default installation
              for pip --user)

       ~/.clush.conf
              Deprecated per-user clush configuration file.

       ~/.clush_history
              File in which interactive clush command history is saved.

SEE ALSO
       clubak(1), cluset(1), nodeset(1), readline(3), clush.conf(5),
       groups.conf(5).

       http://clustershell.readthedocs.org/

BUG REPORTS
       Use the following URL to submit a bug report or feedback:
              https://github.com/cea-hpc/clustershell/issues

AUTHOR
       Stephane Thiell <sthiell@stanford.edu>

COPYING
       GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)

1.9                               2022-11-25                          CLUSH(1)