CLUSH(1)                    ClusterShell User Manual                    CLUSH(1)

NAME
    clush - execute shell commands on a cluster

SYNOPSIS
    clush -a | -g group | -w nodes [OPTIONS]

    clush -a | -g group | -w nodes [OPTIONS] command

    clush -a | -g group | -w nodes [OPTIONS] --copy file | dir [ file | dir ... ] [ --dest path ]

    clush -a | -g group | -w nodes [OPTIONS] --rcopy file | dir [ file | dir ... ] [ --dest path ]

DESCRIPTION
    clush is a program for executing commands in parallel on a cluster and for
    gathering their results. clush executes commands interactively or can be
    used within shell scripts and other applications. It is a partial front-end
    to the ClusterShell library that provides a light, unified and robust
    parallel command execution framework, allowing traditional shell scripts to
    benefit from some of the library's features. By default, clush uses the Ssh
    worker of ClusterShell, which only requires ssh(1) (the OpenSSH SSH
    client).

    clush can be started non-interactively to run a shell command, or it can be
    invoked as an interactive shell. To start an interactive clush session,
    invoke the clush command without providing command.

    Non-interactive mode
        When clush is started non-interactively, the command is executed on the
        specified remote hosts in parallel. If option -b or --dshbak is
        specified, clush waits for command completion and then displays the
        gathered output results.

        The -w option allows you to specify remote hosts using the ClusterShell
        NodeSet syntax, including the @group special syntax for node groups and
        the Extended Patterns syntax to benefit from basic NodeSet arithmetic
        (like @Agroup&@Bgroup). See EXTENDED PATTERNS in nodeset(1) and also
        groups.conf(5) for more information.

        Unless the --nostdin (or -n) option is specified, clush detects when
        its standard input is connected to a terminal (as determined by
        isatty(3)). If it is connected to a terminal, clush listens to standard
        input while commands are running: pressing the Enter key displays the
        status of the current nodes. If standard input is not connected to a
        terminal, clush binds the standard input of the remote commands to its
        own standard input, allowing scripting methods like:

            # echo foo | clush -w node[40-42] -b cat
            ---------------
            node[40-42]
            ---------------
            foo

        Please see other examples in the EXAMPLES section below.

    Interactive session
        If a command is not specified, and its standard input is connected to a
        terminal, clush runs interactively. In this mode, clush uses the GNU
        readline library to read command lines. Readline provides commands for
        searching through the command history for lines containing a specified
        string. For instance, type Control-R to search in the history for the
        next entry matching the search string typed so far. clush also
        recognizes special single-character prefixes that allow the user to see
        and modify the current nodeset (the nodes where the commands are
        executed).

        Single-character interactive commands are:

        clush> ?
            show current nodeset

        clush> @<NODESET>
            set current nodeset

        clush> +<NODESET>
            add nodes to current nodeset

        clush> -<NODESET>
            remove nodes from current nodeset

        clush> !COMMAND
            execute COMMAND on the local system

        clush> =
            toggle the output format (gathered or standard mode)

        To leave an interactive session, type quit or Control-D.

    Local execution ( --worker=exec or -R exec )
        Instead of running the provided command on remote nodes, clush can use
        the dedicated exec worker to launch the command locally, once for each
        node. Parameters can be used in the command line to build a different
        command for each node: %h or %host is replaced by the node name and %n
        or %rank by the remote rank [0-N] (to get a literal %, use %%).

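        The placeholder expansion can be illustrated with a plain shell sketch
        (the sed pipeline below only emulates what the exec worker does
        internally; the node name and rank are hypothetical):

```shell
# Emulate the exec worker's placeholder expansion for a single node.
# Longer placeholders are substituted first so that %h does not eat
# the prefix of %host (and %n the prefix of %rank):
template='ping -c1 %host # rank %n (%%)'
node=node1
rank=0
cmd=$(printf '%s\n' "$template" | sed \
    -e "s/%host/$node/g" -e "s/%rank/$rank/g" \
    -e "s/%h/$node/g"    -e "s/%n/$rank/g" \
    -e 's/%%/%/g')
echo "$cmd"
```

        clush itself performs this substitution per target node, e.g. in the
        invocation clush -w node[1-3] -R exec ping -c1 %host.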
    File copying mode ( --copy )
        When clush is started with the -c or --copy option, it attempts to
        copy the specified files and/or directories to the provided target
        cluster nodes. If the --dest option is specified, the copied files are
        placed at that path.

    Reverse file copying mode ( --rcopy )
        When clush is started with the --rcopy option, it attempts to retrieve
        the specified files and/or directories from the provided cluster
        nodes. If the --dest option is specified, it must be a directory path
        where the files will be stored with their hostname appended. If the
        destination path is not specified, the directory of the first file or
        dir basename is used as the local destination.

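        The resulting local file naming can be sketched as follows (the
        destination, source file and node name are hypothetical; clush derives
        these names itself):

```shell
# --rcopy stores each retrieved file as <dest>/<basename>.<hostname>:
dest=/tmp
src=/etc/motd
host=node3
target="$dest/$(basename "$src").$host"
echo "$target"
```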
OPTIONS
    --version
        show clush version number and exit

    -s GROUPSOURCE, --groupsource=GROUPSOURCE
        optional groups.conf(5) group source to use

    -n, --nostdin
        do not watch for possible input from stdin; this should be used when
        clush is run in the background (or in scripts)

    --groupsconf=FILE
        use alternate config file for groups.conf(5)

    --conf=FILE
        use alternate config file for clush.conf(5)

    -O <KEY=VALUE>, --option=<KEY=VALUE>
        override any key=value clush.conf(5) option (repeat as needed)

    Selecting target nodes:

        -w NODES
            nodes where to run the command

        -x NODES
            exclude nodes from the node list

        -a, --all
            run command on all nodes

        -g GROUP, --group=GROUP
            run command on a group of nodes

        -X GROUP
            exclude nodes from this group

        --hostfile=FILE, --machinefile=FILE
            path to a file containing a list of single hosts, node sets or
            node groups, separated by spaces and lines (may be specified
            multiple times, one per file)

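            For instance, a hostfile is just a plain text list mixing single
            hosts, node sets and node groups (the file path and names below
            are hypothetical):

```shell
# Create a hostfile: entries may be separated by spaces and/or newlines.
cat > /tmp/myhosts <<'EOF'
login1
node[1-4] node[10-12]
@compute
EOF
# It could then be used as:
#   clush --hostfile=/tmp/myhosts uname -r
cat /tmp/myhosts
```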
        --topology=FILE
            topology configuration file to use for tree mode

        --pick=N
            pick N node(s) at random in nodeset

    Output behaviour:

        -q, --quiet
            be quiet, print essential output only

        -v, --verbose
            be verbose, print informative messages

        -d, --debug
            output more messages for debugging purposes

        -G, --groupbase
            do not display group source prefix

        -L  disable header block and order output by nodes; if -b/-B is not
            specified, clush will wait for all commands to finish and then
            display the aggregated output of commands with the same return
            code, ordered by node name; alternatively, when used in
            conjunction with -b/-B (eg. -bL), clush will enable a "live
            gathering" of results by line, such that the next line is
            displayed as soon as possible (eg. when all nodes have sent the
            line)

        -N  disable labeling of command line

        -P, --progress
            show progress during command execution; if writing is performed
            to standard input, the live progress indicator will display the
            global bandwidth of data written to the target nodes

        -b, --dshbak
            display gathered results in a dshbak-like way (note: it will only
            try to aggregate the output of commands with the same return
            code)

        -B  like -b but including standard error

        -r, --regroup
            fold nodeset using node groups

        -S  return the largest of command return codes

        --color=WHENCOLOR
            whether to use ANSI colors to surround node or nodeset
            prefix/header with escape sequences to display them in color on
            the terminal. WHENCOLOR is never, always or auto (which uses
            color if standard output/error refers to a terminal). Colors are
            set to ESC[34m (blue foreground text) for stdout and ESC[31m (red
            foreground text) for stderr, and cannot be modified.

        --diff
            show diff between common outputs (find the best reference output
            by focusing on the largest nodeset and also the smallest command
            return code)

    File copying:

        -c, --copy
            copy local file or directory to remote nodes

        --rcopy
            copy file or directory from remote nodes

        --dest=DEST_PATH
            destination file or directory on the nodes (optional: use the
            first source directory path when not specified)

        -p  preserve modification times and modes

    Connection options:

        -f FANOUT, --fanout=FANOUT
            do not execute more than FANOUT commands at the same time; useful
            to limit resource usage. In tree mode, the same fanout value is
            used on the head node and on each gateway (the fanout value is
            propagated). That is, if the fanout is 16, each gateway will
            initiate up to 16 connections to their target nodes at the same
            time. The default fanout value is defined in clush.conf(5).

        -l USER, --user=USER
            execute remote command as user

        -o OPTIONS, --options=OPTIONS
            can be used to give ssh options, eg. -o "-p 2022 -i
            ~/.ssh/myidrsa"; these options are added first to ssh and
            override default ones

        -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
            limit time to connect to a node

        -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
            limit time for command to run on the node

        -R WORKER, --worker=WORKER
            worker name to use for connection (exec, ssh, rsh, pdsh); default
            is ssh

        --remote=REMOTE
            whether to enable remote execution: in tree mode, 'yes' forces
            connections to the leaf nodes for execution, 'no' establishes
            connections up to the leaf parent nodes for execution (default is
            'yes')

    For a short explanation of these options, see -h, --help.

EXIT STATUS
    By default, an exit status of zero indicates success of the clush command
    but gives no information about the remote commands' exit status. However,
    when the -S option is specified, the exit status of clush is the largest
    value of the remote commands' return codes.

    For failed remote commands whose exit status is non-zero, and unless the
    combination of options -qS is specified, clush displays messages similar
    to:

        clush: node[40-42]: exited with exit code 1

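    With -S, ordinary shell error handling applies to the whole parallel run.
    A minimal sketch (the clush invocation in the comment is illustrative with
    hypothetical node names; the function below merely stands in for a command
    exiting with a given code):

```shell
# With -S, clush exits with the largest remote return code, so a script
# can react to any node failing, e.g.:
#   clush -S -b -w node[1-8] /usr/bin/somecheck || echo "a node failed"
# The shell logic this enables is plain exit-status handling:
remote_result() { return "$1"; }   # stand-in for a clush -S run

if remote_result 0; then
    echo "all nodes succeeded"
fi
remote_result 3 || echo "largest remote return code was non-zero"
```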
EXAMPLES
    Remote parallel execution
        # clush -w node[3-5,62] uname -r
            Run command uname -r in parallel on nodes node3, node4, node5 and
            node62.

    Local parallel execution
        # clush -w node[1-3] --worker=exec ping -c1 %host
            Run a ping command locally, in parallel, for nodes node1, node2
            and node3. You may also use -R exec as the shorter and
            pdsh-compatible option.

    Display features
        # clush -w node[3-5,62] -b uname -r
            Run command uname -r on node[3-5,62] and display gathered output
            results (integrated dshbak-like).

        # clush -w node[3-5,62] -bL uname -r
            Line mode: run command uname -r on node[3-5,62] and display
            gathered output results without the default header block.

        # ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b xargs ls -l
            Search for files on node32 in /etc/yum.repos.d and use clush to
            list the matching ones on node[40-42], using -b to display
            gathered results.

        # clush -w node[3-5,62] --diff dmidecode -s bios-version
            Run this Linux command to get the BIOS version on node[3-5,62]
            and show version differences (if any).

    All nodes
        # clush -a uname -r
            Run command uname -r on all cluster nodes; see groups.conf(5) to
            set up all cluster nodes (all: field).

        # clush -a -x node[5,7] uname -r
            Run command uname -r on all cluster nodes except nodes node5 and
            node7.

        # clush -a --diff cat /some/file
            Run command cat /some/file on all cluster nodes and show
            differences (if any), line by line, between common outputs.

    Node groups
        # clush -w @oss modprobe lustre
            Run command modprobe lustre on nodes from the node group named
            oss; see groups.conf(5) to set up node groups (map: field).

        # clush -g oss modprobe lustre
            Same as the previous example but using -g to avoid the @ group
            prefix.

        # clush -w @mds,@oss modprobe lustre
            You may specify several node groups by separating them with
            commas (please see EXTENDED PATTERNS in nodeset(1) and also
            groups.conf(5) for more information).

    Copy files
        # clush -w node[3-5,62] --copy /etc/motd
            Copy local file /etc/motd to remote nodes node[3-5,62].

        # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
            Copy local file /etc/motd to remote nodes node[3-5,62] at path
            /tmp/motd2.

        # clush -w node[3-5,62] -c /usr/share/doc/clustershell
            Recursively copy local directory /usr/share/doc/clustershell to
            the same path on remote nodes node[3-5,62].

        # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
            Copy /etc/motd from remote nodes node[3-5,62] to the local /tmp
            directory, each file having its remote hostname appended, eg.
            /tmp/motd.node3.

FILES
    /etc/clustershell/clush.conf
        System-wide clush configuration file.

    $XDG_CONFIG_HOME/clustershell/clush.conf
        User configuration file for clush. If $XDG_CONFIG_HOME is not
        defined, $HOME/.config/clustershell/clush.conf is used instead.

    $HOME/.local/etc/clustershell/clush.conf
        Local user configuration file for clush (default installation for pip
        --user).

    ~/.clush.conf
        Deprecated per-user clush configuration file.

    ~/.clush_history
        File in which interactive clush command history is saved.

SEE ALSO
    clubak(1), cluset(1), nodeset(1), readline(3), clush.conf(5),
    groups.conf(5).

    http://clustershell.readthedocs.org/

BUG REPORTS
    Use the following URL to submit a bug report or feedback:
        https://github.com/cea-hpc/clustershell/issues

AUTHOR
    Stephane Thiell <sthiell@stanford.edu>

COPYING
    GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)

1.8.3                             2019-12-01                            CLUSH(1)