CLUSH(1)                   ClusterShell User Manual                   CLUSH(1)

NAME

       clush - execute shell commands on a cluster

SYNOPSIS

       clush -a | -g group | -w nodes [OPTIONS]

       clush -a | -g group | -w nodes [OPTIONS] command

       clush -a | -g group | -w nodes [OPTIONS] --copy file | dir [ file |
       dir ...] [ --dest dest_path ]

       clush -a | -g group | -w nodes [OPTIONS] --rcopy file | dir [ file |
       dir ...] [ --dest dest_path ]

DESCRIPTION

       clush is a program for executing commands in parallel on a cluster
       and for gathering their results. clush executes commands interactively
       or can be used within shell scripts and other applications. It is a
       partial front-end to the ClusterShell library that provides a light,
       unified and robust command execution framework. clush currently uses
       the Ssh worker of ClusterShell, which only requires ssh(1) (the
       OpenSSH SSH client).

INVOCATION

       clush can be started non-interactively to run a shell command, or can
       be invoked as an interactive shell. To start an interactive session,
       invoke clush without providing a command.

       Non-interactive mode
              When clush is started non-interactively, the command is
              executed on the specified remote hosts in parallel. If option
              -b or --dshbak is specified, clush waits for command completion
              and then displays the gathered output results.

              The -w option allows you to specify remote hosts using the
              ClusterShell NodeSet syntax, including the @group special
              syntax for node groups and the Extended Patterns syntax, which
              lets you benefit from basic NodeSet arithmetic (like
              @Agroup\&@Bgroup). See EXTENDED PATTERNS in nodeset(1) and
              also groups.conf(5) for more information.

              Unless option --nostdin is specified, clush detects when its
              standard input is connected to a terminal (as determined by
              isatty(3)). If it is connected to a terminal, clush listens to
              standard input while commands are running, waiting for an
              Enter key press; doing so displays the status of the current
              nodes. If standard input is not connected to a terminal, and
              unless option --nostdin is specified, clush binds the standard
              input of the remote commands to its own standard input,
              allowing scripting methods like:

              # echo foo | clush -w node[40-42] -b cat
              ---------------
              node[40-42]
              ---------------
              foo

              Please see other examples in the EXAMPLES section below.

       Interactive session
              If a command is not specified and its standard input is
              connected to a terminal, clush runs interactively. In this
              mode, clush uses the GNU readline library to read command
              lines. Readline provides commands for searching through the
              command history for lines containing a specified string. For
              instance, type Control-R to search the history for the next
              entry matching the search string typed so far. clush also
              recognizes special single-character prefixes that allow the
              user to see and modify the current nodeset (the nodes where
              the commands are executed).

              Single-character interactive commands are:

                     clush> ?
                            show current nodeset

                     clush> =<NODESET>
                            set current nodeset

                     clush> +<NODESET>
                            add nodes to current nodeset

                     clush> -<NODESET>
                            remove nodes from current nodeset

                     clush> !COMMAND
                            execute COMMAND on the local system

                     clush> =
                            toggle the output format (gathered or standard
                            mode)

              To leave an interactive session, type quit or press Control-D.

       File copying mode ( --copy )
              When clush is started with the -c or --copy option, it
              attempts to copy the specified files and/or directories to the
              provided target cluster nodes. If the --dest option is
              specified, the copied files are placed there.

       Reverse file copying mode ( --rcopy )
              When clush is started with the --rcopy option, it attempts to
              retrieve the specified files and/or directories from the
              provided cluster nodes. If the --dest option is specified, it
              must be a directory path where the files will be stored with
              their hostname appended. If the destination path is not
              specified, the directory of the first file or dir argument is
              used as the local destination.
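As an illustrative sketch (not part of clush itself; node names node3 and node4 are hypothetical), the --rcopy naming scheme can be mimicked locally: each retrieved file lands under the destination directory with the remote hostname appended to its basename.

```shell
# Sketch only: reproduce the file layout that
#   clush -w node[3-4] --rcopy /etc/motd --dest "$dest"
# would leave behind, without contacting any node.
dest=$(mktemp -d)
for node in node3 node4; do
    # clush would fetch /etc/motd from each node over ssh
    echo "motd from $node" > "$dest/motd.$node"
done
ls "$dest"
```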

OPTIONS

       --version
              show clush version number and exit

       -s GROUPSOURCE, --groupsource=GROUPSOURCE
              optional groups.conf(5) group source to use

       --nostdin
              do not watch for possible input from stdin

       Selecting target nodes:

              -w NODES
                     nodes where to run the command

              -x EXCLUDE
                     exclude nodes from the node list

              -a, --all
                     run command on all nodes

              -g GROUP, --group=GROUP
                     run command on a group of nodes

              -X EXGROUP
                     exclude nodes from this group

       Output behaviour:

              -q, --quiet
                     be quiet, print essential output only

              -v, --verbose
                     be verbose, print informative messages

              -d, --debug
                     output more messages for debugging purposes

              -G, --groupbase
                     do not display group source prefix

              -L     disable header block and order output by nodes;
                     additionally, when used in conjunction with -b/-B, it
                     enables "live gathering" of results in line mode, so
                     that the next line is displayed as soon as possible
                     (e.g. when all nodes have sent that line)

              -N     disable labeling of command line

              -b, --dshbak
                     display gathered results in a dshbak-like way

              -B     like -b but including standard error

              -r, --regroup
                     fold nodeset using node groups

              -S     return the largest of the command return codes

              --color=WHENCOLOR
                     whether to use ANSI colors to surround node or nodeset
                     prefix/header with escape sequences to display them in
                     color on the terminal. WHENCOLOR is never, always or
                     auto (which uses color if standard output/error refers
                     to a terminal). Colors are set to ESC[34m (blue
                     foreground text) for stdout and ESC[31m (red foreground
                     text) for stderr, and cannot be modified.

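The color codes mentioned above are standard ANSI SGR escape sequences. As a terminal-dependent illustration (node1 is a hypothetical node name), the following prints a node label the way clush colors stdout and stderr prefixes:

```shell
# ESC[34m = blue foreground (stdout prefix), ESC[31m = red foreground
# (stderr prefix), ESC[0m resets attributes.
printf '\033[34mnode1:\033[0m stdout line\n'
printf '\033[31mnode1:\033[0m stderr line\n'
```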
       File copying:

              -c, --copy
                     copy local file or directory to remote nodes

              --rcopy
                     copy file or directory from remote nodes

              --dest=DEST_PATH
                     destination file or directory on the nodes (optional:
                     use the first source directory path when not specified)

              -p     preserve modification times and modes

       Ssh options:

              -f FANOUT, --fanout=FANOUT
                     use a specified fanout

              -l USER, --user=USER
                     execute remote command as user

              -o OPTIONS, --options=OPTIONS
                     can be used to give ssh options, e.g. -o "-oPort=2022"

              -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
                     limit time to connect to a node

              -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
                     limit time for command to run on the node

       For a short explanation of these options, see -h, --help.

EXIT STATUS

       By default, an exit status of zero indicates success of the clush
       command but gives no information about the remote commands' exit
       status. However, when the -S option is specified, the exit status of
       clush is the largest value of the remote commands' return codes.

       For failed remote commands whose exit status is non-zero, and unless
       the combination of options -qS is specified, clush displays messages
       similar to:

       clush: node[40-42]: exited with exit code 1
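The -S semantics can be pictured with a small local sketch (illustrative only; clush performs this aggregation internally over the remote commands' exit codes):

```shell
# Pretend these values were collected from three remote commands;
# with -S, clush exits with the largest of them.
max=0
for rc in 0 1 0; do
    [ "$rc" -gt "$max" ] && max=$rc
done
echo "clush would exit with status $max"
```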

EXAMPLES

       # clush -w node[3-5,62] uname -r
              Run command uname -r on nodes node3, node4, node5 and node62.

       # clush -w node[3-5,62] -b uname -r
              Run command uname -r on node[3-5,62] and display gathered
              output results (dshbak-like).

       # ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b
       xargs ls -l
              Search for files on node32 in /etc/yum.repos.d, use clush to
              list the matching ones on node[40-42], and use -b to display
              gathered results.

   All/NodeGroups examples
       # clush -a uname -r
              Run command uname -r on all cluster nodes; see clush.conf(5)
              to set up all cluster nodes (nodes_all: field).

       # clush -a -x node[5,7] uname -r
              Run command uname -r on all cluster nodes except node5 and
              node7.

       # clush -g oss modprobe lustre
              Run command modprobe lustre on nodes from the node group named
              oss; see clush.conf(5) to set up node groups (nodes_group:
              field).

   Copy files
       # clush -w node[3-5,62] --copy /etc/motd
              Copy local file /etc/motd to remote nodes node[3-5,62].

       # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
              Copy local file /etc/motd to remote nodes node[3-5,62] at path
              /tmp/motd2.

       # clush -w node[3-5,62] -c /usr/share/doc/clustershell
              Recursively copy local directory /usr/share/doc/clustershell
              to the same path on remote nodes node[3-5,62].

       # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
              Copy /etc/motd from remote nodes node[3-5,62] to the local
              /tmp directory, each file having its remote hostname appended,
              e.g. /tmp/motd.node3.

FILES

       /etc/clustershell/clush.conf
              System-wide clush configuration file.

       ~/.clush.conf
              Per-user clush configuration file.

       ~/.clush_history
              File in which interactive clush command history is saved.

SEE ALSO

       clubak(1), nodeset(1), readline(3), clush.conf(5), groups.conf(5).

BUG REPORTS

       Use the following URL to submit a bug report or feedback:
              http://sourceforge.net/apps/trac/clustershell/report

AUTHOR

       Stephane Thiell, CEA DAM <stephane.thiell@cea.fr>


COPYRIGHT

       CeCILL-C V1

1.5.1                             2011-06-09                          CLUSH(1)