MPIEXEC(1)                       LAM COMMANDS                       MPIEXEC(1)

NAME

       mpiexec - Run MPI programs on LAM nodes.

SYNTAX

       mpiexec [global_args] local_args1 [: local_args2 [...]]

       mpiexec [global_args] -configfile <filename>

OPTIONS

       Global  arguments  apply  to  all  commands  that  will be launched by
       mpiexec.  They come at the beginning of the command line.

       -boot     Boot the LAM run-time  environment  before  running  the  MPI
                 program.   If  -machinefile is not specified, use the default
                 boot schema.  When the MPI processes finish, the LAM run-time
                 environment will be shut down.

       -boot-args <args>
                 Pass  arguments  to the back-end lamboot command when booting
                 the LAM run-time environment.  Implies -boot.

       -d        Enable lots of debugging output.  Implies -v.

       -machinefile <hostfile>
                 Enable "one shot"  MPI  executions;  boot  the  LAM  run-time
                 environment with the boot schema specified by <hostfile> (see
                 bhost(5)), run the MPI program, and then shut  down  the  LAM
                 run-time environment.  Implies -boot.
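
       As a rough illustration, a minimal <hostfile> boot schema might look
       like the following (the hostnames and CPU counts here are placeholder
       values; see bhost(5) for the authoritative syntax):

```
# Hypothetical boot schema for mpiexec -machinefile
node1.example.com
node2.example.com cpu=2
node3.example.com cpu=4
```

       Each line names a host to boot; the optional cpu=N key tells LAM how
       many CPUs it may schedule on that host.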

       -prefix <lam/install/path>
                 Use  the  LAM installation specified in </lam/install/path/>.
                 Not compatible with LAM/MPI versions prior to 7.1.

       -ssi <key> <value>
                 Set the SSI parameter <key> to the value <value>.

       -tv       Launch the MPI processes under the TotalView debugger.

       -v        Be verbose.

       One or more sets of local arguments must  be  specified  (or  a  config
       file;  see  below).   Local  arguments  essentially  include everything
       allowed in an appschema(5) as well as the following  options  specified
       by  the  MPI-2  standard  (note  that  the options listed below must be
       specified before appschema arguments):

       -n <numprocs>
                 Number of copies of the process to start.

       -host <hostname>
                 Specify the hostname to start the  MPI  process  on.   The
                 hostname must be resolvable by the lamnodes command after the
                 LAM run-time environment is booted (see lamnodes(1)).

       -arch <architecture>
                 Specify  the  architecture  to  start  the  MPI  process  on.
                 mpiexec  essentially  uses  the  provided <architecture> as a
                 pattern match against the  output  of  the  GNU  config.guess
                 utility on each machine in the LAM run-time environment.  Any
                 subset will match.  See EXAMPLES, below.
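
       The "any subset" rule can be sketched in plain shell.  This only
       illustrates the substring match; the guess value below is a hardcoded
       sample, not output queried from a real node:

```shell
#!/bin/sh
# Illustration of mpiexec's -arch matching rule: the user-supplied
# <architecture> string matches when it appears anywhere inside the
# config.guess output for a node.  "guess" is a hardcoded sample value.
guess="sparc-sun-solaris2.8"

# matches OUTPUT PATTERN: succeed when PATTERN is a substring of OUTPUT.
matches() {
  case "$1" in
    *"$2"*) return 0 ;;
    *)      return 1 ;;
  esac
}

matches "$guess" "solaris2.8" && echo "solaris2.8: match"
matches "$guess" "sparc"      && echo "sparc: match"
matches "$guess" "linux"      || echo "linux: no match"
```

       Running this reports that "solaris2.8" and "sparc" match while "linux"
       does not, mirroring the EXAMPLES section below.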

       -wdir <directory>
                 Set the working directory of the executable.

       -soft     Not yet supported.

       -path     Not yet supported.

       -file     Not yet supported.

       <other_arguments>
                 When mpiexec first encounters an  argument  that  it  doesn't
                 recognize  (such  as  an  appschema(5)  argument,  or the MPI
                 executable name), the remainder  of  the  arguments  will  be
                 passed  back  to  mpirun  to  actually start the process.  As
                 such, all of mpiexec's arguments  that  are  described  above
                 must   come   before   appschema  arguments  and/or  the  MPI
                 executable name.  Similarly,  all  arguments  after  the  MPI
                 executable  name will be transparently passed as command line
                 arguments to the MPI process and  will  be  effectively
                 ignored by mpirun.
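
       The ordering rule above can be sketched with a toy shell function.
       split_args and its option list are hypothetical, for illustration
       only; they are not mpiexec internals:

```shell
#!/bin/sh
# Toy model of the ordering rule: options recognized by mpiexec are
# consumed from the front of the command line; everything from the first
# unrecognized token onward is handed to mpirun untouched.
split_args() {
  own=""
  while [ $# -gt 0 ]; do
    case "$1" in
      # A (hypothetical) subset of the options mpiexec recognizes.
      -n|-host|-arch|-wdir) own="$own $1 $2"; shift 2 ;;
      *) break ;;
    esac
  done
  echo "consumed by mpiexec:$own"
  echo "passed to mpirun: $*"
}

split_args -n 4 -host node1 my_mpi_program arg1 arg2
```

       Here "-n 4 -host node1" is consumed, while "my_mpi_program arg1 arg2"
       is passed through, so arg1 and arg2 reach the MPI processes unchanged.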
86

DESCRIPTION

       mpiexec  is  loosely  defined  in  the  Miscellany chapter of the MPI-2
       standard (see http://www.mpi-forum.org/).  It is meant to be a portable
       mechanism  for  starting  MPI processes.  The MPI-2 standard recommends
       several command line options, but does not mandate any.  LAM's  mpiexec
       currently supports several of these options, but not all.

       LAM's  mpiexec  is  actually  a  Perl  script  that is a wrapper around
       several underlying LAM commands,  most  notably  lamboot,  mpirun,  and
       lamhalt.   As such, the functionality provided by mpiexec can always be
       performed manually.  Unless otherwise specified in arguments  that  are
       passed  back  to  mpirun,  mpiexec  will  use the per-CPU scheduling as
       described in mpirun(1) (i.e., the "cX" and "C" notation).

       mpiexec can either use an already-existing LAM universe (i.e., a booted
       LAM  run-time environment), similar to mpirun, or can be used for "one-
       shot" MPI executions where it boots the LAM run-time environment,  runs
       the   MPI   executable(s),   and  then  shuts  down  the  LAM  run-time
       environment.

       mpiexec can also be used to launch MPMD MPI jobs from the command line.
       mpirun  also supports launching MPMD MPI jobs, but the user must make a
       text file appschema(5) first.

       Perhaps one of the most useful features is the  command-line  ability
       to launch different executables on different architectures using the
       -arch flag (see EXAMPLES, below).  Essentially, the string argument
       that is given to -arch is used as a pattern match against the output
       of the GNU config.guess utility on each node.  If the user-provided
       <architecture> string matches any subset of the output of
       config.guess, it is ruled a match.  Wildcards are not possible.  The
       GNU config.guess utility is available both in the LAM/MPI source code
       distribution (in the config subdirectory) and at
       ftp://ftp.gnu.org/gnu/config/config.guess.

       Some sample outputs from config.guess include:

       sparc-sun-solaris2.8
                 Solaris 2.8 running on a SPARC platform.

       i686-pc-linux-gnu
                 Linux running on an i686 architecture.

       mips-sgi-irix6.5
                 IRIX 6.5 running on an SGI/MIPS architecture.

       You might want to run the laminfo command on your  available  platforms
       to  see  what  string  config.guess  reported.  See laminfo(1) for more
       details (e.g., the -arch flag to laminfo).

   Configfile option
       It  is  possible  to  specify  any  set  of  local  parameters   in   a
       configuration   file   rather  than  on  the  command  line  using  the
       -configfile option.  This option is typically used when the  number  of
       command  line  options  is too large for some shells, or when automated
       processes generate the command line arguments and  it  is  simply  more
       convenient to put them in a file for later processing by mpiexec.

       The config file can contain both comments and one or more sets of local
       arguments.  Lines beginning with "#" are considered  comments  and  are
       ignored.   Other lines are considered to be one or more groups of local
       arguments.  Each group must be separated by either a newline or a colon
       (":").  For example:

         # Sample mpiexec config file
         # Launch foo on two nodes
         -host node1.example.com foo : -host node2.example.com foo
         # Launch two copies of bar on a third node
         -host node3.example.com -np 2 bar
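
       The comment and separator rules can be illustrated with a short shell
       pipeline.  This is a sketch of the parsing rules only, not of how
       mpiexec actually reads the file; /tmp/mpiexec.conf is a scratch path:

```shell
#!/bin/sh
# Write the sample config file, then list its argument groups: "#" lines
# are comments, and the remaining lines split into groups on ":" and
# newlines.  The pipeline illustrates the rules; it is not mpiexec's
# actual parser.
cat > /tmp/mpiexec.conf <<'EOF'
# Sample mpiexec config file
# Launch foo on two nodes
-host node1.example.com foo : -host node2.example.com foo
# Launch two copies of bar on a third node
-host node3.example.com -np 2 bar
EOF

# Drop comments, split colon-separated groups onto their own lines,
# and trim surrounding whitespace.
grep -v '^#' /tmp/mpiexec.conf | tr ':' '\n' | sed 's/^ *//; s/ *$//'
```

       This prints three argument groups, one per line, matching the three
       process groups that mpiexec would launch from this file.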
155

ERRORS

       In  the  event of an error, mpiexec will do its best to shut everything
       down and return to the state before it was executed.  For  example,  if
       mpiexec  was  used  to boot a LAM run-time environment, mpiexec will do
       its best to take down whatever parts of the run-time environment  were
       successfully booted (to include invoking lamhalt and/or lamwipe).
162

EXAMPLES

       The  following  are some examples of how to use mpiexec.  Note that all
       examples assume  the  CPU-based  scheduling  (which  does  NOT  map  to
       physical CPUs) as described in mpirun(1).

       mpiexec -n 4 my_mpi_program
                 Launch  4 copies of my_mpi_program in an already-existing LAM
                 universe.

       mpiexec -n 4 my_mpi_program arg1 arg2
                 Similar to the previous example, but pass "arg1"  and  "arg2"
                 as command line arguments to each copy of my_mpi_program.

       mpiexec -ssi rpi gm -n 4 my_mpi_program
                 Similar  to the previous example, but pass "-ssi rpi gm" back
                 to mpirun to tell the MPI processes to use the  Myrinet  (gm)
                 RPI for MPI message passing.

       mpiexec -n 4 program1 : -n 4 program2
                 Launch  4  copies  of program1 and 4 copies of program2 in an
                 already-existing LAM universe.   All  8  resulting  processes
                 will share a common MPI_COMM_WORLD.

       mpiexec -machinefile hostfile -n 4 my_mpi_program
                 Boot  the  LAM  run-time environment with the nodes listed in
                 the hostfile, run 4 copies of my_mpi_program in the resulting
                 LAM universe, and then shut down the LAM universe.

       mpiexec -machinefile hostfile my_mpi_program
                 Similar  to  above,  but  run my_mpi_program on all available
                 CPUs in the LAM universe.

       mpiexec -arch solaris2.8 sol_program : -arch linux linux_program
                 Run as many copies  of  sol_program  as  there  are  CPUs  on
                 Solaris  machines  in  the  current LAM universe, and as many
                 copies of linux_program as there are CPUs on  Linux  machines
                 in  the  current  LAM universe.  All resulting processes will
                 share a common MPI_COMM_WORLD.

       mpiexec -arch solaris2.8 sol2.8_program : -arch solaris2.9 sol2.9_program
                 Similar to the  above  example,  except  distinguish  between
                 Solaris  2.8  and  2.9  (since they may have different shared
                 libraries, etc.).
206

SEE ALSO

       appschema(5), bhost(5), lamboot(1), lamexec(1), mpirun(1)



LAM 7.1.2                         March, 2006                       MPIEXEC(1)