mdrun(1)                  GROMACS suite, VERSION 4.5                  mdrun(1)

NAME

       mdrun - performs a simulation, a normal mode analysis or an energy
       minimization

       VERSION 4.5

SYNOPSIS

       mdrun -s topol.tpr -o traj.trr -x traj.xtc -cpi state.cpt -cpo
       state.cpt -c confout.gro -e ener.edr -g md.log -dhdl dhdl.xvg -field
       field.xvg -table table.xvg -tablep tablep.xvg -tableb table.xvg -rerun
       rerun.xtc -tpi tpi.xvg -tpid tpidist.xvg -ei sam.edi -eo sam.edo -j
       wham.gct -jo bam.gct -ffout gct.xvg -devout deviatie.xvg -runav
       runaver.xvg -px pullx.xvg -pf pullf.xvg -mtx nm.mtx -dn dipole.ndx
       -[no]h -[no]version -nice int -deffnm string -xvg enum -[no]pd
       -dd vector -nt int -npme int -ddorder enum -[no]ddcheck -rdd real
       -rcon real -dlb enum -dds real -gcom int -[no]v -[no]compact
       -[no]seppot -pforce real -[no]reprod -cpt real -[no]cpnum -[no]append
       -maxh real -multi int -replex int -reseed int -[no]ionize

DESCRIPTION

       The mdrun program is the main computational chemistry engine within
       GROMACS. Obviously, it performs Molecular Dynamics simulations, but it
       can also perform Stochastic Dynamics, Energy Minimization, test
       particle insertion or (re)calculation of energies. Normal mode
       analysis is another option. In this case mdrun builds a Hessian matrix
       from a single conformation. For usual Normal Mode-like calculations,
       make sure that the structure provided is properly energy-minimized.
       The generated matrix can be diagonalized by g_nmeig.

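       As a sketch (assuming an .mdp file that selects normal mode analysis
       with  integrator = nm  and a pre-minimized structure; the file names
       nm.mdp, minimized.gro and topol.top are purely illustrative), such a
       calculation might look like:

              grompp -f nm.mdp -c minimized.gro -p topol.top -o topol.tpr
              mdrun -s topol.tpr -mtx nm.mtx
              g_nmeig -f nm.mtx -s topol.tpr

       Only the  -s and  -mtx options above belong to mdrun itself.
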
       The mdrun program reads the run input file ( -s) and distributes the
       topology over nodes if needed. mdrun produces at least four output
       files. A single log file ( -g) is written, unless the option  -seppot
       is used, in which case each node writes a log file. The trajectory
       file ( -o) contains coordinates, velocities and optionally forces.
       The structure file ( -c) contains the coordinates and velocities of
       the last step. The energy file ( -e) contains energies, temperature,
       pressure, etc.; many of these quantities are also printed in the log
       file. Optionally, coordinates can be written to a compressed
       trajectory file ( -x).

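       A minimal serial invocation relies on the default file names listed
       in the FILES section below; alternatively,  -deffnm gives all files a
       common base name (the name  md1 here is just an illustrative choice):

              mdrun -s topol.tpr
              mdrun -deffnm md1 -v

       The second form reads  md1.tpr and writes  md1.log,  md1.trr,
       md1.edr and so on instead of the defaults.
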
       The option  -dhdl is only used when free energy calculation is turned
       on.

       When mdrun is started using MPI with more than 1 node, parallelization
       is used. By default domain decomposition is used, unless the  -pd
       option is set, which selects particle decomposition.

       With domain decomposition, the spatial decomposition can be set with
       option  -dd. By default mdrun selects a good decomposition. The user
       only needs to change this when the system is very inhomogeneous.
       Dynamic load balancing is set with the option  -dlb, which can give a
       significant performance improvement, especially for inhomogeneous
       systems. The only disadvantage of dynamic load balancing is that runs
       are no longer binary reproducible, but in most cases this is not
       important. By default dynamic load balancing is turned on
       automatically when the measured performance loss due to load
       imbalance is 5% or more. At low parallelization these are the only
       important options for domain decomposition. At high parallelization
       the options in the next two sections could be important for
       increasing the performance.

       When PME is used with domain decomposition, separate nodes can be
       assigned to do only the PME mesh calculation; this is computationally
       more efficient starting at about 12 nodes. The number of PME nodes is
       set with option  -npme, which cannot be more than half of the nodes.
       By default mdrun makes a guess for the number of PME nodes when the
       number of nodes is larger than 11 or, performance-wise, not
       compatible with the PME grid x dimension, but the user should
       optimize  -npme. Performance statistics on this issue are written at
       the end of the log file. For good load balancing at high
       parallelization, the PME grid x and y dimensions should be divisible
       by the number of PME nodes (the simulation will run correctly also
       when this is not the case).

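       As an illustration only (the MPI launcher and the name of the
       MPI-enabled binary, here assumed to be  mpirun and  mdrun_mpi, depend
       on how GROMACS was built and installed), a 16-node run with 4
       dedicated PME nodes and dynamic load balancing could be started as:

              mpirun -np 16 mdrun_mpi -s topol.tpr -npme 4 -dlb auto

       The remaining 12 nodes then handle the particle-particle work; the
       load-balance statistics at the end of md.log indicate whether 4 was a
       good choice for  -npme.
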
       This section lists all options that affect the domain decomposition.

       Option  -rdd can be used to set the required maximum distance for
       inter charge-group bonded interactions. Communication for two-body
       bonded interactions below the non-bonded cut-off distance always
       comes for free with the non-bonded communication. Atoms beyond the
       non-bonded cut-off are only communicated when they have missing
       bonded interactions; this means that the extra cost is minor and
       nearly independent of the value of  -rdd. With dynamic load balancing
       option  -rdd also sets the lower limit for the domain decomposition
       cell sizes. By default  -rdd is determined by mdrun based on the
       initial coordinates. The chosen value will be a balance between
       interaction range and communication cost.

       When inter charge-group bonded interactions are beyond the bonded
       cut-off distance, mdrun terminates with an error message. For pair
       interactions and tabulated bonds that do not generate exclusions,
       this check can be turned off with the option  -noddcheck.

       When constraints are present, option  -rcon influences the cell size
       limit as well. Atoms connected by NC constraints, where NC is the
       LINCS order plus 1, should not be beyond the smallest cell size. An
       error message is generated when this happens, and the user should
       change the decomposition or decrease the LINCS order and increase the
       number of LINCS iterations. By default mdrun estimates the minimum
       cell size required for P-LINCS in a conservative fashion. For high
       parallelization it can be useful to set the distance required for
       P-LINCS with the option  -rcon.

       The  -dds option sets the minimum allowed x, y and/or z scaling of
       the cells with dynamic load balancing. mdrun will ensure that the
       cells can scale down by at least this factor. This option is used for
       the automated spatial decomposition (when not using  -dd) as well as
       for determining the number of grid pulses, which in turn sets the
       minimum allowed cell size. Under certain circumstances the value of
       -dds might need to be adjusted to account for high or low spatial
       inhomogeneity of the system.

       The option  -gcom can be used to only do global communication every n
       steps. This can improve performance for highly parallel simulations
       where this global communication step becomes the bottleneck. For a
       global thermostat and/or barostat the temperature and/or pressure
       will also only be updated every  -gcom steps. By default it is set to
       the minimum of nstcalcenergy and nstlist.

       With  -rerun an input trajectory can be given for which forces and
       energies will be (re)calculated. Neighbor searching will be performed
       for every frame, unless  nstlist is zero (see the  .mdp file).

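       A sketch of such a recalculation, assuming an existing trajectory
       old.xtc (a hypothetical name) and a run input file prepared with the
       desired, possibly modified, parameters:

              mdrun -s topol.tpr -rerun old.xtc -e rerun.edr -g rerun.log

       Only energies and forces are recomputed for each stored frame; no new
       dynamics are generated.
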
       ED (essential dynamics) sampling is switched on by using the  -ei
       flag followed by an  .edi file. The  .edi file can be produced using
       options in the essdyn menu of the WHAT IF program. mdrun produces a
       .edo file that contains projections of positions, velocities and
       forces onto selected eigenvectors.

       When user-defined potential functions have been selected in the  .mdp
       file, the  -table option is used to pass mdrun a formatted table with
       potential functions. The file is read from either the current
       directory or from the GMXLIB directory. A number of pre-formatted
       tables are provided in the GMXLIB directory for 6-8, 6-9, 6-10, 6-11
       and 6-12 Lennard-Jones potentials with normal Coulomb. When pair
       interactions are present, a separate table for pair interaction
       functions is read using the  -tablep option.

       When tabulated bonded functions are present in the topology,
       interaction functions are read using the  -tableb option. For each
       different tabulated interaction type the table file name is modified
       in a different way: before the file extension an underscore is
       appended, then a b for bonds, an a for angles or a d for dihedrals,
       and finally the table number of the interaction type.

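       For example, with the default base name  table.xvg, tabulated bond
       type 0 would be read from  table_b0.xvg, tabulated angle type 1 from
       table_a1.xvg and tabulated dihedral type 2 from  table_d2.xvg. The
       type numbers here are illustrative; they follow the table numbers
       used in the topology.
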
       The options  -px and  -pf are used for writing pull COM coordinates
       and forces when pulling is selected in the  .mdp file.

       With  -multi multiple systems are simulated in parallel. As many
       input files are required as the number of systems. The system number
       is appended to the run input and each output filename, for instance
       topol.tpr becomes topol0.tpr, topol1.tpr etc. The number of nodes per
       system is the total number of nodes divided by the number of systems.
       One use of this option is for NMR refinement: when distance or
       orientation restraints are present these can be ensemble averaged
       over all the systems.

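       As a sketch (again assuming an MPI launcher  mpirun and an
       MPI-enabled binary  mdrun_mpi), four systems prepared as  topol0.tpr
       through  topol3.tpr can be run on 8 nodes, 2 per system, with:

              mpirun -np 8 mdrun_mpi -s topol.tpr -multi 4

       The output files are numbered in the same way, e.g.  md0.log to
       md3.log.
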
       With  -replex replica exchange is attempted every given number of
       steps. The number of replicas is set with the  -multi option, see
       above. All run input files should use a different coupling
       temperature; the order of the files is not important. The random seed
       is set with  -reseed. The velocities are scaled and neighbor
       searching is performed after every exchange.

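       Building on the  -multi sketch above (launcher and binary name are
       again assumptions), a temperature replica exchange run that attempts
       exchanges every 1000 steps could be started as:

              mpirun -np 8 mdrun_mpi -s topol.tpr -multi 4 -replex 1000

       Each  topolN.tpr must have been prepared with a different reference
       temperature.
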
       Finally, some experimental algorithms can be tested when the
       appropriate options have been given. Currently under investigation
       are polarizability and X-Ray bombardments.

       The option  -pforce is useful when you suspect a simulation is
       crashing due to too large forces. With this option coordinates and
       forces of atoms with a force larger than a certain value will be
       printed to stderr.

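       For example, to have mdrun report every atom experiencing a force
       above 10000 kJ/mol nm (the threshold is an arbitrary illustration)
       and to keep that diagnostic output in its own file via a standard
       shell redirection of stderr:

              mdrun -s topol.tpr -pforce 10000 2> large_forces.log
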
       Checkpoints containing the complete state of the system are written
       at regular intervals (option  -cpt) to the file  -cpo, unless option
       -cpt is set to -1. The previous checkpoint is backed up to
       state_prev.cpt to make sure that a recent state of the system is
       always available, even when the simulation is terminated while
       writing a checkpoint. With  -cpnum all checkpoint files are kept and
       appended with the step number. A simulation can be continued by
       reading the full state from file with option  -cpi. This option is
       intelligent in the sense that if no checkpoint file is found, GROMACS
       just assumes a normal run and starts from the first step of the tpr
       file. By default the output will be appended to the existing output
       files. The checkpoint file contains checksums of all output files,
       such that you will never lose data when some output files are
       modified, corrupt or removed. There are three scenarios with  -cpi:

       * no files with matching names are present: new output files are
       written

       * all files are present with names and checksums matching those
       stored in the checkpoint file: files are appended

       * otherwise no files are modified and a fatal error is generated

       With  -noappend new output files are opened and the simulation part
       number is added to all output file names. Note that in all cases the
       checkpoint file itself is not renamed and will be overwritten, unless
       its name does not match the  -cpo option.

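       A typical continuation of an interrupted run, appending to the
       existing output files, and an alternative that starts a new numbered
       part instead, might look like (assuming the default checkpoint name
       state.cpt):

              mdrun -s topol.tpr -cpi state.cpt
              mdrun -s topol.tpr -cpi state.cpt -noappend

       The second form writes file names such as  traj.part0002.trr and
       md.part0002.log rather than appending; the exact part number depends
       on how many parts already exist.
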
       With checkpointing the output is appended to previously written
       output files, unless  -noappend is used or none of the previous
       output files are present (except for the checkpoint file). The
       integrity of the files to be appended is verified using checksums
       which are stored in the checkpoint file. This ensures that output can
       not be mixed up or corrupted due to file appending. When only some of
       the previous output files are present, a fatal error is generated and
       no old output files are modified and no new output files are opened.
       The result with appending will be the same as from a single run. The
       contents will be binary identical, unless you use a different number
       of nodes or dynamic load balancing or the FFT library uses
       optimizations through timing.

       With option  -maxh a simulation is terminated and a checkpoint file
       is written at the first neighbor search step where the run time
       exceeds  -maxh*0.99 hours.

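       This is convenient under batch queue systems. For a job with, say, a
       24 hour wall-clock limit (use whatever limit your queue imposes), one
       could run:

              mdrun -s topol.tpr -cpi state.cpt -maxh 24

       so that a checkpoint is written and the run stops cleanly before the
       queue kills it; resubmitting the same command continues the run.
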
       When mdrun receives a TERM signal, it will set nsteps to the current
       step plus one. When mdrun receives an INT signal (e.g. when ctrl+C is
       pressed), it will stop after the next neighbor search step (with
       nstlist=0 at the next step). In both cases all the usual output will
       be written to file. When running with MPI, a signal to one of the
       mdrun processes is sufficient; this signal should not be sent to
       mpirun or the mdrun process that is the parent of the others.

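       For example, to ask a running mdrun to stop gracefully at the next
       neighbor search step (the pid lookup shown is just one way to find
       the process):

              kill -INT $(pidof mdrun)

       or send TERM instead of INT to stop after the very next step.
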
       When mdrun is started with MPI, it does not run niced by default.

FILES

       -s topol.tpr Input
        Run input file: tpr tpb tpa

       -o traj.trr Output
        Full precision trajectory: trr trj cpt

       -x traj.xtc Output, Opt.
        Compressed trajectory (portable xdr format)

       -cpi state.cpt Input, Opt.
        Checkpoint file

       -cpo state.cpt Output, Opt.
        Checkpoint file

       -c confout.gro Output
        Structure file: gro g96 pdb etc.

       -e ener.edr Output
        Energy file

       -g md.log Output
        Log file

       -dhdl dhdl.xvg Output, Opt.
        xvgr/xmgr file

       -field field.xvg Output, Opt.
        xvgr/xmgr file

       -table table.xvg Input, Opt.
        xvgr/xmgr file

       -tablep tablep.xvg Input, Opt.
        xvgr/xmgr file

       -tableb table.xvg Input, Opt.
        xvgr/xmgr file

       -rerun rerun.xtc Input, Opt.
        Trajectory: xtc trr trj gro g96 pdb cpt

       -tpi tpi.xvg Output, Opt.
        xvgr/xmgr file

       -tpid tpidist.xvg Output, Opt.
        xvgr/xmgr file

       -ei sam.edi Input, Opt.
        ED sampling input

       -eo sam.edo Output, Opt.
        ED sampling output

       -j wham.gct Input, Opt.
        General coupling stuff

       -jo bam.gct Output, Opt.
        General coupling stuff

       -ffout gct.xvg Output, Opt.
        xvgr/xmgr file

       -devout deviatie.xvg Output, Opt.
        xvgr/xmgr file

       -runav runaver.xvg Output, Opt.
        xvgr/xmgr file

       -px pullx.xvg Output, Opt.
        xvgr/xmgr file

       -pf pullf.xvg Output, Opt.
        xvgr/xmgr file

       -mtx nm.mtx Output, Opt.
        Hessian matrix

       -dn dipole.ndx Output, Opt.
        Index file

OTHER OPTIONS

       -[no]h no
        Print help info and quit

       -[no]version no
        Print version info and quit

       -nice int 0
        Set the nicelevel

       -deffnm string
        Set the default filename for all file options

       -xvg enum xmgrace
        xvg plot formatting:  xmgrace,  xmgr or  none

       -[no]pd no
        Use particle decomposition

       -dd vector 0 0 0
        Domain decomposition grid, 0 is optimize

       -nt int 0
        Number of threads to start (0 is guess)

       -npme int -1
        Number of separate nodes to be used for PME, -1 is guess

       -ddorder enum interleave
        DD node order:  interleave,  pp_pme or  cartesian

       -[no]ddcheck yes
        Check for all bonded interactions with DD

       -rdd real 0
        The maximum distance for bonded interactions with DD (nm), 0 means
       determine from initial coordinates

       -rcon real 0
        Maximum distance for P-LINCS (nm), 0 is estimate

       -dlb enum auto
        Dynamic load balancing (with DD):  auto,  no or  yes

       -dds real 0.8
        Minimum allowed dlb scaling of the DD cell size

       -gcom int -1
        Global communication frequency

       -[no]v no
        Be loud and noisy

       -[no]compact yes
        Write a compact log file

       -[no]seppot no
        Write separate V and dVdl terms for each interaction type and node
       to the log file(s)

       -pforce real -1
        Print all forces larger than this (kJ/mol nm)

       -[no]reprod no
        Try to avoid optimizations that affect binary reproducibility

       -cpt real 15
        Checkpoint interval (minutes)

       -[no]cpnum no
        Keep and number checkpoint files

       -[no]append yes
        Append to previous output files when continuing from checkpoint
       instead of adding the simulation part number to all file names

       -maxh real -1
        Terminate after 0.99 times this time (hours)

       -multi int 0
        Do multiple simulations in parallel

       -replex int 0
        Attempt replica exchange every # steps

       -reseed int -1
        Seed for replica exchange, -1 is generate a seed

       -[no]ionize no
        Do a simulation including the effect of an X-Ray bombardment on your
       system

SEE ALSO

       gromacs(7)

       More information about GROMACS is available at
       <http://www.gromacs.org/>.

                                 Thu 26 Aug 2010                      mdrun(1)