slurmd(8)                        Slurm Daemon                        slurmd(8)

NAME

       slurmd - The compute node daemon for Slurm.

SYNOPSIS

       slurmd [OPTIONS...]

DESCRIPTION

       slurmd is the compute node daemon of Slurm. It monitors all tasks
       running on the compute node, accepts work (tasks), launches tasks,
       and kills running tasks upon request.

OPTIONS

       -b     Report node rebooted when daemon restarted. Used for testing
              purposes.

       -c     Clear system locks as needed. This may be required if slurmd
              terminated abnormally.

       -C     Print actual hardware configuration and exit. The format of
              output is the same as used in slurm.conf to describe a node's
              configuration plus its uptime.

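              For example, a typical invocation and its output might look
              like this (node name and hardware values are illustrative
              only):

                     $ slurmd -C
                     NodeName=node01 CPUs=16 Boards=1 SocketsPerBoard=2 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=64000
                     UpTime=12-03:45:21
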
       --conf-server <host>[:<port>]
              Comma-separated list of controllers, the first being the
              primary slurmctld. A port can (optionally) be specified for
              each controller. These hosts are where the slurmd will fetch
              the configuration from when running in "configless" mode.

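              For example, to start in configless mode against a primary
              and a backup controller (hostnames are illustrative; 6817 is
              the default slurmctld port):

                     $ slurmd --conf-server ctl1:6817,ctl2
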
       -d <file>
              Specify the fully qualified pathname to the slurmstepd program
              to be used for shepherding user job steps. This can be useful
              for testing purposes.

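              For example, to test a locally built slurmstepd (the path is
              illustrative only):

                     $ slurmd -d /usr/local/sbin/slurmstepd
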
       -D     Run slurmd in the foreground. Error and debug messages will
              be copied to stderr.

       -f <file>
              Read configuration from the specified file. See NOTES below.

       -F[feature]
              Start this node as a Dynamic Future node. It will try to match
              a node definition with a state of FUTURE, optionally using the
              specified feature to match the node definition.

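              For example, to match a FUTURE node definition carrying the
              feature "gpu" (the feature name is illustrative; the optional
              argument is appended directly to the option):

                     $ slurmd -Fgpu
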
       -G     Print Generic RESource (GRES) configuration (based upon
              slurm.conf GRES merged with gres.conf contents for this node)
              and exit.

       -h     Help; print a brief summary of command options.

       -L <file>
              Write log messages to the specified file.

       -M     Lock slurmd pages into system memory using mlockall(2) to
              disable paging of the slurmd process. This may help in cases
              where nodes are marked DOWN during periods of heavy swap
              activity. If the mlockall(2) system call is not available, an
              error will be printed to the log and slurmd will continue as
              normal.

              It is suggested to set LaunchParameters=slurmstepd_memlock in
              slurm.conf(5) when setting -M.

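              For example, to keep both slurmd and the slurmstepd processes
              it spawns locked in memory, add the suggested line to
              slurm.conf and start the daemon with -M:

                     LaunchParameters=slurmstepd_memlock

                     $ slurmd -M
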
       -n <value>
              Set the daemon's nice value to the specified value, typically
              a negative number. Also note the PropagatePrioProcess
              configuration parameter.

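              For example, to raise the daemon's scheduling priority (the
              value is illustrative only):

                     $ slurmd -n -10
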
       -N <nodename>
              Run the daemon with the given nodename. Used to emulate a
              larger system with more than one slurmd daemon per node.
              Requires that Slurm be built using the
              --enable-multiple-slurmd configure option.

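              For example, on a build configured with
              --enable-multiple-slurmd, two emulated nodes might be started
              on one machine (node names are illustrative and must match
              node definitions in slurm.conf):

                     $ slurmd -N node01
                     $ slurmd -N node02
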
       -s     Change working directory of slurmd to SlurmdLogFile path if
              possible, or to SlurmdSpoolDir otherwise. If both of them
              fail it will fall back to /var/tmp.

       -v     Verbose operation. Multiple -v's increase verbosity.

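              For example, a common way to debug a node is to run the
              daemon in the foreground with increased verbosity:

                     $ slurmd -D -vvv
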
       -V, --version
              Print version information and exit.

ENVIRONMENT VARIABLES

       The following environment variables can be used to override settings
       compiled into slurmd.

       SLURM_CONF          The location of the Slurm configuration file.
                           This is overridden by explicitly naming a
                           configuration file on the command line.

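       For example, to start the daemon with a non-default configuration
       file location (the path is illustrative only):

              $ SLURM_CONF=/etc/slurm-test/slurm.conf slurmd -D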

SIGNALS

       SIGTERM SIGINT
              slurmd will shut down cleanly, waiting for in-progress rollups
              to finish.

       SIGHUP Reloads the slurm configuration files, similar to 'scontrol
              reconfigure'.

       SIGUSR2
              Reread the log level from the configs, and then reopen the log
              file. This should be used when setting up logrotate(8).

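              For example, a minimal logrotate(8) stanza could rotate the
              log and then deliver SIGUSR2 (the log path is illustrative
              only):

                     /var/log/slurmd.log {
                         weekly
                         missingok
                         postrotate
                             pkill -USR2 -x slurmd
                         endscript
                     }
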
       SIGPIPE
              This signal is explicitly ignored.

CORE FILE LOCATION

       If slurmd is started with the -D option then the core file will be
       written to the current working directory. Otherwise, if SlurmdLogFile
       is a fully qualified path name (starting with a slash), the core file
       will be written to the same directory as the log file. Otherwise the
       core file will be written to the SlurmdSpoolDir directory, or
       "/var/tmp/" as a last resort. If none of the above directories can be
       written, no core file will be produced.

NOTES

       It may be useful to experiment with different slurmd-specific
       configuration parameters (e.g. timeouts) using a distinct
       configuration file. However, this special configuration file will
       not be used by the slurmctld daemon or the Slurm programs, unless
       you specifically tell each of them to use it. To change
       communication ports, the location of the temporary file system, or
       other parameters used by other Slurm components, change the common
       configuration file, slurm.conf.

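       For example, a copy of slurm.conf with modified timeouts could be
       tried on a single node without touching the shared configuration
       (the path is illustrative only):

              $ slurmd -D -f /etc/slurm/slurm.conf.test
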
       If you are using configless mode with a login node that runs a lot
       of client commands, you may consider running slurmd on that machine
       so it can manage a cached version of the configuration files.
       Otherwise, each client command will use the DNS record to contact
       the controller and get the configuration information, which could
       place additional load on the controller.

COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2021 SchedMD LLC. Produced at Lawrence Livermore
       National Laboratory (cf, DISCLAIMER).

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
       or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
       License for more details.

FILES

       /etc/slurm.conf

SEE ALSO

       slurm.conf(5), slurmctld(8)

June 2021                        Slurm Daemon                        slurmd(8)