slurmd(8)                        Slurm Daemon                        slurmd(8)

NAME

       slurmd - The compute node daemon for Slurm.

SYNOPSIS

       slurmd [OPTIONS...]

DESCRIPTION

       slurmd is the compute node daemon of Slurm. It monitors all tasks
       running on the compute node, accepts work (tasks), launches tasks,
       and kills running tasks upon request.

OPTIONS

       -b     Report node rebooted when daemon restarted. Used for testing
              purposes.

       -c     Clear system locks as needed. This may be required if slurmd
              terminated abnormally.

       -C     Print the actual hardware configuration (not the configuration
              from the slurm.conf file) and exit. The format of output is the
              same as used in slurm.conf to describe a node's configuration
              plus its uptime.
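
              For example, on a hypothetical 16-core node the output might
              resemble the following (exact fields and values depend on the
              hardware):

                     $ slurmd -C
                     NodeName=node01 CPUs=16 Boards=1 SocketsPerBoard=2 \
                     CoresPerSocket=4 ThreadsPerCore=2 RealMemory=64000
                     UpTime=1-02:34:56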

       --conf-server <host>[:<port>]
              Comma-separated list of controllers, the first being the
              primary slurmctld. A port can (optionally) be specified for
              each controller. These hosts are where the slurmd will fetch
              the configuration from when running in "configless" mode.
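
              For example, with a primary and a backup controller (hostnames
              are illustrative; 6817 is the default slurmctld port):

                     slurmd --conf-server ctl1:6817,ctl2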

       -d <file>
              Specify the fully qualified pathname to the slurmstepd program
              to be used for shepherding user job steps. This can be useful
              for testing purposes.
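
              For example (the path is illustrative):

                     slurmd -d /usr/local/sbin/slurmstepd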

       -D     Run slurmd in the foreground. Error and debug messages will be
              copied to stderr.

       -f <file>
              Read configuration from the specified file. See NOTES below.

       -F[feature]
              Start this node as a Dynamic Future node. It will try to match
              a node definition with a state of FUTURE, optionally using the
              specified feature to match the node definition.
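
              For example, to register against a FUTURE node definition with
              a matching feature (the feature name is illustrative):

                     slurmd -Fgpu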

       -G     Print Generic RESource (GRES) configuration (based upon
              slurm.conf GRES merged with gres.conf contents for this node)
              and exit.

       -h     Help; print a brief summary of command options.

       -L <file>
              Write log messages to the specified file.

       -M     Lock slurmd pages into system memory using mlockall(2) to
              disable paging of the slurmd process. This may help in cases
              where nodes are marked DOWN during periods of heavy swap
              activity. If the mlockall(2) system call is not available, an
              error will be printed to the log and slurmd will continue as
              normal.

              It is suggested to set LaunchParameters=slurmstepd_memlock in
              slurm.conf(5) when setting -M.
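
              The matching slurm.conf(5) line would be:

                     LaunchParameters=slurmstepd_memlock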

       -n <value>
              Set the daemon's nice value to the specified value, typically
              a negative number. Also note the PropagatePrioProcess
              configuration parameter.
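
              For example, to run the daemon at an elevated priority (the
              value is illustrative):

                     slurmd -n -10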

       -N <nodename>
              Run the daemon with the given nodename. Used to emulate a
              larger system with more than one slurmd daemon per node.
              Requires that Slurm be built using the
              --enable-multiple-slurmd configure option.
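
              For example, to emulate two nodes on one host (nodenames are
              illustrative):

                     slurmd -N node01
                     slurmd -N node02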

       -s     Change working directory of slurmd to SlurmdLogFile path if
              possible, or to SlurmdSpoolDir otherwise. If both of them fail
              it will fall back to /var/tmp.

       -v     Verbose operation. Multiple -v's increase verbosity.

       -V, --version
              Print version information and exit.

ENVIRONMENT VARIABLES

       The following environment variables can be used to override settings
       compiled into slurmd.

       SLURM_CONF          The location of the Slurm configuration file.
                           This is overridden by explicitly naming a
                           configuration file on the command line.
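
       For example (the path is illustrative):

              SLURM_CONF=/opt/slurm/etc/slurm.conf slurmd -D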

SIGNALS

       SIGTERM SIGINT
              slurmd will shut down cleanly, waiting for in-progress rollups
              to finish.

       SIGHUP Reloads the slurm configuration files, similar to 'scontrol
              reconfigure'.
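
              For example, one way to deliver the signal (assuming pkill(1)
              is available):

                     pkill -HUP -x slurmd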

       SIGUSR2
              Reread the log level from the configs, and then reopen the log
              file. This should be used when setting up logrotate(8).
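
              A minimal logrotate(8) fragment might look like the following
              sketch (the log file path is an assumption; match it to your
              SlurmdLogFile setting):

                     /var/log/slurm/slurmd.log {
                         weekly
                         missingok
                         notifempty
                         postrotate
                             pkill -USR2 -x slurmd
                         endscript
                     }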

       SIGPIPE
              This signal is explicitly ignored.

CORE FILE LOCATION

       If slurmd is started with the -D option then the core file will be
       written to the current working directory. Otherwise, if SlurmdLogFile
       is a fully qualified path name (starting with a slash), the core file
       will be written to the same directory as the log file. Otherwise, the
       core file will be written to the SlurmdSpoolDir directory, or
       "/var/tmp/" as a last resort. If none of the above directories can be
       written, no core file will be produced.

NOTES

       It may be useful to experiment with different slurmd-specific
       configuration parameters (e.g. timeouts) using a distinct
       configuration file. However, this special configuration file will not
       be used by the slurmctld daemon or the Slurm programs, unless you
       specifically tell each of them to use it. If you want to change
       communication ports, the location of the temporary file system, or
       other parameters used by other Slurm components, change the common
       configuration file, slurm.conf.
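
       For example, one way to test with a separate file (the path is
       illustrative) is to combine -f with -D:

              slurmd -D -f /etc/slurm/slurm-test.conf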

       If you are using configless mode with a login node that runs a lot of
       client commands, you may consider running slurmd on that machine so
       it can manage a cached version of the configuration files. Otherwise,
       each client command will use the DNS record to contact the controller
       and get the configuration information, which could place additional
       load on the controller.

COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2022 SchedMD LLC. Produced at Lawrence Livermore
       National Laboratory (cf, DISCLAIMER).

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at
       your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
       for more details.

FILES

       /etc/slurm.conf

SEE ALSO

       slurm.conf(5), slurmctld(8)

May 2022                         Slurm Daemon                        slurmd(8)