job_container.conf(5)      Slurm Configuration File      job_container.conf(5)

NAME

       job_container.conf - Slurm configuration file for job_container/tmpfs
       plugin

DESCRIPTION

       job_container.conf is an ASCII file which defines parameters used by
       Slurm's job_container/tmpfs plugin. The plugin reads the
       job_container.conf file to find out the configuration settings. Based
       on these settings it constructs a private mount namespace for the job
       and mounts /tmp and /dev/shm inside it. This gives the job a private
       view of /tmp and /dev/shm. /tmp is mounted inside a location that is
       specified as 'BasePath' in the job_container.conf file. To make use
       of this plugin, 'PrologFlags=Contain' must be present in your
       slurm.conf file.

       The file will always be located in the same directory as slurm.conf.

       If using the job_container.conf file to define a namespace available
       to nodes, the first parameter on the line should be NodeName. If
       configuring a namespace without specifying nodes, the first parameter
       on the line should be BasePath.
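
       For example, a line restricted to specific nodes begins with
       NodeName, while a global line begins with BasePath (the node names
       and path below are illustrative only):

              NodeName=node[1-4] BasePath=/var/nvme/storage
              BasePath=/var/nvme/storage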

       Parameter names are case insensitive. Any text following a "#" in the
       configuration file is treated as a comment through the end of that
       line. Changes to the configuration file take effect upon restart of
       Slurm daemons.

       The following job_container.conf parameters are defined to control
       the behavior of the job_container/tmpfs plugin.

       AutoBasePath
              This determines whether the plugin should create the BasePath
              directory. If set to 'true', the directory is created with
              permission 0755. The directory is not deleted during Slurm
              shutdown. If set to 'false' or not specified, the plugin
              expects the directory to exist. This option can be used on a
              global or per-line basis. This parameter is optional.

       BasePath
              Specify the PATH that the tmpfs plugin should use to mount
              private /tmp to. This path must be readable and writable by
              the plugin. The plugin also constructs a directory for each
              job inside this path, which is then used for mounting. The
              BasePath gets mounted as 'private' during slurmd start and
              remains mounted until shutdown.

              NOTE: The BasePath parameter should not be configured to use
              /tmp or /dev/shm. Using these directories will cause conflicts
              when trying to mount and unmount the private directories for
              the job.

       InitScript
              Specify the fully qualified pathname of an optional
              initialization script. This script is run before the namespace
              construction of a job. It can be used to make the job join
              additional namespaces prior to the construction of the /tmp
              namespace, or it can be used for any site-specific setup. This
              parameter is optional. A minimal example script is shown after
              the parameter descriptions below.

       NodeName
              A NodeName specification can be used to permit one
              job_container.conf file to be used for all compute nodes in a
              cluster by specifying the node(s) that each line should apply
              to. The NodeName specification can use a Slurm hostlist
              specification as shown in the example below. This parameter is
              optional.
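
       The following is a minimal sketch of an InitScript, as referenced in
       the InitScript description above. The script path matches the one
       used in the example below, and its contents are purely illustrative
       of a site-specific setup step:

              #!/bin/sh
              # Illustrative /etc/slurm/init.sh for the InitScript parameter.
              # Perform any site-specific setup before the job's private
              # /tmp namespace is constructed; the directory below is an
              # example only.
              mkdir -p /var/nvme/storage/scratch
              exit 0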

EXAMPLE

       /etc/slurm/job_container.conf:

              ###
              # Sample job_container.conf file 1
              # Define 2 basepaths
              # The first will only be on largemem[1-2] and it will be automatically created.
              # The second will only be on gpu[1-10], will be expected to exist and will run
              #     an initscript before each job.
              ###
              NodeName=largemem[1-2] AutoBasePath=true BasePath=/var/nvme/storage
              NodeName=gpu[1-10] BasePath=/var/nvme/storage InitScript=/etc/slurm/init.sh

              ###
              # Sample job_container.conf file 2
              # Define 1 basepath that will be on all nodes and automatically created.
              ###
              AutoBasePath=true
              BasePath=/var/nvme/storage

       /etc/slurm/slurm.conf:
              These are the entries required in slurm.conf to activate the
              job_container/tmpfs plugin.

              ###
              # Slurm configuration needed to use the job_container/tmpfs plugin
              ###
              JobContainerType=job_container/tmpfs
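
              As noted in the DESCRIPTION, 'PrologFlags=Contain' must also
              be present in slurm.conf for the plugin to be used, so a
              complete configuration additionally includes:

              PrologFlags=Contain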

NOTES

       If any parameters in job_container.conf are changed while Slurm is
       running, then slurmd on the respective nodes will need to be
       restarted for the changes to take effect. Additionally, this can be
       disruptive to jobs already running on the node, so care must be taken
       to make sure no jobs are running when changes to job_container.conf
       are deployed.
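
       One possible sequence, assuming systemd-managed nodes and the
       illustrative node names from the example above, is to drain the
       affected nodes, deploy the change once they are idle, and then
       restart slurmd on each of them:

              # Drain the nodes so that no new jobs are started on them.
              scontrol update NodeName=gpu[1-10] State=DRAIN Reason="job_container.conf update"
              # After running jobs finish, deploy the new job_container.conf,
              # then restart slurmd on each affected node.
              systemctl restart slurmd
              # Return the nodes to service.
              scontrol update NodeName=gpu[1-10] State=RESUME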

       Restarting slurmd is safe and non-disruptive to running jobs, as long
       as job_container.conf is not changed between restarts; if it is
       changed, the point above applies.

COPYING

       Copyright (C) 2021 Regents of the University of California. Produced
       at Lawrence Berkeley National Laboratory.
       Copyright (C) 2021-2022 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at
       your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
       for more details.

SEE ALSO

       slurm.conf(5)

January 2022               Slurm Configuration File      job_container.conf(5)