job_container.conf(5)      Slurm Configuration File      job_container.conf(5)

NAME

       job_container.conf - Slurm configuration file for job_container/tmpfs
       plugin

DESCRIPTION

       job_container.conf is an ASCII file which defines parameters used by
       Slurm's job_container/tmpfs plugin. The plugin reads the
       job_container.conf file for its configuration settings. Based on them,
       it constructs a private mount namespace for the job and mounts /tmp
       and /dev/shm inside it. This gives the job a private view of /tmp and
       /dev/shm. /tmp is mounted inside a location that is specified as
       'BasePath' in the job_container.conf file. When the job completes, the
       private namespace is unmounted and all files therein are automatically
       removed. To make use of this plugin, 'PrologFlags=Contain' must also
       be present in your slurm.conf file, as shown:

       JobContainerType=job_container/tmpfs
       PrologFlags=Contain

       The file will always be located in the same directory as slurm.conf.

       If using the job_container.conf file to define a namespace available
       to nodes, the first parameter on the line should be NodeName. If
       configuring a namespace without specifying nodes, the first parameter
       on the line should be BasePath.

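       For illustration only (the node names and path below are
       hypothetical), a line intended for specific nodes begins with
       NodeName, while a line intended for all nodes begins with BasePath:

       NodeName=compute[1-4] BasePath=/var/nvme/storage
       BasePath=/var/nvme/storage
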
       Parameter names are case insensitive. Any text following a "#" in the
       configuration file is treated as a comment through the end of that
       line. Changes to the configuration file take effect upon restart of
       Slurm daemons.

       The following job_container.conf parameters are defined to control the
       behavior of the job_container/tmpfs plugin.

       AutoBasePath
              This determines whether the plugin should create the BasePath
              directory. Set it to 'true' if the directory is not pre-created
              before slurm startup. If set to true, the directory is created
              with permissions 0755. The directory is not deleted during
              slurm shutdown. If set to 'false' or not specified, the plugin
              expects the directory to exist. This option can be used on a
              global or per-line basis. This parameter is optional.

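              As an illustration only (the paths and node names are
              hypothetical), AutoBasePath may be set globally:

                     AutoBasePath=true
                     BasePath=/var/nvme/storage

              or on an individual NodeName line:

                     NodeName=gpu[1-10] AutoBasePath=true BasePath=/var/nvme/storage_b
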
       BasePath
              Specify the PATH that the tmpfs plugin should use to mount the
              private /tmp under. This path must be readable and writable by
              the plugin. The plugin also constructs a directory for each job
              inside this path, which is then used for mounting. The BasePath
              gets mounted as 'private' during slurmd start and remains
              mounted until shutdown.

              NOTE: The BasePath parameter should not be configured to use
              /tmp or /dev/shm. Using these directories will cause conflicts
              when trying to mount and unmount the private directories for
              the job.

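              For illustration only (the path is hypothetical), BasePath
              should point at dedicated storage on the node rather than at
              /tmp or /dev/shm:

                     # BasePath=/tmp would conflict with the private mounts
                     BasePath=/var/nvme/storage
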
       InitScript
              Specify the fully qualified pathname of an optional
              initialization script. This script is run before the namespace
              construction of a job. It can be used to make the job join
              additional namespaces prior to the construction of the /tmp
              namespace, or it can be used for any site-specific setup. This
              parameter is optional.

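              For illustration only (the script path is hypothetical and its
              contents are site-specific), the script is configured by giving
              its full path on a configuration line:

                     BasePath=/var/nvme/storage InitScript=/etc/slurm/container_init.sh
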
       NodeName
              A NodeName specification can be used to permit one
              job_container.conf file to be used for all compute nodes in a
              cluster by specifying the node(s) that each line should apply
              to. The NodeName specification can use a Slurm hostlist
              specification as shown in the example below. This parameter is
              optional.


NOTES

       If any parameters in job_container.conf are changed while slurm is
       running, then slurmd on the respective nodes will need to be restarted
       for the changes to take effect (scontrol reconfigure is not
       sufficient). Additionally, this can be disruptive to jobs already
       running on the node, so care must be taken to make sure no jobs are
       running when any changes to job_container.conf are deployed.

       Restarting slurmd is safe and non-disruptive to running jobs as long
       as job_container.conf is not changed between restarts, in which case
       the above point applies.


EXAMPLE

       /etc/slurm/slurm.conf:
              These are the entries required in slurm.conf to activate the
              job_container/tmpfs plugin.

              JobContainerType=job_container/tmpfs
              PrologFlags=Contain

       /etc/slurm/job_container.conf:
              The first sample file will define one basepath for all nodes,
              and it will be automatically created.

              AutoBasePath=true
              BasePath=/var/nvme/storage

              The second sample file will define two basepaths. The first
              will only be on largemem[1-2] and will be automatically
              created. The second will only be on gpu[1-10], will be expected
              to exist, and will run an initscript before each job.

              NodeName=largemem[1-2] AutoBasePath=true BasePath=/var/nvme/storage_a
              NodeName=gpu[1-10] BasePath=/var/nvme/storage_b InitScript=/etc/slurm/init.sh


COPYING

       Copyright (C) 2021 Regents of the University of California.
       Produced at Lawrence Berkeley National Laboratory.
       Copyright (C) 2021-2022 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it under
       the terms of the GNU General Public License as published by the Free
       Software Foundation; either version 2 of the License, or (at your
       option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
       for more details.


SEE ALSO

       slurm.conf(5)


August 2022                Slurm Configuration File      job_container.conf(5)