Slurm(1)                         Slurm System                         Slurm(1)

NAME
       Slurm - Slurm Workload Manager overview.

DESCRIPTION
       The Slurm Workload Manager is an open source, fault-tolerant, and
       highly scalable cluster management and job scheduling system for
       large and small Linux clusters. Slurm requires no kernel
       modifications for its operation and is relatively self-contained. As
       a cluster resource manager, Slurm has three key functions. First, it
       allocates exclusive and/or non-exclusive access to resources
       (compute nodes) to users for some duration of time so they can
       perform work. Second, it provides a framework for starting,
       executing, and monitoring work (normally a parallel job) on the set
       of allocated nodes. Finally, it arbitrates contention for resources
       by managing a queue of pending work. Optional plugins can be used
       for accounting, advanced reservation, gang scheduling (time sharing
       for parallel jobs), backfill scheduling, resource limits by user or
       bank account, and sophisticated multifactor job prioritization
       algorithms.
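
       As an illustrative example of these three functions from the shell
       (the node count and commands shown are placeholders, not
       recommendations):

              $ salloc -N2       # allocate two nodes for a period of time
              $ srun hostname    # start and monitor work on the allocation
              $ exit             # release the allocation
              $ squeue           # view the queue of pending work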

       Slurm has a centralized manager, slurmctld, to monitor resources and
       work. There may also be a backup manager to assume those
       responsibilities in the event of failure. Each compute server (node)
       has a slurmd daemon, which can be compared to a remote shell: it
       waits for work, executes that work, returns status, and waits for
       more work. An optional slurmdbd (Slurm DataBase Daemon) can be used
       for accounting purposes and to maintain resource limit information.
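
       On a running cluster, the daemons can be checked roughly as follows
       (the slurmd check assumes a systemd-based installation):

              $ scontrol ping             # is slurmctld, or its backup, responding?
              $ systemctl status slurmd   # on a compute node, is slurmd running?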

       Basic user tools include srun to initiate jobs, scancel to terminate
       queued or running jobs, sinfo to report system status, and squeue to
       report the status of jobs. There is also an administrative tool
       scontrol available to monitor and/or modify configuration and state
       information. APIs are available for all functions.
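
       Typical invocations might look like the following (the job ID 1234
       and the program name my_app are placeholders):

              $ srun -N1 -n4 ./my_app     # initiate a 4-task job on one node
              $ squeue -u $USER           # report the status of your jobs
              $ sinfo                     # report overall system status
              $ scancel 1234              # terminate a queued or running job
              $ scontrol show job 1234    # monitor detailed job state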

       Slurm configuration is maintained in the slurm.conf file.
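
       A minimal slurm.conf sketch is shown below; the cluster, host, and
       node names are placeholders, and a production file will normally set
       many more parameters (see slurm.conf(5)):

              ClusterName=mycluster
              SlurmctldHost=head1
              NodeName=node[01-04] CPUs=8 State=UNKNOWN
              PartitionName=debug Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP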

       Man pages are available for all Slurm commands, daemons, and APIs,
       as well as for the slurm.conf file. Extensive documentation is also
       available on the internet at <https://slurm.schedmd.com/>.

COPYING
       Copyright (C) 2005-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2022 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
       or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
       License for more details.

SEE ALSO
       sacct(1), sacctmgr(1), salloc(1), sattach(1), sbatch(1), sbcast(1),
       scancel(1), scontrol(1), sinfo(1), squeue(1), sreport(1), srun(1),
       sshare(1), sstat(1), strigger(1), sview(1), slurm.conf(5),
       slurmdbd.conf(5), slurmctld(8), slurmd(8), slurmdbd(8),
       slurmstepd(8), spank(8)

June 2018                        Slurm System                         Slurm(1)