sinfo(1)                        Slurm Commands                        sinfo(1)

NAME

       sinfo - view information about Slurm nodes and partitions.

SYNOPSIS

       sinfo [OPTIONS...]

DESCRIPTION

       sinfo is used to view partition and node information for a system
       running Slurm.

OPTIONS

       -a, --all
              Display information about all partitions. This causes
              information to be displayed about partitions that are
              configured as hidden and partitions that are unavailable to
              the user's group.

       -d, --dead
              If set, only report state information for non-responding
              (dead) nodes.

       -e, --exact
              If set, do not group node information on multiple nodes
              unless their configurations to be reported are identical.
              Otherwise CPU count, memory size, and disk space for nodes
              will be listed with the minimum value followed by a "+" for
              nodes with the same partition and state (e.g. "250+").

       --federation
              Show all partitions from the federation if a member of one.

       -h, --noheader
              Do not print a header on the output.

       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions. By
              default, partitions that are configured as hidden or are not
              available to the user's group will not be displayed (i.e.
              this is the default behavior).

       -i <seconds>, --iterate=<seconds>
              Print the state on a periodic basis. Sleep for the indicated
              number of seconds between reports. By default, prints a time
              stamp with the header.

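              For example, the following invocation reprints the report
              every 60 seconds (the interval is arbitrary, chosen only for
              illustration):

              > sinfo --iterate=60
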
       --local
              Show only information local to this cluster. Ignore other
              clusters in this federation (if any). Overrides --federation.

       -l, --long
              Print more detailed information. This is ignored if the
              --format option is specified.

       -M, --clusters=<string>
              Clusters to issue commands to. Multiple cluster names may be
              comma separated. A value of 'all' will query all clusters.
              Note that the SlurmDBD must be up for this option to work
              properly. This option implicitly sets the --local option.

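              For example, assuming clusters named "cluster1" and
              "cluster2" are registered in the database (hypothetical
              names used only for illustration):

              > sinfo --clusters=cluster1,cluster2
              > sinfo -M all
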
       -n <nodes>, --nodes=<nodes>
              Print information only about the specified node(s). Multiple
              nodes may be comma separated or expressed using a node range
              expression. For example "linux[00-07]" would indicate eight
              nodes, "linux00" through "linux07". Performance of the
              command can be measurably improved for systems with large
              numbers of nodes when a single node name is specified.

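              For example, using the illustrative node names above:

              > sinfo --nodes="linux[00-07]"
              > sinfo -n linux00
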
       --noconvert
              Don't convert units from their original type (e.g. 2048M
              won't be converted to 2G).

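              A sketch of how this interacts with a memory field (the %m
              field specification is described under --format below;
              values in the output remain in megabytes):

              > sinfo --noconvert -o "%N %m"
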
       -N, --Node
              Print information in a node-oriented format with one line
              per node and partition. That is, if a node belongs to more
              than one partition, then one line for each node-partition
              pair will be shown. If --partition is also specified, then
              only one line per node in this partition is shown. The
              default is to print information in a partition-oriented
              format. This is ignored if the --format option is specified.

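              For example, to list each node of the debug partition on its
              own line (the partition name is taken from the EXAMPLES
              section below):

              > sinfo -N --partition=debug
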
       -o <output_format>, --format=<output_format>
              Specify the information to be displayed using an sinfo
              format string. Format strings transparently used by sinfo
              when running with various options are:

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F  %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D
                             %.11T %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w
                             %.8f %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"

              In the above format strings, the use of "#" represents the
              maximum length of any partition name or node list to be
              printed. A pass is made over the records to be printed to
              establish the size in order to align the sinfo output, then
              a second pass is made over the records to print them. Note
              that the literal character "#" itself is not a valid field
              length specification, but is only used to document this
              behavior.

              The field specifications available include:

              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    State/availability of a partition.

              %A    Number of nodes by state in the format
                    "allocated/idle". Do not use this with a node state
                    option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              %b    Features currently active on the nodes, also see %f.

              %B    The maximum number of CPUs per node available to jobs
                    in the partition.

              %c    Number of CPUs per node.

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different node
                    states will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes.

              %D    Number of nodes.

              %e    Free memory of a node.

              %E    The reason a node is unavailable (down, drained, or
                    draining states).

              %f    Features available on the nodes, also see %b.

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total". Note the use of this
                    format option with a node state format option ("%t" or
                    "%T") will result in the different node states being
                    reported on separate lines.

              %g    Groups which may use the nodes.

              %G    Generic resources (gres) associated with the nodes.

              %h    Jobs may oversubscribe compute resources (i.e. CPUs):
                    "yes", "no", "exclusive" or "force".

              %H    Print the timestamp of the reason a node is
                    unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format
                    "days-hours:minutes:seconds".

              %L    Default time for any job in the format
                    "days-hours:minutes:seconds".

              %m    Size of memory per node in megabytes.

              %M    PreemptionMode.

              %n    List of node hostnames.

              %N    List of node names.

              %o    List of node communication addresses.

              %O    CPU load of a node.

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default
                    partition, also see %R.

              %r    Only user root may initiate jobs, "yes" or "no".

              %R    Partition name, also see %P.

              %s    Maximum job size in nodes.

              %S    Allowed allocating nodes.

              %t    State of nodes, compact form.

              %T    State of nodes, extended form.

              %u    Print the user name of who set the reason a node is
                    unavailable.

              %U    Print the user name and uid of who set the reason a
                    node is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation.

              %w    Scheduling weight of the nodes.

              %X    Number of sockets per node.

              %Y    Number of cores per socket.

              %Z    Number of threads per core.

              %z    Extended processor information: number of sockets,
                    cores, threads (S:C:T) per node.

              %.<*> right justification of the field

              %<Number><*>
                    size of field

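              Because "#" is not itself a valid field length, an explicit
              width must be supplied when adapting the default format
              string. For example (the width of 10 is chosen arbitrarily):

              > sinfo -o "%10P %.5a %.10l %.6D %.6t %N"
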
       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed. Also see the
              -o <output_format>, --format=<output_format> option
              described above (which supports greater flexibility in
              formatting, but does not support access to all fields
              because we ran out of letters). Requests a comma separated
              list of node information to be displayed.

              The format of each field is "type[:[.]size]"

              size    is the minimum field size. If no size is specified,
                      20 characters will be allocated to print the
                      information.

               .      indicates the output should be right justified and
                      size must be specified. By default, output is left
                      justified.

              Valid type specifications include:

              all   Print all fields available in the -o format for this
                    data type with a vertical bar separating each field.

              allocmem
                    Prints the amount of allocated memory on a node.

              allocnodes
                    Allowed allocating nodes.

              available
                    State/availability of a partition.

              cluster
                    Print the cluster name if running in a federation.

              cpus  Number of CPUs per node.

              cpusload
                    CPU load of a node.

              freemem
                    Free memory of a node.

              cpusstate
                    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different node
                    states will be placed on separate lines.

              cores Number of cores per socket.

              defaulttime
                    Default time for any job in the format
                    "days-hours:minutes:seconds".

              disk  Size of temporary disk space per node in megabytes.

              features
                    Features available on the nodes. Also see
                    features_act.

              features_act
                    Features currently active on the nodes. Also see
                    features.

              groups
                    Groups which may use the nodes.

              gres  Generic resources (gres) associated with the nodes.

              maxcpuspernode
                    The max number of CPUs per node available to jobs in
                    the partition.

              memory
                    Size of memory per node in megabytes.

              nodes Number of nodes.

              nodeaddr
                    List of node communication addresses.

              nodeai
                    Number of nodes by state in the format
                    "allocated/idle". Do not use this with a node state
                    option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              nodeaiot
                    Number of nodes by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different node
                    states will be placed on separate lines.

              nodehost
                    List of node hostnames.

              nodelist
                    List of node names.

              oversubscribe
                    Jobs may oversubscribe compute resources (i.e. CPUs):
                    "yes", "no", "exclusive" or "force".

              partition
                    Partition name followed by "*" for the default
                    partition, also see %R.

              partitionname
                    Partition name, also see %P.

              port  Node TCP port.

              preemptmode
                    PreemptionMode.

              priorityjobfactor
                    Partition factor used by the priority/multifactor
                    plugin in calculating job priority.

              prioritytier or priority
                    Partition scheduling tier priority.

              reason
                    The reason a node is unavailable (down, drained, or
                    draining states).

              root  Only user root may initiate jobs, "yes" or "no".

              size  Maximum job size in nodes.

              statecompact
                    State of nodes, compact form.

              statelong
                    State of nodes, extended form.

              sockets
                    Number of sockets per node.

              socketcorethread
                    Extended processor information: number of sockets,
                    cores, threads (S:C:T) per node.

              time  Maximum time for any job in the format
                    "days-hours:minutes:seconds".

              timestamp
                    Print the timestamp of the reason a node is
                    unavailable.

              threads
                    Number of threads per core.

              user  Print the user name of who set the reason a node is
                    unavailable.

              userlong
                    Print the user name and uid of who set the reason a
                    node is unavailable.

              version
                    Print the version of the running slurmd daemon.

              weight
                    Scheduling weight of the nodes.

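              For example, to print the partition, node state counts, node
              state, and node list with explicit field sizes (the widths
              here are arbitrary):

              > sinfo -O "partition:12,nodeaiot:.16,statecompact:.10,nodelist"
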
       -p <partition>, --partition=<partition>
              Print information only about the specified partition(s).
              Multiple partitions are separated by commas.

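              For example, using the partition names shown in the EXAMPLES
              section below:

              > sinfo --partition=debug,batch
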
       -r, --responding
              If set, only report state information for responding nodes.

       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or failing
              state. When nodes are in these states Slurm supports the
              optional inclusion of a "reason" string by an administrator.
              This option will display the first 20 characters of the
              reason field and the list of nodes with that reason for all
              nodes that are, by default, down, drained, draining or
              failing. This option may be used with other node filtering
              options (e.g. -r, -d, -t, -n); however, combinations of
              these options that result in a list of nodes that are not
              down, drained or failing will not produce any output. When
              used with -l the output additionally includes the current
              node state.

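              For example, to list reasons together with the current node
              state, restricted to nodes in the DOWN state:

              > sinfo -R -l --states=down
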
       -s, --summarize
              List only a partition state summary with no node state
              details. This is ignored if the --format option is
              specified.

       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be
              reported. This uses the same field specification as the
              <output_format>. Multiple sorts may be performed by listing
              multiple sort fields separated by commas. The field
              specifications may be preceded by "+" or "-" for ascending
              (default) and descending order respectively. The partition
              field specification, "P", may be preceded by a "#" to report
              partitions in the same order that they appear in Slurm's
              configuration file, slurm.conf. For example, a sort value of
              "+P,-m" requests that records be printed in order of
              increasing partition name and within a partition by
              decreasing memory size. The default sort value is "#P,-t"
              (partitions ordered as configured then decreasing node
              state). If the --Node option is selected, the default sort
              value is "N" (increasing node name).

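              For example, to sort by increasing partition name and, within
              a partition, by decreasing memory size (the sort value used
              in the description above):

              > sinfo --sort="+P,-m"
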
       -t <states>, --states=<states>
              List nodes only having the given state(s). Multiple states
              may be comma separated and the comparison is case
              insensitive. Possible values include: ALLOC, ALLOCATED,
              COMP, COMPLETING, DOWN, DRAIN (for nodes in DRAINING or
              DRAINED states), DRAINED, DRAINING, FAIL, FUTURE, FUTR,
              IDLE, MAINT, MIX, MIXED, NO_RESPOND, NPC, PERFCTRS,
              POWER_DOWN, POWER_UP, RESV, RESERVED, UNK, and UNKNOWN. By
              default nodes in the specified state are reported whether
              they are responding or not. The --dead and --responding
              options may be used to filter nodes by the responding flag.

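              For example, to report nodes that are idle or mixed,
              limiting the output to responding nodes:

              > sinfo --states=idle,mixed --responding
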
       -T, --reservation
              Only display information about Slurm reservations.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.

OUTPUT FIELD DESCRIPTIONS

       AVAIL  Partition state: up or down.

       CPUS   Count of CPUs (processors) on each node.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these
              nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to
              the named groups. all indicates that all groups may use
              this partition.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any
              user job. A single number indicates the minimum and maximum
              node count are the same. infinite is used to identify
              partitions without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in
              days-hours:minutes:seconds. infinite is used to identify
              partitions without a job time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this configuration/partition.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle/other/total".

       PARTITION
              Name of a partition. Note that the suffix "*" identifies
              the default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Is the ability to allocate resources in this partition
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Will jobs allocated resources in this partition
              oversubscribe those compute resources (i.e. CPUs). no
              indicates resources are never oversubscribed. exclusive
              indicates whole nodes are dedicated to jobs (equivalent to
              the srun --exclusive option, may be used even with
              select/cons_res managing individual processors). force
              indicates resources are always available to be
              oversubscribed. yes indicates resources may be
              oversubscribed or not per job's resource allocation.

       STATE  State of the nodes. Possible states include: allocated,
              completing, down, drained, draining, fail, failing, future,
              idle, maint, mixed, perfctrs, power_down, power_up,
              reserved, and unknown, plus their abbreviated forms: alloc,
              comp, down, drain, drng, fail, failg, futr, idle, maint,
              mix, npc, pow_dn, pow_up, resv, and unk respectively. Note
              that the suffix "*" identifies nodes that are presently not
              responding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.

NODE STATE CODES

       Node state codes are shortened as required for the field size.
       These node states may be followed by a special character to
       identify state flags associated with the node. The following node
       suffixes and states are used:

       *   The node is presently not responding and will not be allocated
           any new work. If the node remains non-responsive, it will be
           placed in the DOWN state (except in the case of COMPLETING,
           DRAINED, DRAINING, FAIL, FAILING nodes).

       ~   The node is presently in a power saving mode (typically
           running at reduced frequency).

       #   The node is presently being powered up or configured.

       $   The node is currently in a reservation with a flag value of
           "maintenance".

       @   The node is pending reboot.

       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus
                   one or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process
                   of COMPLETING. This node state will be removed when
                   all of the job's processes have terminated and the
                   Slurm epilog program (if any) has terminated. See the
                   Epilog parameter description in the slurm.conf man
                   page for more information.

       DOWN        The node is unavailable for use. Slurm can
                   automatically place nodes in this state if some
                   failure occurs. System administrators may also
                   explicitly place nodes in this state. If a node
                   resumes normal operation, Slurm can automatically
                   return it to service. See the ReturnToService and
                   SlurmdTimeout parameter descriptions in the
                   slurm.conf(5) man page for more information.

       DRAINED     The node is unavailable for use per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       DRAINING    The node is currently executing a job, but will not be
                   allocated additional jobs. The node state will be
                   changed to state DRAINED when the last job on it
                   completes. Nodes enter this state per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FAIL        The node is expected to fail soon and is unavailable
                   for use per system administrator request. See the
                   update node command in the scontrol(1) man page or the
                   slurm.conf(5) man page for more information.

       FAILING     The node is currently executing a job, but is expected
                   to fail soon and is unavailable for use per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FUTURE      The node is currently not fully configured, but is
                   expected to be available at some point in the
                   indefinite future for use.

       IDLE        The node is not allocated to any jobs and is available
                   for use.

       MAINT       The node is currently in a reservation with a flag
                   value of "maintenance".

       REBOOT      The node is currently scheduled to be rebooted.

       MIXED       The node has some of its CPUs ALLOCATED while others
                   are IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node
                   are in use, rendering this node as not usable for any
                   other jobs.

       POWER_DOWN  The node is currently powered down and not capable of
                   running any jobs.

       POWER_UP    The node is currently in the process of being powered
                   up.

       RESERVED    The node is in an advanced reservation and not
                   generally available.

       UNKNOWN     The Slurm controller has just started and the node's
                   state has not yet been determined.

ENVIRONMENT VARIABLES

       Some sinfo options may be set via environment variables. These
       environment variables, along with their corresponding options, are
       listed below. (Note: Command-line options will always override
       these settings.)

       SINFO_ALL           -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        -o <output_format>, --format=<output_format>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     -p <partition>, --partition=<partition>

       SINFO_SORT          -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.
                           A value of standard, the default value,
                           generates output in the form
                           "year-month-dateThour:minute:second". A value
                           of relative returns only "hour:minute:second"
                           if the time stamp is in the current day. For
                           other dates in the current year it prints the
                           "hour:minute" preceded by "Tomorr" (tomorrow),
                           "Ystday" (yesterday), the name of the day for
                           the coming week (e.g. "Mon", "Tue", etc.),
                           otherwise the date (e.g. "25 Apr"). For other
                           years it returns a date, month and year
                           without a time (e.g. "6 Jun 2012"). All of the
                           time stamps use a 24 hour format.

                           A valid strftime() format can also be
                           specified. For example, a value of "%a %T"
                           will report the day of the week and a time
                           stamp (e.g. "Mon 12:34:56").

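       For example, a default output format and time stamp format can be
       set for a shell session before running sinfo (the format strings
       are taken from the descriptions above):

       > export SINFO_FORMAT="%10P %.5a %.10l %.6D %.6t %N"
       > export SLURM_TIME_FORMAT="%a %T"
       > sinfo
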

EXAMPLES

       Report basic node and partition configurations:

       > sinfo
       PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
       batch     up     infinite     2 alloc  adev[8-9]
       batch     up     infinite     6 idle   adev[10-15]
       debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

       > sinfo -s
       PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
       batch     up     infinite 2/6/0/8        adev[8-15]
       debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

       > sinfo --long --partition=debug
       PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
       debug*    up        30:00        8 no   no       all        8 idle  adev[0-7]

       Report only those nodes that are in state DRAINED:

       > sinfo --states=drained
       PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
       debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

       > sinfo -Nel
       NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
       adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
       adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
       adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
       adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
       adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason
       field:

       > sinfo -R
       REASON                              NODELIST
       Memory errors                       dev[0,5]
       Not Responding                      dev8


COPYING

       Copyright (C) 2002-2007 The Regents of the University of
       California.  Produced at Lawrence Livermore National Laboratory
       (cf. DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2017 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.


SEE ALSO

       scontrol(1), smap(1), squeue(1), slurm_load_ctl_conf(3),
       slurm_load_jobs(3), slurm_load_node(3), slurm_load_partitions(3),
       slurm_reconfigure(3), slurm_shutdown(3), slurm_update_job(3),
       slurm_update_node(3), slurm_update_partition(3), slurm.conf(5)


November 2016                   Slurm Commands                        sinfo(1)