sinfo(1)                        Slurm Commands                        sinfo(1)



NAME
       sinfo - View information about Slurm nodes and partitions.

SYNOPSIS
       sinfo [OPTIONS...]

DESCRIPTION
       sinfo is used to view partition and node information for a system
       running Slurm.

OPTIONS
       -a, --all
              Display information about all partitions. This causes
              information to be displayed about partitions that are
              configured as hidden and partitions that are unavailable to
              the user's group.

       -d, --dead
              If set, only report state information for non-responding
              (dead) nodes.

       -e, --exact
              If set, do not group node information on multiple nodes unless
              their configurations to be reported are identical. Otherwise
              cpu count, memory size, and disk space for nodes will be
              listed with the minimum value followed by a "+" for nodes with
              the same partition and state (e.g. "250+").

       --federation
              Show all partitions from the federation if a member of one.

       -h, --noheader
              Do not print a header on the output.

       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions. Partitions
              that are configured as hidden or are not available to the
              user's group will not be displayed. This is the default
              behavior.

       -i <seconds>, --iterate=<seconds>
              Print the state on a periodic basis.  Sleep for the indicated
              number of seconds between reports.  By default prints a time
              stamp with the header.
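
              For example, an invocation along these lines (the 60-second
              interval is illustrative) reprints the default report once a
              minute:

              > sinfo --iterate=60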

       --local
              Show only nodes and partitions local to this cluster. Ignore
              other clusters in this federation (if any). Overrides
              --federation.

       -l, --long
              Print more detailed information.  This is ignored if the
              --format option is specified.

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may be
              comma separated.  A value of 'all' will query all clusters.
              Note that the SlurmDBD must be up for this option to work
              properly.  This option implicitly sets the --local option.
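
              For example, to report nodes and partitions from every cluster
              registered in the SlurmDBD:

              > sinfo --clusters=all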

       -n <nodes>, --nodes=<nodes>
              Print information about the specified node(s).  Multiple nodes
              may be comma separated or expressed using a node range
              expression (e.g. "linux[00-17]").  Limiting the query to just
              the relevant nodes can measurably improve the performance of
              the command for large clusters.
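
              For example, using the node range expression above (the node
              names are illustrative):

              > sinfo --nodes="linux[00-17]"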

       --noconvert
              Don't convert units from their original type (e.g. 2048M won't
              be converted to 2G).

       -N, --Node
              Print information in a node-oriented format with one line per
              node and partition. That is, if a node belongs to more than
              one partition, then one line for each node-partition pair will
              be shown.  If --partition is also specified, then only one
              line per node in this partition is shown.  The default is to
              print information in a partition-oriented format.  This is
              ignored if the --format option is specified.

       -o <output_format>, --format=<output_format>
              Specify the information to be displayed using an sinfo format
              string.  If the command is executed in a federated cluster
              environment and information about more than one cluster is to
              be displayed and the -h, --noheader option is used, then the
              cluster name will be displayed before the default output
              formats shown below.  Format strings transparently used by
              sinfo when running with various options are:

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F  %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D
                             %.11T %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w
                             %.8f %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"

              In the above format strings, the use of "#" represents the
              maximum length of any partition name or node list to be
              printed.  A pass is made over the records to be printed to
              establish the size in order to align the sinfo output, then a
              second pass is made over the records to print them.  Note that
              the literal character "#" itself is not a valid field length
              specification, but is only used to document this behaviour.

              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,
                        whatever is needed to print the information will be
                        used.

                 .      Indicates the output should be right justified and
                        size must be specified.  By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

              Valid type specifications include:

              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    State/availability of a partition.

              %A    Number of nodes by state in the format "allocated/idle".
                    Do not use this with a node state option ("%t" or "%T")
                    or the different node states will be placed on separate
                    lines.

              %b    Features currently active on the nodes, also see %f.

              %B    The max number of CPUs per node available to jobs in the
                    partition.

              %c    Number of CPUs per node.

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different node
                    states will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes.

              %D    Number of nodes.

              %e    Free memory of a node.

              %E    The reason a node is unavailable (down, drained, or
                    draining states).

              %f    Features available on the nodes, also see %b.

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total".  Note the use of this
                    format option with a node state format option ("%t" or
                    "%T") will result in the different node states being
                    reported on separate lines.

              %g    Groups which may use the nodes.

              %G    Generic resources (gres) associated with the nodes.

              %h    Print the OverSubscribe setting for the partition.

              %H    Print the timestamp of the reason a node is unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format
                    "days-hours:minutes:seconds"

              %L    Default time for any job in the format
                    "days-hours:minutes:seconds"

              %m    Size of memory per node in megabytes.

              %M    PreemptionMode.

              %n    List of node hostnames.

              %N    List of node names.

              %o    List of node communication addresses.

              %O    CPU load of a node.

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default
                    partition, also see %R.

              %r    Only user root may initiate jobs, "yes" or "no".

              %R    Partition name, also see %P.

              %s    Maximum job size in nodes.

              %S    Allowed allocating nodes.

              %t    State of nodes, compact form.

              %T    State of nodes, extended form.

              %u    Print the user name of who set the reason a node is
                    unavailable.

              %U    Print the user name and uid of who set the reason a node
                    is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation.

              %w    Scheduling weight of the nodes.

              %X    Number of sockets per node.

              %Y    Number of cores per socket.

              %Z    Number of threads per core.

              %z    Extended processor information: number of sockets,
                    cores, threads (S:C:T) per node.
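
              For example, a format string built from the type
              specifications above (the field widths are illustrative) that
              prints the partition name, node count, compact node state and
              node list would be:

              > sinfo -o "%.10P %.6D %.6t %N"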

       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed.  Also see the
              -o <output_format>, --format=<output_format> option (which
              supports greater flexibility in formatting, but does not
              support access to all fields because we ran out of letters).
              Requests a comma separated list of job information to be
              displayed.

              The format of each field is "type[:[.][size][suffix]]"

                 size   The minimum field size.  If no size is specified, 20
                        characters will be allocated to print the
                        information.

                 .      Indicates the output should be right justified and
                        size must be specified.  By default, output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

              Valid type specifications include:

              All
                     Print all fields available in the -o format for this
                     data type with a vertical bar separating each field.

              AllocMem
                     Prints the amount of allocated memory on a node.

              AllocNodes
                     Allowed allocating nodes.

              Available
                     State/availability of a partition.

              Cluster
                     Print the cluster name if running in a federation.

              Comment
                     Comment. (Arbitrary descriptive string)

              Cores
                     Number of cores per socket.

              CPUs
                     Number of CPUs per node.

              CPUsLoad
                     CPU load of a node.

              CPUsState
                     Number of CPUs by state in the format
                     "allocated/idle/other/total". Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              DefaultTime
                     Default time for any job in the format
                     "days-hours:minutes:seconds".

              Disk
                     Size of temporary disk space per node in megabytes.

              Features
                     Features available on the nodes. Also see features_act.

              features_act
                     Features currently active on the nodes. Also see
                     features.

              FreeMem
                     Free memory of a node.

              Gres
                     Generic resources (gres) associated with the nodes.

              GresUsed
                     Generic resources (gres) currently in use on the nodes.

              Groups
                     Groups which may use the nodes.

              MaxCPUsPerNode
                     The max number of CPUs per node available to jobs in
                     the partition.

              Memory
                     Size of memory per node in megabytes.

              NodeAddr
                     List of node communication addresses.

              NodeAI
                     Number of nodes by state in the format
                     "allocated/idle".  Do not use this with a node state
                     option ("%t" or "%T") or the different node states will
                     be placed on separate lines.

              NodeAIOT
                     Number of nodes by state in the format
                     "allocated/idle/other/total".  Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              NodeHost
                     List of node hostnames.

              NodeList
                     List of node names.

              Nodes
                     Number of nodes.

              OverSubscribe
                     Whether jobs may oversubscribe compute resources (e.g.
                     CPUs).

              Partition
                     Partition name followed by "*" for the default
                     partition, also see %R.

              PartitionName
                     Partition name, also see %P.

              Port
                     Node TCP port.

              PreemptMode
                     Preemption mode.

              PriorityJobFactor
                     Partition factor used by priority/multifactor plugin in
                     calculating job priority.

              PriorityTier or Priority
                     Partition scheduling tier priority.

              Reason
                     The reason a node is unavailable (down, drained, or
                     draining states).

              Root
                     Only user root may initiate jobs, "yes" or "no".

              Size
                     Maximum job size in nodes.

              SocketCoreThread
                     Extended processor information: number of sockets,
                     cores, threads (S:C:T) per node.

              Sockets
                     Number of sockets per node.

              StateCompact
                     State of nodes, compact form.

              StateLong
                     State of nodes, extended form.

              Threads
                     Number of threads per core.

              Time
                     Maximum time for any job in the format
                     "days-hours:minutes:seconds".

              TimeStamp
                     Print the timestamp of the reason a node is unavailable.

              User
                     Print the user name of who set the reason a node is
                     unavailable.

              UserLong
                     Print the user name and uid of who set the reason a
                     node is unavailable.

              Version
                     Print the version of the running slurmd daemon.

              Weight
                     Scheduling weight of the nodes.
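
              For example, an invocation along these lines (the field widths
              are illustrative) prints the node list, partition, long node
              state and CPU load using the type names above:

              > sinfo -O "NodeList:20,Partition:12,StateLong:12,CPUsLoad"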

       -p <partition>, --partition=<partition>
              Print information only about the specified partition(s).
              Multiple partitions are separated by commas.

       -r, --responding
              If set, only report state information for responding nodes.

       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or failing
              state.  When nodes are in these states Slurm supports the
              inclusion of a "reason" string by an administrator.  This
              option will display the first 20 characters of the reason
              field and list of nodes with that reason for all nodes that
              are, by default, down, drained, draining or failing.  This
              option may be used with other node filtering options (e.g.
              -r, -d, -t, -n), however, combinations of these options that
              result in a list of nodes that are not down or drained or
              failing will not produce any output.  When used with -l the
              output additionally includes the current node state.

       -s, --summarize
              List only a partition state summary with no node state
              details.  This is ignored if the --format option is specified.

       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be
              reported.  This uses the same field specification as the
              <output_format>.  Multiple sorts may be performed by listing
              multiple sort fields separated by commas.  The field
              specifications may be preceded by "+" or "-" for ascending
              (default) and descending order respectively.  The partition
              field specification, "P", may be preceded by a "#" to report
              partitions in the same order that they appear in Slurm's
              configuration file, slurm.conf.  For example, a sort value of
              "+P,-m" requests that records be printed in order of
              increasing partition name and within a partition by decreasing
              memory size.  The default value of sort is "#P,-t" (partitions
              ordered as configured then decreasing node state).  If the
              --Node option is selected, the default sort value is "N"
              (increasing node name).
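
              For example, to print the report with records ordered by
              increasing partition name and, within a partition, by
              decreasing memory size (the sort value is taken from the text
              above):

              > sinfo --sort="+P,-m"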

       -t <states>, --states=<states>
              List nodes only having the given state(s).  Multiple states
              may be comma separated and the comparison is case insensitive.
              If the states are separated by '&', then the nodes must be in
              all states.  Possible values include (case insensitive):
              ALLOC, ALLOCATED, CLOUD, COMP, COMPLETING, DOWN, DRAIN (for
              node in DRAINING or DRAINED states), DRAINED, DRAINING, FAIL,
              FUTURE, FUTR, IDLE, MAINT, MIX, MIXED, NO_RESPOND, NPC,
              PERFCTRS, POWER_DOWN, POWERING_DOWN, POWER_UP, RESV, RESERVED,
              UNK, and UNKNOWN.  By default nodes in the specified state are
              reported whether they are responding or not.  The --dead and
              --responding options may be used to filter nodes by the
              corresponding flag.
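
              For example, to list only nodes that are either idle or mixed
              (state names chosen for illustration):

              > sinfo --states=idle,mixed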

       -T, --reservation
              Only display information about Slurm reservations.

              NOTE: This option causes sinfo to ignore most other options,
              which are focused on partition and node information.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.

OUTPUT FIELD DESCRIPTIONS
       AVAIL  Partition state. Can be either up, down, drain, or inact (for
              INACTIVE). See the partition definition's State parameter in
              the slurm.conf(5) man page for more information.

       CPUS   Count of CPUs (processors) on these nodes.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these
              nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to the
              named groups.  all indicates that all groups may use this
              partition.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any
              user job.  A single number indicates the minimum and maximum
              node count are the same.  infinite is used to identify
              partitions without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in
              days-hours:minutes:seconds.  infinite is used to identify
              partitions without a job time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this particular configuration.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node
              state in the form "available/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node
              state in the form "available/idle/other/total".

       PARTITION
              Name of a partition.  Note that the suffix "*" identifies the
              default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Is the ability to allocate resources in this partition
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Whether jobs allocated resources in this partition can/will
              oversubscribe those compute resources (e.g. CPUs).  NO
              indicates resources are never oversubscribed.  EXCLUSIVE
              indicates whole nodes are dedicated to jobs (equivalent to the
              srun --exclusive option, may be used even with select/cons_res
              managing individual processors).  FORCE indicates resources
              are always available to be oversubscribed.  YES indicates
              resources may be oversubscribed, if requested by the job's
              resource allocation.

              NOTE: If OverSubscribe is set to FORCE or YES, the
              OverSubscribe value will be appended to the output.

       STATE  State of the nodes.  Possible states include: allocated,
              completing, down, drained, draining, fail, failing, future,
              idle, maint, mixed, perfctrs, power_down, power_up, reserved,
              and unknown.  Their abbreviated forms are: alloc, comp, down,
              drain, drng, fail, failg, futr, idle, maint, mix, npc, pow_dn,
              pow_up, resv, and unk respectively.

              NOTE: The suffix "*" identifies nodes that are presently not
              responding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.

NODE STATE CODES
       Node state codes are shortened as required for the field size.  These
       node states may be followed by a special character to identify state
       flags associated with the node.  The following node suffixes and
       states are used:

       *   The node is presently not responding and will not be allocated
           any new work.  If the node remains non-responsive, it will be
           placed in the DOWN state (except in the case of COMPLETING,
           DRAINED, DRAINING, FAIL, FAILING nodes).

       ~   The node is presently in a power saving mode (typically running
           at reduced frequency).

       #   The node is presently being powered up or configured.

       %   The node is presently being powered down.

       $   The node is currently in a reservation with a flag value of
           "maintenance".

       @   The node is pending reboot.

       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus one
                   or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process of
                   COMPLETING.  This node state will be removed when all of
                   the job's processes have terminated and the Slurm epilog
                   program (if any) has terminated. See the Epilog parameter
                   description in the slurm.conf(5) man page for more
                   information.

       DOWN        The node is unavailable for use.  Slurm can automatically
                   place nodes in this state if some failure occurs. System
                   administrators may also explicitly place nodes in this
                   state.  If a node resumes normal operation, Slurm can
                   automatically return it to service. See the
                   ReturnToService and SlurmdTimeout parameter descriptions
                   in the slurm.conf(5) man page for more information.

       DRAINED     The node is unavailable for use per system administrator
                   request.  See the update node command in the scontrol(1)
                   man page or the slurm.conf(5) man page for more
                   information.

       DRAINING    The node is currently executing a job, but will not be
                   allocated additional jobs. The node state will be changed
                   to state DRAINED when the last job on it completes. Nodes
                   enter this state per system administrator request. See
                   the update node command in the scontrol(1) man page or
                   the slurm.conf(5) man page for more information.

       FAIL        The node is expected to fail soon and is unavailable for
                   use per system administrator request.  See the update
                   node command in the scontrol(1) man page or the
                   slurm.conf(5) man page for more information.

       FAILING     The node is currently executing a job, but is expected to
                   fail soon and is unavailable for use per system
                   administrator request.  See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FUTURE      The node is currently not fully configured, but expected
                   to be available at some point in the indefinite future
                   for use.

       IDLE        The node is not allocated to any jobs and is available
                   for use.

       MAINT       The node is currently in a reservation with a flag value
                   of "maintenance".

       REBOOT      The node is currently scheduled to be rebooted.

       MIXED       The node has some of its CPUs ALLOCATED while others are
                   IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node
                   are in use, rendering this node as not usable for any
                   other jobs.

       POWER_DOWN  The node is currently powered down and not capable of
                   running any jobs.

       POWERING_DOWN
                   The node is in the process of powering down and not
                   capable of running any jobs.

       POWER_UP    The node is in the process of being powered up.

       RESERVED    The node is in an advanced reservation and not generally
                   available.

       UNKNOWN     The Slurm controller has just started and the node's
                   state has not yet been determined.

PERFORMANCE
       Executing sinfo sends a remote procedure call to slurmctld.  If
       enough calls from sinfo or other Slurm client commands that send
       remote procedure calls to the slurmctld daemon come in at once, it
       can result in a degradation of performance of the slurmctld daemon,
       possibly resulting in a denial of service.

       Do not run sinfo or other Slurm client commands that send remote
       procedure calls to slurmctld from loops in shell scripts or other
       programs.  Ensure that programs limit calls to sinfo to the minimum
       necessary for the information you are trying to gather.

ENVIRONMENT VARIABLES
       Some sinfo options may be set via environment variables. These
       environment variables, along with their corresponding options, are
       listed below.  NOTE: Command line options will always override these
       settings.

       SINFO_ALL           Same as -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        Same as -o <output_format>,
                           --format=<output_format>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     Same as -p <partition>, --partition=<partition>

       SINFO_SORT          Same as -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.  A
                           value of standard, the default value, generates
                           output in the form
                           "year-month-dateThour:minute:second".  A value of
                           relative returns only "hour:minute:second" for
                           times on the current day.  For other dates in the
                           current year it prints the "hour:minute" preceded
                           by "Tomorr" (tomorrow), "Ystday" (yesterday), the
                           name of the day for the coming week (e.g. "Mon",
                           "Tue", etc.), otherwise the date (e.g. "25 Apr").
                           For other years it returns the date, month and
                           year without a time (e.g. "6 Jun 2012"). All of
                           the time stamps use a 24 hour format.

                           A valid strftime() format can also be specified.
                           For example, a value of "%a %T" will report the
                           day of the week and a time stamp (e.g. "Mon
                           12:34:56").
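
                           For instance, the format above could be combined
                           with the --list-reasons report (whose long form
                           includes the reason timestamp) as follows:

                           > SLURM_TIME_FORMAT="%a %T" sinfo -R --long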

EXAMPLES
       Report basic node and partition configurations:

       > sinfo
       PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
       batch     up     infinite     2 alloc  adev[8-9]
       batch     up     infinite     6 idle   adev[10-15]
       debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

       > sinfo -s
       PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
       batch     up     infinite 2/6/0/8        adev[8-15]
       debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

       > sinfo --long --partition=debug
       PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
       debug*    up        30:00        8 no   no       all        8 idle  dev[0-7]

       Report only those nodes that are in state DRAINED:

       > sinfo --states=drained
       PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
       debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

       > sinfo -Nel
       NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
       adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
       adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
       adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
       adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
       adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason field:

       > sinfo -R
       REASON                              NODELIST
       Memory errors                       dev[0,5]
       Not Responding                      dev8

COPYING
       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2017 SchedMD LLC.

       This file is part of Slurm, a resource management program.  For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at
       your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
       for more details.

SEE ALSO
       scontrol(1), squeue(1), slurm_load_ctl_conf (3), slurm_load_jobs (3),
       slurm_load_node (3), slurm_load_partitions (3), slurm_reconfigure (3),
       slurm_shutdown (3), slurm_update_job (3), slurm_update_node (3),
       slurm_update_partition (3), slurm.conf(5)



October 2020                    Slurm Commands                        sinfo(1)