sinfo(1)                        Slurm Commands                        sinfo(1)


NAME

       sinfo - View information about Slurm nodes and partitions.


SYNOPSIS

       sinfo [OPTIONS...]

DESCRIPTION

       sinfo is used to view partition and node information for a system
       running Slurm.


OPTIONS

       -a, --all
              Display information about all partitions. This causes
              information to be displayed about partitions that are
              configured as hidden and partitions that are unavailable to
              the user's group.

       -d, --dead
              If set, only report state information for non-responding
              (dead) nodes.

       -e, --exact
              If set, do not group node information on multiple nodes
              unless their configurations to be reported are identical.
              Otherwise cpu count, memory size, and disk space for nodes
              will be listed with the minimum value followed by a "+" for
              nodes with the same partition and state (e.g. "250+").

       --federation
              Show all partitions from the federation if a member of one.

       -h, --noheader
              Do not print a header on the output.

       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions.
              Partitions that are configured as hidden or are not
              available to the user's group will not be displayed. This
              is the default behavior.

       -i <seconds>, --iterate=<seconds>
              Print the state on a periodic basis.  Sleep for the
              indicated number of seconds between reports.  By default
              prints a time stamp with the header.

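              For example, to redisplay the default report once per
              minute:

                     $ sinfo --iterate=60
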
       --local
              Show only information local to this cluster. Ignore other
              clusters in this federation (if any). Overrides
              --federation.

       -l, --long
              Print more detailed information.  This is ignored if the
              --format option is specified.

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may
              be comma separated.  A value of 'all' will query all
              clusters. Note that the SlurmDBD must be up for this option
              to work properly.  This option implicitly sets the --local
              option.

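              For example, to report every cluster known to the SlurmDBD,
              or only two named clusters (the cluster names here are
              illustrative):

                     $ sinfo --clusters=all
                     $ sinfo -M cluster1,cluster2
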
       -n <nodes>, --nodes=<nodes>
              Print information about the specified node(s).  Multiple
              nodes may be comma separated or expressed using a node
              range expression (e.g. "linux[00-17]").  Limiting the query
              to just the relevant nodes can measurably improve the
              performance of the command for large clusters.

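              For example, using the node range expression shown above:

                     $ sinfo --nodes=linux[00-17]
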
       --noconvert
              Don't convert units from their original type (e.g. 2048M
              won't be converted to 2G).

       -N, --Node
              Print information in a node-oriented format with one line
              per node and partition. That is, if a node belongs to more
              than one partition, then one line for each node-partition
              pair will be shown.  If --partition is also specified, then
              only one line per node in this partition is shown.  The
              default is to print information in a partition-oriented
              format.  This is ignored if the --format option is
              specified.

       -o <output_format>, --format=<output_format>
              Specify the information to be displayed using an sinfo
              format string.  If the command is executed in a federated
              cluster environment, information about more than one
              cluster is to be displayed, and the -h, --noheader option
              is used, then the cluster name will be displayed before the
              default output formats shown below.  Format strings
              transparently used by sinfo when running with various
              options are:

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D
                             %.11T %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w
                             %.8f %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"


              In the above format strings, the use of "#" represents the
              maximum length of any partition name or node list to be
              printed.  A pass is made over the records to be printed to
              establish the size in order to align the sinfo output, then
              a second pass is made over the records to print them.  Note
              that the literal character "#" itself is not a valid field
              length specification, but is only used to document this
              behavior.


              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,
                        whatever is needed to print the information will
                        be used.

                 .      Indicates the output should be right justified
                        and size must be specified.  By default output is
                        left justified.

                 suffix Arbitrary string to append to the end of the
                        field.


              Valid type specifications include:

              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    State/availability of a partition.

              %A    Number of nodes by state in the format
                    "allocated/idle".  Do not use this with a node state
                    option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              %b    Features currently active on the nodes, also see %f.

              %B    The max number of CPUs per node available to jobs in
                    the partition.

              %c    Number of CPUs per node.

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different
                    node states will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes.

              %D    Number of nodes.

              %e    Free memory of a node.

              %E    The reason a node is unavailable (down, drained, or
                    draining states).

              %f    Features available on the nodes, also see %b.

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total".  Note that the use of
                    this format option with a node state format option
                    ("%t" or "%T") will result in the different node
                    states being reported on separate lines.

              %g    Groups which may use the nodes.

              %G    Generic resources (gres) associated with the nodes.

              %h    Print the OverSubscribe setting for the partition.

              %H    Print the timestamp of the reason a node is
                    unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format
                    "days-hours:minutes:seconds".

              %L    Default time for any job in the format
                    "days-hours:minutes:seconds".

              %m    Size of memory per node in megabytes.

              %M    PreemptionMode.

              %n    List of node hostnames.

              %N    List of node names.

              %o    List of node communication addresses.

              %O    CPU load of a node.

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default
                    partition, also see %R.

              %r    Only user root may initiate jobs, "yes" or "no".

              %R    Partition name, also see %P.

              %s    Maximum job size in nodes.

              %S    Allowed allocating nodes.

              %t    State of nodes, compact form.

              %T    State of nodes, extended form.

              %u    Print the user name of who set the reason a node is
                    unavailable.

              %U    Print the user name and uid of who set the reason a
                    node is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation.

              %w    Scheduling weight of the nodes.

              %X    Number of sockets per node.

              %Y    Number of cores per socket.

              %Z    Number of threads per core.

              %z    Extended processor information: number of sockets,
                    cores, threads (S:C:T) per node.

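              For example, an illustrative format string (not one of the
              built-ins listed above) that right justifies a ten-character
              partition column and a six-character node count:

                     $ sinfo -o "%.10P %.5a %.10l %.6D %.6t %N"
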
       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed.  Also see the
              -o <output_format>, --format=<output_format> option (which
              supports greater flexibility in formatting, but does not
              support access to all fields because we ran out of
              letters).  Requests a comma separated list of fields to be
              displayed.


              The format of each field is "type[:[.][size][suffix]]"

                 size   The minimum field size.  If no size is specified,
                        20 characters will be allocated to print the
                        information.

                 .      Indicates the output should be right justified
                        and size must be specified.  By default, output
                        is left justified.

                 suffix Arbitrary string to append to the end of the
                        field.


              Valid type specifications include:

              All    Print all fields available in the -o format for this
                     data type with a vertical bar separating each field.

              AllocMem
                     Prints the amount of allocated memory on a node.

              AllocNodes
                     Allowed allocating nodes.

              Available
                     State/availability of a partition.

              Cluster
                     Print the cluster name if running in a federation.

              Comment
                     Comment. (Arbitrary descriptive string)

              Cores  Number of cores per socket.

              CPUs   Number of CPUs per node.

              CPUsLoad
                     CPU load of a node.

              CPUsState
                     Number of CPUs by state in the format
                     "allocated/idle/other/total". Do not use this with a
                     node state option ("StateCompact" or "StateLong") or
                     the different node states will be placed on separate
                     lines.

              DefaultTime
                     Default time for any job in the format
                     "days-hours:minutes:seconds".

              Disk   Size of temporary disk space per node in megabytes.

              Features
                     Features available on the nodes. Also see
                     features_act.

              features_act
                     Features currently active on the nodes. Also see
                     Features.

              FreeMem
                     Free memory of a node.

              Gres   Generic resources (gres) associated with the nodes.

              GresUsed
                     Generic resources (gres) currently in use on the
                     nodes.

              Groups Groups which may use the nodes.

              MaxCPUsPerNode
                     The max number of CPUs per node available to jobs in
                     the partition.

              Memory Size of memory per node in megabytes.

              NodeAddr
                     List of node communication addresses.

              NodeAI Number of nodes by state in the format
                     "allocated/idle".  Do not use this with a node state
                     option ("StateCompact" or "StateLong") or the
                     different node states will be placed on separate
                     lines.

              NodeAIOT
                     Number of nodes by state in the format
                     "allocated/idle/other/total".  Do not use this with
                     a node state option ("StateCompact" or "StateLong")
                     or the different node states will be placed on
                     separate lines.

              NodeHost
                     List of node hostnames.

              NodeList
                     List of node names.

              Nodes  Number of nodes.

              OverSubscribe
                     Whether jobs may oversubscribe compute resources
                     (e.g. CPUs).

              Partition
                     Partition name followed by "*" for the default
                     partition, also see PartitionName.

              PartitionName
                     Partition name, also see Partition.

              Port   Node TCP port.

              PreemptMode
                     Preemption mode.

              PriorityJobFactor
                     Partition factor used by priority/multifactor plugin
                     in calculating job priority.

              PriorityTier or Priority
                     Partition scheduling tier priority.

              Reason The reason a node is unavailable (down, drained, or
                     draining states).

              Root   Only user root may initiate jobs, "yes" or "no".

              Size   Maximum job size in nodes.

              SocketCoreThread
                     Extended processor information: number of sockets,
                     cores, threads (S:C:T) per node.

              Sockets
                     Number of sockets per node.

              StateCompact
                     State of nodes, compact form.

              StateLong
                     State of nodes, extended form.

              Threads
                     Number of threads per core.

              Time   Maximum time for any job in the format
                     "days-hours:minutes:seconds".

              TimeStamp
                     Print the timestamp of the reason a node is
                     unavailable.

              User   Print the user name of who set the reason a node is
                     unavailable.

              UserLong
                     Print the user name and uid of who set the reason a
                     node is unavailable.

              Version
                     Print the version of the running slurmd daemon.

              Weight Scheduling weight of the nodes.

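              For example, an illustrative field list built from the
              types above, giving NodeList a 20-character column and
              right justifying the memory size:

                     $ sinfo -O "NodeList:20,Memory:.10,StateLong"
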
       -p <partition>, --partition=<partition>
              Print information only about the specified partition(s).
              Multiple partitions are separated by commas.

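              For example (the partition names match those used in the
              EXAMPLES section below):

                     $ sinfo --partition=batch,debug
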
       -r, --responding
              If set, only report state information for responding nodes.

       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or
              failing state.  When nodes are in these states Slurm
              supports the inclusion of a "reason" string set by an
              administrator.  This option will display the first 20
              characters of the reason field and the list of nodes with
              that reason for all nodes that are, by default, down,
              drained, draining or failing.  This option may be used with
              other node filtering options (e.g. -r, -d, -t, -n);
              however, combinations of these options that result in a
              list of nodes that are not down, drained or failing will
              not produce any output.  When used with -l the output
              additionally includes the current node state.

       -s, --summarize
              List only a partition state summary with no node state
              details.  This is ignored if the --format option is
              specified.

       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be
              reported.  This uses the same field specification as the
              <output_format>.  Multiple sorts may be performed by
              listing multiple sort fields separated by commas.  The
              field specifications may be preceded by "+" or "-" for
              ascending (default) and descending order respectively.  The
              partition field specification, "P", may be preceded by a
              "#" to report partitions in the same order that they appear
              in Slurm's configuration file, slurm.conf.  For example, a
              sort value of "+P,-m" requests that records be printed in
              order of increasing partition name and within a partition
              by decreasing memory size.  The default sort value is
              "#P,-t" (partitions ordered as configured, then decreasing
              node state).  If the --Node option is selected, the default
              sort value is "N" (increasing node name).

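              For example, to combine the sort value described above with
              the long report:

                     $ sinfo --long --sort="+P,-m"
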
       -t <states>, --states=<states>
              List only nodes having the given state(s).  Multiple states
              may be comma separated and the comparison is case
              insensitive.  If the states are separated by '&', then the
              nodes must be in all of the given states.  Possible values
              include (case insensitive): ALLOC, ALLOCATED, CLOUD, COMP,
              COMPLETING, DOWN, DRAIN (for node in DRAINING or DRAINED
              states), DRAINED, DRAINING, FAIL, FUTURE, FUTR, IDLE,
              MAINT, MIX, MIXED, NO_RESPOND, NPC, PERFCTRS, POWER_DOWN,
              POWERING_DOWN, POWER_UP, RESV, RESERVED, UNK, and UNKNOWN.
              By default nodes in the specified state are reported
              whether they are responding or not.  The --dead and
              --responding options may be used to filter nodes by the
              corresponding flag.

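              For example, to list idle nodes, or nodes that are both
              allocated and not responding (quote the '&' form to protect
              it from the shell):

                     $ sinfo --states=idle
                     $ sinfo --states='alloc&no_respond'
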
       -T, --reservation
              Only display information about Slurm reservations.

              NOTE: This option causes sinfo to ignore most other
              options, which are focused on partition and node
              information.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.

OUTPUT FIELD DESCRIPTIONS

       AVAIL  Partition state. Can be either up, down, drain, or inact
              (for INACTIVE). See the partition definition's State
              parameter in the slurm.conf(5) man page for more
              information.

       CPUS   Count of CPUs (processors) on these nodes.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these
              nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to
              the named groups.  all indicates that all groups may use
              this partition.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any
              user job.  A single number indicates the minimum and
              maximum node count are the same.  infinite is used to
              identify partitions without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in
              days-hours:minutes:seconds.  infinite is used to identify
              partitions without a job time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this particular
              configuration.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle/other/total".

       PARTITION
              Name of a partition.  Note that the suffix "*" identifies
              the default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Is the ability to allocate resources in this partition
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Whether jobs allocated resources in this partition can/will
              oversubscribe those compute resources (e.g. CPUs).  NO
              indicates resources are never oversubscribed.  EXCLUSIVE
              indicates whole nodes are dedicated to jobs (equivalent to
              the srun --exclusive option; may be used even with
              select/cons_res managing individual processors).  FORCE
              indicates resources are always available to be
              oversubscribed.  YES indicates resources may be
              oversubscribed, if requested by the job's resource
              allocation.

              NOTE: If OverSubscribe is set to FORCE or YES, the
              OverSubscribe value will be appended to the output.

       STATE  State of the nodes.  Possible states include: allocated,
              completing, down, drained, draining, fail, failing, future,
              idle, maint, mixed, perfctrs, power_down, power_up,
              reserved, and unknown.  Their abbreviated forms are: alloc,
              comp, down, drain, drng, fail, failg, futr, idle, maint,
              mix, npc, pow_dn, pow_up, resv, and unk, respectively.

              NOTE: The suffix "*" identifies nodes that are presently
              not responding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.

NODE STATE CODES

       Node state codes are shortened as required for the field size.
       These node states may be followed by a special character to
       identify state flags associated with the node.  The following node
       suffixes and states are used:

       *   The node is presently not responding and will not be allocated
           any new work.  If the node remains non-responsive, it will be
           placed in the DOWN state (except in the case of COMPLETING,
           DRAINED, DRAINING, FAIL, FAILING nodes).

       ~   The node is presently in a power saving mode (typically
           running at reduced frequency).

       #   The node is presently being powered up or configured.

       %   The node is presently being powered down.

       $   The node is currently in a reservation with a flag value of
           "maintenance".

       @   The node is pending reboot.

       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus
                   one or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process
                   of COMPLETING.  This node state will be removed when
                   all of the job's processes have terminated and the
                   Slurm epilog program (if any) has terminated. See the
                   Epilog parameter description in the slurm.conf(5) man
                   page for more information.

       DOWN        The node is unavailable for use.  Slurm can
                   automatically place nodes in this state if some
                   failure occurs. System administrators may also
                   explicitly place nodes in this state.  If a node
                   resumes normal operation, Slurm can automatically
                   return it to service. See the ReturnToService and
                   SlurmdTimeout parameter descriptions in the
                   slurm.conf(5) man page for more information.

       DRAINED     The node is unavailable for use per system
                   administrator request.  See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       DRAINING    The node is currently executing a job, but will not be
                   allocated additional jobs. The node state will be
                   changed to state DRAINED when the last job on it
                   completes. Nodes enter this state per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FAIL        The node is expected to fail soon and is unavailable
                   for use per system administrator request.  See the
                   update node command in the scontrol(1) man page or the
                   slurm.conf(5) man page for more information.

       FAILING     The node is currently executing a job, but is expected
                   to fail soon and is unavailable for use per system
                   administrator request.  See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FUTURE      The node is currently not fully configured, but is
                   expected to be available for use at some point in the
                   indefinite future.

       IDLE        The node is not allocated to any jobs and is available
                   for use.

       MAINT       The node is currently in a reservation with a flag
                   value of "maintenance".

       REBOOT      The node is currently scheduled to be rebooted.

       MIXED       The node has some of its CPUs ALLOCATED while others
                   are IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node
                   are in use, rendering this node as not usable for any
                   other jobs.

       POWER_DOWN  The node is currently powered down and not capable of
                   running any jobs.

       POWERING_DOWN
                   The node is in the process of powering down and not
                   capable of running any jobs.

       POWER_UP    The node is in the process of being powered up.

       RESERVED    The node is in an advanced reservation and not
                   generally available.

       UNKNOWN     The Slurm controller has just started and the node's
                   state has not yet been determined.

PERFORMANCE

       Executing sinfo sends a remote procedure call to slurmctld.  If
       enough calls from sinfo or other Slurm client commands that send
       remote procedure calls to the slurmctld daemon come in at once, it
       can result in a degradation of performance of the slurmctld
       daemon, possibly resulting in a denial of service.

       Do not run sinfo or other Slurm client commands that send remote
       procedure calls to slurmctld from loops in shell scripts or other
       programs.  Ensure that programs limit calls to sinfo to the
       minimum necessary for the information you are trying to gather.
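
       When a periodically refreshed report is needed, prefer a single
       sinfo invocation with the --iterate option described above over
       wrapping sinfo in a shell loop; one long-lived invocation avoids
       repeatedly spawning new client processes, for example:

              # avoid: while true; do sinfo; sleep 60; done
              $ sinfo --iterate=60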

ENVIRONMENT VARIABLES

       Some sinfo options may be set via environment variables. These
       environment variables, along with their corresponding options, are
       listed below.  NOTE: Command line options will always override
       these settings.

       SINFO_ALL           Same as -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        Same as -o <output_format>,
                           --format=<output_format>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     Same as -p <partition>,
                           --partition=<partition>

       SINFO_SORT          Same as -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.
                           A value of standard, the default value,
                           generates output in the form
                           "year-month-dateThour:minute:second".  A value
                           of relative returns only "hour:minute:second"
                           for time stamps on the current day.  For other
                           dates in the current year it prints the
                           "hour:minute" preceded by "Tomorr" (tomorrow),
                           "Ystday" (yesterday), the name of the day for
                           the coming week (e.g. "Mon", "Tue", etc.), and
                           otherwise the date (e.g. "25 Apr").  For other
                           years it returns a date, month and year
                           without a time (e.g. "6 Jun 2012"). All of the
                           time stamps use a 24 hour format.

                           A valid strftime() format can also be
                           specified.  For example, a value of "%a %T"
                           will report the day of the week and a time
                           stamp (e.g. "Mon 12:34:56").
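
       For example, a default format can be set once in the environment
       and still be overridden for a single run from the command line
       (the format string here is illustrative):

              $ export SINFO_FORMAT="%.10P %.6D %.6t %N"
              $ sinfo                  # uses SINFO_FORMAT
              $ sinfo -o "%P %N"       # command line overrides the variable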

EXAMPLES

       Report basic node and partition configurations:

              $ sinfo
              PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
              batch     up     infinite     2 alloc  adev[8-9]
              batch     up     infinite     6 idle   adev[10-15]
              debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

              $ sinfo -s
              PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
              batch     up     infinite 2/6/0/8        adev[8-15]
              debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

              $ sinfo --long --partition=debug
              PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
              debug*    up        30:00        8 no   no       all        8 idle  adev[0-7]

       Report only those nodes that are in state DRAINED:

              $ sinfo --states=drained
              PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
              debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

              $ sinfo -Nel
              NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
              adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
              adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
              adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
              adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
              adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason
       field:

              $ sinfo -R
              REASON                              NODELIST
              Memory errors                       adev[0,5]
              Not Responding                      adev8

COPYING

       Copyright (C) 2002-2007 The Regents of the University of
       California.  Produced at Lawrence Livermore National Laboratory
       (cf, DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2017 SchedMD LLC.

       This file is part of Slurm, a resource management program.  For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.

SEE ALSO

       scontrol(1), squeue(1), slurm_load_ctl_conf(3), slurm_load_jobs(3),
       slurm_load_node(3), slurm_load_partitions(3), slurm_reconfigure(3),
       slurm_shutdown(3), slurm_update_job(3), slurm_update_node(3),
       slurm_update_partition(3), slurm.conf(5)

April 2021                      Slurm Commands                        sinfo(1)