sinfo(1)                        Slurm Commands                        sinfo(1)


NAME
       sinfo - View information about Slurm nodes and partitions.


SYNOPSIS
       sinfo [OPTIONS...]


DESCRIPTION
       sinfo is used to view partition and node information for a system
       running Slurm.


OPTIONS

       -a, --all
              Display information about all partitions. This causes
              information to be displayed about partitions that are
              configured as hidden and partitions that are unavailable to
              the user's group.

       -M, --clusters=<string>
              Clusters to issue commands to. Multiple cluster names may be
              comma separated. A value of 'all' will query all clusters.
              Note that the SlurmDBD must be up for this option to work
              properly. This option implicitly sets the --local option.

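              For example (the cluster names below are illustrative),
              information can be requested from specific clusters or from
              every cluster known to the SlurmDBD:

                     # Query two specific clusters (hypothetical names)
                     $ sinfo --clusters=cluster1,cluster2

                     # Query every cluster registered with the SlurmDBD
                     $ sinfo -M all
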
       -d, --dead
              If set, only report state information for non-responding
              (dead) nodes.

       -e, --exact
              If set, do not group node information from multiple nodes
              unless the configurations to be reported are identical.
              Otherwise the CPU count, memory size, and disk space for
              nodes will be listed with the minimum value followed by a "+"
              for nodes with the same partition and state (e.g. "250+").

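              As an illustration, compare grouped output with exact,
              per-configuration output:

                     # Group similar nodes; minimum values may carry a "+"
                     $ sinfo -o "%P %.6D %.5c %.8m %N"

                     # One line per distinct cpu/memory/disk configuration
                     $ sinfo --exact -o "%P %.6D %.5c %.8m %N"
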
       --federation
              Show all partitions from the federation if a member of one.

       -o, --format=<output_format>
              Specify the information to be displayed using an sinfo format
              string. If the command is executed in a federated cluster
              environment and information about more than one cluster is to
              be displayed and the -h, --noheader option is used, then the
              cluster name will be displayed before the default output
              formats shown below. Format strings transparently used by
              sinfo when running with various options are:

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F  %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D
                             %.11T %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w
                             %.8f %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"

              In the above format strings, the use of "#" represents the
              maximum length of any partition name or node list to be
              printed. A pass is made over the records to be printed to
              establish the size in order to align the sinfo output, then a
              second pass is made over the records to print them. Note that
              the literal character "#" itself is not a valid field length
              specification, but is only used to document this behaviour.

              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,
                        whatever is needed to print the information will be
                        used.

                 .      Indicates the output should be right justified and
                        size must be specified. By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

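              For example, "%.10P" right-justifies the partition name in a
              field at least 10 characters wide; the remaining fields below
              mirror the default format and are shown purely as an
              illustration:

                     $ sinfo -o "%.10P %.5a %.10l %.6D %.6t %N"
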
       Valid type specifications include:

              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    State/availability of a partition.

              %A    Number of nodes by state in the format
                    "allocated/idle". Do not use this with a node state
                    option ("%t" or "%T") or the different node states will
                    be placed on separate lines.

              %b    Features currently active on the nodes, also see %f.

              %B    The max number of CPUs per node available to jobs in
                    the partition.

              %c    Number of CPUs per node.

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a
                    node state option ("%t" or "%T") or the different node
                    states will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes.

              %D    Number of nodes.

              %e    Free memory of a node.

              %E    The reason a node is unavailable (down, drained, or
                    draining states).

              %f    Features available on the nodes, also see %b.

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total". Note the use of this
                    format option with a node state format option ("%t" or
                    "%T") will result in the different node states being
                    reported on separate lines.

              %g    Groups which may use the nodes.

              %G    Generic resources (gres) associated with the nodes.

              %h    Print the OverSubscribe setting for the partition.

              %H    Print the timestamp of the reason a node is
                    unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format
                    "days-hours:minutes:seconds".

              %L    Default time for any job in the format
                    "days-hours:minutes:seconds".

              %m    Size of memory per node in megabytes.

              %M    PreemptionMode.

              %n    List of node hostnames.

              %N    List of node names.

              %o    List of node communication addresses.

              %O    CPU load of a node.

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default
                    partition, also see %R.

              %r    Only user root may initiate jobs, "yes" or "no".

              %R    Partition name, also see %P.

              %s    Maximum job size in nodes.

              %S    Allowed allocating nodes.

              %t    State of nodes, compact form.

              %T    State of nodes, extended form.

              %u    Print the user name of who set the reason a node is
                    unavailable.

              %U    Print the user name and uid of who set the reason a
                    node is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation.

              %w    Scheduling weight of the nodes.

              %X    Number of sockets per node.

              %Y    Number of cores per socket.

              %Z    Number of threads per core.

              %z    Extended processor information: number of sockets,
                    cores, threads (S:C:T) per node.

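              Combining several of the type specifications above, a custom
              per-partition summary might look like the following (the
              field selection is illustrative):

                     # Partition, time limit, node count, state, S:C:T and
                     # node list
                     $ sinfo -o "%R %.11l %.6D %.6t %.8z %N"
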
       -O, --Format=<output_format>
              Specify the information to be displayed. Also see the -o
              <output_format>, --format=<output_format> option (which
              supports greater flexibility in formatting, but does not
              support access to all fields because we ran out of letters).
              Requests a comma separated list of node and partition
              information to be displayed.

              The format of each field is "type[:[.][size][suffix]]"

                 size   The minimum field size. If no size is specified, 20
                        characters will be allocated to print the
                        information.

                 .      Indicates the output should be right justified and
                        size must be specified. By default, output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

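              For example, a size and right-justification may follow each
              type name after a colon (the selection below is
              illustrative):

                     # Right-justified partition name (15 chars), node list
                     # (30 chars), compact state and CPU state counts
                     $ sinfo -O "Partition:.15,NodeList:30,StateCompact,CPUsState"
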
       Valid type specifications include:

              All    Print all fields available in the -o format for this
                     data type with a vertical bar separating each field.

              AllocMem
                     Prints the amount of allocated memory on a node.

              AllocNodes
                     Allowed allocating nodes.

              Available
                     State/availability of a partition.

              Cluster
                     Print the cluster name if running in a federation.

              Comment
                     Comment. (Arbitrary descriptive string)

              Cores  Number of cores per socket.

              CPUs   Number of CPUs per node.

              CPUsLoad
                     CPU load of a node.

              CPUsState
                     Number of CPUs by state in the format
                     "allocated/idle/other/total". Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              DefaultTime
                     Default time for any job in the format
                     "days-hours:minutes:seconds".

              Disk   Size of temporary disk space per node in megabytes.

              Extra  Arbitrary string on the node.

              Features
                     Features available on the nodes. Also see
                     features_act.

              features_act
                     Features currently active on the nodes. Also see
                     Features.

              FreeMem
                     Free memory of a node.

              Gres   Generic resources (gres) associated with the nodes.

              GresUsed
                     Generic resources (gres) currently in use on the
                     nodes.

              Groups Groups which may use the nodes.

              MaxCPUsPerNode
                     The max number of CPUs per node available to jobs in
                     the partition.

              Memory Size of memory per node in megabytes.

              NodeAddr
                     List of node communication addresses.

              NodeAI Number of nodes by state in the format
                     "allocated/idle". Do not use this with a node state
                     option ("%t" or "%T") or the different node states
                     will be placed on separate lines.

              NodeAIOT
                     Number of nodes by state in the format
                     "allocated/idle/other/total". Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              NodeHost
                     List of node hostnames.

              NodeList
                     List of node names.

              Nodes  Number of nodes.

              OverSubscribe
                     Whether jobs may oversubscribe compute resources (e.g.
                     CPUs).

              Partition
                     Partition name followed by "*" for the default
                     partition, also see %R.

              PartitionName
                     Partition name, also see %P.

              Port   Node TCP port.

              PreemptMode
                     Preemption mode.

              PriorityJobFactor
                     Partition factor used by priority/multifactor plugin
                     in calculating job priority.

              PriorityTier or Priority
                     Partition scheduling tier priority.

              Reason The reason a node is unavailable (down, drained, or
                     draining states).

              Root   Only user root may initiate jobs, "yes" or "no".

              Size   Maximum job size in nodes.

              SocketCoreThread
                     Extended processor information: number of sockets,
                     cores, threads (S:C:T) per node.

              Sockets
                     Number of sockets per node.

              StateCompact
                     State of nodes, compact form.

              StateLong
                     State of nodes, extended form.

              StateComplete
                     State of nodes, including all node state flags, e.g.
                     "idle+cloud+power".

              Threads
                     Number of threads per core.

              Time   Maximum time for any job in the format
                     "days-hours:minutes:seconds".

              TimeStamp
                     Print the timestamp of the reason a node is
                     unavailable.

              User   Print the user name of who set the reason a node is
                     unavailable.

              UserLong
                     Print the user name and uid of who set the reason a
                     node is unavailable.

              Version
                     Print the version of the running slurmd daemon.

              Weight Scheduling weight of the nodes.

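              A node-oriented report built from the type specifications
              above might look like the following (the selection is
              illustrative):

                     # One line per node with load, free memory and state
                     $ sinfo -N -O "NodeList:.18,CPUsLoad,FreeMem,StateLong"
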
       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions.
              Partitions that are configured as hidden or are not available
              to the user's group will not be displayed. This is the
              default behavior.

       -i, --iterate=<seconds>
              Print the state on a periodic basis. Sleep for the indicated
              number of seconds between reports. By default prints a time
              stamp with the header.

       --json Dump node information as JSON. All other formatting and
              filtering arguments will be ignored.

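              The JSON output is convenient for post-processing with
              external tools. For example, assuming the jq utility is
              installed (the exact key layout depends on the Slurm
              version):

                     # Pretty-print the full JSON document
                     $ sinfo --json | jq .
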
       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or failing
              state. When nodes are in these states Slurm supports the
              inclusion of a "reason" string by an administrator. This
              option will display the first 20 characters of the reason
              field and the list of nodes with that reason for all nodes
              that are, by default, down, drained, draining or failing.
              This option may be used with other node filtering options
              (e.g. -r, -d, -t, -n), however, combinations of these options
              that result in a list of nodes that are not down or drained
              or failing will not produce any output. When used with -l the
              output additionally includes the current node state.

       --local
              Show only information local to this cluster. Ignore other
              clusters in this federation (if any). Overrides --federation.

       -l, --long
              Print more detailed information. This is ignored if the
              --format option is specified.

       --noconvert
              Don't convert units from their original type (e.g. 2048M
              won't be converted to 2G).

       -N, --Node
              Print information in a node-oriented format with one line per
              node and partition. That is, if a node belongs to more than
              one partition, then one line for each node-partition pair
              will be shown. If --partition is also specified, then only
              one line per node in this partition is shown. The default is
              to print information in a partition-oriented format. This is
              ignored if the --format option is specified.

       -n, --nodes=<nodes>
              Print information about the specified node(s). Multiple nodes
              may be comma separated or expressed using a node range
              expression (e.g. "linux[00-17]"). Limiting the query to just
              the relevant nodes can measurably improve the performance of
              the command for large clusters.

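              For example, using the node range expression from above (the
              node names are illustrative):

                     # Restrict the query to 18 specific nodes
                     $ sinfo --nodes="linux[00-17]"

                     # The same restriction, node-oriented and long
                     $ sinfo -N -l -n "linux[00-17]"
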
       -h, --noheader
              Do not print a header on the output.

       -p, --partition=<partition>
              Print information only about the specified partition(s).
              Multiple partitions are separated by commas.

       -T, --reservation
              Only display information about Slurm reservations.

              NOTE: This option causes sinfo to ignore most other options,
              which are focused on partition and node information.

       -r, --responding
              If set, only report state information for responding nodes.

       -S, --sort=<sort_list>
              Specification of the order in which records should be
              reported. This uses the same field specification as the
              <output_format>. Multiple sorts may be performed by listing
              multiple sort fields separated by commas. The field
              specifications may be preceded by "+" or "-" for ascending
              (default) and descending order respectively. The partition
              field specification, "P", may be preceded by a "#" to report
              partitions in the same order that they appear in Slurm's
              configuration file, slurm.conf. For example, a sort value of
              "+P,-m" requests that records be printed in order of
              increasing partition name and within a partition by
              decreasing memory size. The default value of sort is "#P,-t"
              (partitions ordered as configured then decreasing node
              state). If the --Node option is selected, the default sort
              value is "N" (increasing node name).

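              For example:

                     # Increasing partition name, then decreasing memory
                     $ sinfo --sort="+P,-m"

                     # Node-oriented output sorted by increasing node name
                     $ sinfo -N -S N
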
       -t, --states=<states>
              List nodes only having the given state(s). Multiple states
              may be comma separated and the comparison is case
              insensitive. If the states are separated by '&', then the
              nodes must be in all states. Possible values include (case
              insensitive): ALLOC, ALLOCATED, CLOUD, COMP, COMPLETING,
              DOWN, DRAIN (for node in DRAINING or DRAINED states),
              DRAINED, DRAINING, FAIL, FUTURE, FUTR, IDLE, MAINT, MIX,
              MIXED, NO_RESPOND, NPC, PERFCTRS, PLANNED, POWER_DOWN,
              POWERING_DOWN, POWERED_DOWN, POWERING_UP, REBOOT_ISSUED,
              REBOOT_REQUESTED, RESV, RESERVED, UNK, and UNKNOWN. By
              default nodes in the specified state are reported whether
              they are responding or not. The --dead and --responding
              options may be used to filter nodes by the corresponding
              flag.

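              For example:

                     # Nodes that are drained/draining or down
                     $ sinfo -t drain,down

                     # Nodes that are in both the IDLE and CLOUD states
                     $ sinfo -t "idle&cloud"
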
       -s, --summarize
              List only a partition state summary with no node state
              details. This is ignored if the --format option is specified.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.

       --yaml Dump node information as YAML. All other formatting and
              filtering arguments will be ignored.


OUTPUT FIELD DESCRIPTIONS

       AVAIL  Partition state. Can be either up, down, drain, or inact (for
              INACTIVE). See the partition definition's State parameter in
              the slurm.conf(5) man page for more information.

       CPUS   Count of CPUs (processors) on these nodes.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these
              nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to the
              named groups. all indicates that all groups may use this
              partition.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any
              user job. A single number indicates the minimum and maximum
              node count are the same. infinite is used to identify
              partitions without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in
              days-hours:minutes:seconds. infinite is used to identify
              partitions without a job time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this particular configuration.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node
              state in the form "allocated/idle/other/total".

       PARTITION
              Name of a partition. Note that the suffix "*" identifies the
              default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Is the ability to allocate resources in this partition
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Whether jobs allocated resources in this partition can/will
              oversubscribe those compute resources (e.g. CPUs). NO
              indicates resources are never oversubscribed. EXCLUSIVE
              indicates whole nodes are dedicated to jobs (equivalent to
              the srun --exclusive option, may be used even with
              select/cons_res managing individual processors). FORCE
              indicates resources are always available to be
              oversubscribed. YES indicates resources may be
              oversubscribed, if requested by the job's resource
              allocation.

              NOTE: If OverSubscribe is set to FORCE or YES, the
              OverSubscribe value will be appended to the output.

       STATE  State of the nodes. Possible states include: allocated,
              completing, down, drained, draining, fail, failing, future,
              idle, maint, mixed, perfctrs, planned, power_down, power_up,
              reserved, and unknown. Their abbreviated forms are: alloc,
              comp, down, drain, drng, fail, failg, futr, idle, maint, mix,
              npc, plnd, pow_dn, pow_up, resv, and unk respectively.

              NOTE: The suffix "*" identifies nodes that are presently not
              responding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.


NODE STATE CODES

       Node state codes are shortened as required for the field size. These
       node states may be followed by a special character to identify state
       flags associated with the node. The following node suffixes and
       states are used:

       *   The node is presently not responding and will not be allocated
           any new work. If the node remains non-responsive, it will be
           placed in the DOWN state (except in the case of COMPLETING,
           DRAINED, DRAINING, FAIL, FAILING nodes).

       ~   The node is presently powered off.

       #   The node is presently being powered up or configured.

       !   The node is pending power down.

       %   The node is presently being powered down.

       $   The node is currently in a reservation with a flag value of
           "maintenance".

       @   The node is pending reboot.

       ^   The node reboot was issued.

       -   The node is planned by the backfill scheduler for a higher
           priority job.

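       These flags appear as a suffix on the compact node state (the STATE
       column). The full set of flags can also be spelled out explicitly;
       a minimal illustration:

              # Print each node's state with all state flags spelled out
              $ sinfo -N -O "NodeList,StateComplete"
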
       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus
                   one or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process of
                   COMPLETING. This node state will be removed when all of
                   the job's processes have terminated and the Slurm epilog
                   program (if any) has terminated. See the Epilog
                   parameter description in the slurm.conf(5) man page for
                   more information.

       DOWN        The node is unavailable for use. Slurm can automatically
                   place nodes in this state if some failure occurs. System
                   administrators may also explicitly place nodes in this
                   state. If a node resumes normal operation, Slurm can
                   automatically return it to service. See the
                   ReturnToService and SlurmdTimeout parameter descriptions
                   in the slurm.conf(5) man page for more information.

       DRAINED     The node is unavailable for use per system administrator
                   request. See the update node command in the scontrol(1)
                   man page or the slurm.conf(5) man page for more
                   information.

       DRAINING    The node is currently executing a job, but will not be
                   allocated additional jobs. The node state will be
                   changed to state DRAINED when the last job on it
                   completes. Nodes enter this state per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FAIL        The node is expected to fail soon and is unavailable for
                   use per system administrator request. See the update
                   node command in the scontrol(1) man page or the
                   slurm.conf(5) man page for more information.

       FAILING     The node is currently executing a job, but is expected
                   to fail soon and is unavailable for use per system
                   administrator request. See the update node command in
                   the scontrol(1) man page or the slurm.conf(5) man page
                   for more information.

       FUTURE      The node is currently not fully configured, but is
                   expected to be available for use at some point in the
                   indefinite future.

       IDLE        The node is not allocated to any jobs and is available
                   for use.

       INVAL       The node did not register correctly with the controller.
                   This happens when a node registers with fewer resources
                   than configured in the slurm.conf file. The node will
                   clear from this state with a valid registration (i.e. a
                   slurmd restart is required).

       MAINT       The node is currently in a reservation with a flag value
                   of "maintenance".

       REBOOT_ISSUED
                   A reboot request has been sent to the agent configured
                   to handle this request.

       REBOOT_REQUESTED
                   A request to reboot this node has been made, but hasn't
                   been handled yet.

       MIXED       The node has some of its CPUs ALLOCATED while others are
                   IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node
                   are in use, rendering this node as not usable for any
                   other jobs.

       PLANNED     The node is planned by the backfill scheduler for a
                   higher priority job.

       POWER_DOWN  The node is pending power down.

       POWERED_DOWN
                   The node is currently powered down and not capable of
                   running any jobs.

       POWERING_DOWN
                   The node is in the process of powering down and not
                   capable of running any jobs.

       POWERING_UP The node is in the process of being powered up.

       RESERVED    The node is in an advanced reservation and not generally
                   available.

       UNKNOWN     The Slurm controller has just started and the node's
                   state has not yet been determined.


PERFORMANCE

       Executing sinfo sends a remote procedure call to slurmctld. If
       enough calls from sinfo or other Slurm client commands that send
       remote procedure calls to the slurmctld daemon come in at once, it
       can result in a degradation of performance of the slurmctld daemon,
       possibly resulting in a denial of service.

       Do not run sinfo or other Slurm client commands that send remote
       procedure calls to slurmctld from loops in shell scripts or other
       programs. Ensure that programs limit calls to sinfo to the minimum
       necessary for the information you are trying to gather.

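       As a sketch of the difference (the node names are illustrative),
       prefer a single query that covers all nodes of interest over a shell
       loop that issues one remote procedure call per node:

              # Avoid: one slurmctld RPC per loop iteration
              $ for n in node01 node02 node03; do sinfo -h -n "$n"; done

              # Prefer: a single RPC answering the same question
              $ sinfo -h -n "node[01-03]"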

ENVIRONMENT VARIABLES

       Some sinfo options may be set via environment variables. These
       environment variables, along with their corresponding options, are
       listed below.  NOTE: Command line options will always override these
       settings.

       SINFO_ALL           Same as -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        Same as -o <output_format>,
                           --format=<output_format>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     Same as -p <partition>, --partition=<partition>

       SINFO_SORT          Same as -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps. A
                           value of standard, the default value, generates
                           output in the form
                           "year-month-dateThour:minute:second". A value of
                           relative returns only "hour:minute:second" for
                           the current day. For other dates in the current
                           year it prints the "hour:minute" preceded by
                           "Tomorr" (tomorrow), "Ystday" (yesterday), or
                           the name of the day for the coming week (e.g.
                           "Mon", "Tue", etc.), otherwise the date (e.g.
                           "25 Apr"). For other years it returns a date,
                           month and year without a time (e.g. "6 Jun
                           2012"). All of the time stamps use a 24 hour
                           format.

                           A valid strftime() format can also be specified.
                           For example, a value of "%a %T" will report the
                           day of the week and a time stamp (e.g. "Mon
                           12:34:56").
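
       For example, SLURM_TIME_FORMAT can be set for a single invocation to
       use the strftime() form described above; the reason time stamp
       printed by --long --list-reasons is then reported as the weekday
       plus a 24-hour time:

              $ SLURM_TIME_FORMAT="%a %T" sinfo --long --list-reasons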
745

EXAMPLES

       Report basic node and partition configurations:

              $ sinfo
              PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
              batch     up     infinite     2 alloc  adev[8-9]
              batch     up     infinite     6 idle   adev[10-15]
              debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

              $ sinfo -s
              PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
              batch     up     infinite 2/6/0/8        adev[8-15]
              debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

              $ sinfo --long --partition=debug
              PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
              debug*    up        30:00        8 no   no       all        8 idle  dev[0-7]

       Report only those nodes that are in state DRAINED:

              $ sinfo --states=drained
              PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
              debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

              $ sinfo -Nel
              NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
              adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
              adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
              adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
              adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
              adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason field:

              $ sinfo -R
              REASON                              NODELIST
              Memory errors                       dev[0,5]
              Not Responding                      dev8


COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2022 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
       or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
       License for more details.


SEE ALSO

       scontrol(1), squeue(1), slurm_load_ctl_conf(3), slurm_load_jobs(3),
       slurm_load_node(3), slurm_load_partitions(3), slurm_reconfigure(3),
       slurm_shutdown(3), slurm_update_job(3), slurm_update_node(3),
       slurm_update_partition(3), slurm.conf(5)

March 2022                      Slurm Commands                        sinfo(1)