sinfo(1)                        Slurm Commands                        sinfo(1)

NAME

       sinfo - View information about Slurm nodes and partitions.

SYNOPSIS

       sinfo [OPTIONS...]

DESCRIPTION

       sinfo is used to view partition and node information for a system run-
       ning Slurm.

OPTIONS

       -a, --all
              Display information about all partitions. This causes informa-
              tion to be displayed about partitions that are configured as
              hidden and partitions that are unavailable to the user's group.

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may be
              comma separated.  A value of 'all' will query all clusters.
              Note that the SlurmDBD must be up for this option to work
              properly.  This option implicitly sets the --local option.
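
              For example (a minimal sketch; the cluster names are hypothet-
              ical), to query two named clusters, or every cluster known to
              the SlurmDBD:

                     $ sinfo --clusters=cluster1,cluster2
                     $ sinfo -M all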

       -d, --dead
              If set, only report state information for non-responding (dead)
              nodes.

       -e, --exact
              If set, do not group node information for multiple nodes unless
              their configurations to be reported are identical. Otherwise
              the CPU count, memory size, and disk space for nodes will be
              listed with the minimum value followed by a "+" for nodes with
              the same partition and state (e.g. "250+").

       --federation
              Show all partitions from the federation if a member of one.

       -o, --format=<output_format>
              Specify the information to be displayed using an sinfo format
              string.  If the command is executed in a federated cluster en-
              vironment and information about more than one cluster is to be
              displayed and the -h, --noheader option is used, then the clus-
              ter name will be displayed before the default output formats
              shown below.  Format strings transparently used by sinfo when
              running with various options are:

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F  %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D %.11T
                             %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w %.8f
                             %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"

              In the above format strings, the use of "#" represents the max-
              imum length of any partition name or node list to be printed.
              A pass is made over the records to be printed to establish the
              size in order to align the sinfo output, then a second pass is
              made over the records to print them.  Note that the literal
              character "#" itself is not a valid field length specification,
              but is only used to document this behaviour.

              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified, whatever
                        is needed to print the information will be used.

                 .      Indicates the output should be right justified and
                        size must be specified.  By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

       Valid type specifications include:

              %all  Print all fields available for this data type with a ver-
                    tical bar separating each field.

              %a    State/availability of a partition.

              %A    Number of nodes by state in the format "allocated/idle".
                    Do not use this with a node state option ("%t" or "%T") or
                    the different node states will be placed on separate
                    lines.

              %b    Features currently active on the nodes, also see %f.

              %B    The max number of CPUs per node available to jobs in the
                    partition.

              %c    Number of CPUs per node.

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a node
                    state option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes.

              %D    Number of nodes.

              %e    The total memory, in MB, currently free on the node as
                    reported by the OS. This value is for informational use
                    only and is not used for scheduling.

              %E    The reason a node is unavailable (down, drained, or drain-
                    ing states).

              %f    Features available on the nodes, also see %b.

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total".  Note that using this format
                    option with a node state format option ("%t" or "%T") will
                    result in the different node states being reported on sep-
                    arate lines.

              %g    Groups which may use the nodes.

              %G    Generic resources (gres) associated with the nodes.

              %h    Print the OverSubscribe setting for the partition.

              %H    Print the timestamp of the reason a node is unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format "days-hours:min-
                    utes:seconds".

              %L    Default time for any job in the format "days-hours:min-
                    utes:seconds".

              %m    Size of memory per node in megabytes.

              %M    PreemptionMode.

              %n    List of node hostnames.

              %N    List of node names.

              %o    List of node communication addresses.

              %O    CPU load of a node as reported by the OS.

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default partition,
                    also see %R.

              %r    Only user root may initiate jobs, "yes" or "no".

              %R    Partition name, also see %P.

              %s    Maximum job size in nodes.

              %S    Allowed allocating nodes.

              %t    State of nodes, compact form.

              %T    State of nodes, extended form.

              %u    Print the user name of who set the reason a node is un-
                    available.

              %U    Print the user name and uid of who set the reason a node
                    is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation.

              %w    Scheduling weight of the nodes.

              %X    Number of sockets per node.

              %Y    Number of cores per socket.

              %Z    Number of threads per core.

              %z    Extended processor information: number of sockets, cores,
                    threads (S:C:T) per node.
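
              As an illustrative sketch (the field widths shown are arbitrary
              choices, not defaults), a custom format string can combine any
              of the type specifications above, for example partition, avail-
              ability, node count, extended state and node list:

                     $ sinfo -o "%20P %.5a %.6D %.10T %N"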

       -O, --Format=<output_format>
              Specify the information to be displayed.  Also see the -o
              <output_format>, --format=<output_format> option (which sup-
              ports greater flexibility in formatting, but does not support
              access to all fields because we ran out of letters).  Requests
              a comma separated list of node and partition information to be
              displayed.

              The format of each field is "type[:[.][size][suffix]]"

                 size   The minimum field size.  If no size is specified, 20
                        characters will be allocated to print the information.

                 .      Indicates the output should be right justified and
                        size must be specified.  By default, output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

       Valid type specifications include:

              All    Print all fields available in the -o format for this data
                     type with a vertical bar separating each field.

              AllocMem
                     Prints the amount of allocated memory on a node.

              AllocNodes
                     Allowed allocating nodes.

              Available
                     State/availability of a partition.

              Cluster
                     Print the cluster name if running in a federation.

              Comment
                     Comment. (Arbitrary descriptive string)

              Cores  Number of cores per socket.

              CPUs   Number of CPUs per node.

              CPUsLoad
                     CPU load of a node as reported by the OS.

              CPUsState
                     Number of CPUs by state in the format
                     "allocated/idle/other/total".  Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              DefaultTime
                     Default time for any job in the format "days-hours:min-
                     utes:seconds".

              Disk   Size of temporary disk space per node in megabytes.

              Extra  Arbitrary string on the node.

              Features
                     Features available on the nodes. Also see features_act.

              features_act
                     Features currently active on the nodes. Also see fea-
                     tures.

              FreeMem
                     The total memory, in MB, currently free on the node as
                     reported by the OS. This value is for informational use
                     only and is not used for scheduling.

              Gres   Generic resources (gres) associated with the nodes.

              GresUsed
                     Generic resources (gres) currently in use on the nodes.

              Groups Groups which may use the nodes.

              MaxCPUsPerNode
                     The max number of CPUs per node available to jobs in the
                     partition.

              Memory Size of memory per node in megabytes.

              NodeAddr
                     List of node communication addresses.

              NodeAI Number of nodes by state in the format "allocated/idle".
                     Do not use this with a node state option ("%t" or "%T")
                     or the different node states will be placed on separate
                     lines.

              NodeAIOT
                     Number of nodes by state in the format
                     "allocated/idle/other/total".  Do not use this with a
                     node state option ("%t" or "%T") or the different node
                     states will be placed on separate lines.

              NodeHost
                     List of node hostnames.

              NodeList
                     List of node names.

              Nodes  Number of nodes.

              OverSubscribe
                     Whether jobs may oversubscribe compute resources (e.g.
                     CPUs).

              Partition
                     Partition name followed by "*" for the default partition;
                     also see PartitionName.

              PartitionName
                     Partition name; also see Partition.

              Port   Node TCP port.

              PreemptMode
                     Preemption mode.

              PriorityJobFactor
                     Partition factor used by the priority/multifactor plugin
                     in calculating job priority.

              PriorityTier or Priority
                     Partition scheduling tier priority.

              Reason The reason a node is unavailable (down, drained, or
                     draining states).

              Root   Only user root may initiate jobs, "yes" or "no".

              Size   Maximum job size in nodes.

              SocketCoreThread
                     Extended processor information: number of sockets, cores,
                     threads (S:C:T) per node.

              Sockets
                     Number of sockets per node.

              StateCompact
                     State of nodes, compact form.

              StateLong
                     State of nodes, extended form.

              StateComplete
                     State of nodes, including all node state flags, e.g.
                     "idle+cloud+power".

              Threads
                     Number of threads per core.

              Time   Maximum time for any job in the format "days-hours:min-
                     utes:seconds".

              TimeStamp
                     Print the timestamp of the reason a node is unavailable.

              User   Print the user name of who set the reason a node is un-
                     available.

              UserLong
                     Print the user name and uid of who set the reason a node
                     is unavailable.

              Version
                     Print the version of the running slurmd daemon.

              Weight Scheduling weight of the nodes.
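
              As an illustrative sketch (the field names are documented above;
              the sizes are arbitrary), a similar report can be requested with
              named fields:

                     $ sinfo -O "Partition:10,Available:.6,Nodes:.8,StateLong:12,NodeList"

              Note that each field defaults to 20 characters when no size is
              given.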

       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions. Partitions
              that are configured as hidden or are not available to the user's
              group will not be displayed. This is the default behavior.

       -i, --iterate=<seconds>
              Print the state on a periodic basis.  Sleep for the indicated
              number of seconds between reports.  By default prints a time
              stamp with the header.
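
              For example, to reprint the partition summary every 60 seconds
              (a sketch; the interval is an arbitrary choice):

                     $ sinfo --summarize --iterate=60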

       --json Dump node information as JSON. All other formatting and filter-
              ing arguments will be ignored.
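
              For example, the JSON output can be piped to an external tool
              such as jq for further processing (jq is assumed to be avail-
              able; it is not part of Slurm):

                     $ sinfo --json | jq .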

       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or failing
              state.  When nodes are in these states Slurm supports the in-
              clusion of a "reason" string by an administrator.  This option
              will display the first 20 characters of the reason field and
              list of nodes with that reason for all nodes that are, by de-
              fault, down, drained, draining or failing.  This option may be
              used with other node filtering options (e.g. -r, -d, -t, -n),
              however, combinations of these options that result in a list of
              nodes that are not down or drained or failing will not produce
              any output.  When used with -l the output additionally includes
              the current node state.
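
              For example, to list the reason strings together with the cur-
              rent state of the affected nodes:

                     $ sinfo -R --long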

       --local
              Show only node and partition information local to this cluster.
              Ignore other clusters in this federation (if any). Overrides
              --federation.

       -l, --long
              Print more detailed information.  This is ignored if the --for-
              mat option is specified.

       --noconvert
              Don't convert units from their original type (e.g. 2048M won't
              be converted to 2G).

       -N, --Node
              Print information in a node-oriented format with one line per
              node and partition. That is, if a node belongs to more than one
              partition, then one line for each node-partition pair will be
              shown.  If --partition is also specified, then only one line per
              node in this partition is shown.  The default is to print infor-
              mation in a partition-oriented format.  This is ignored if the
              --format option is specified.

       -n, --nodes=<nodes>
              Print information about the specified node(s).  Multiple nodes
              may be comma separated or expressed using a node range expres-
              sion (e.g. "linux[00-17]").  Limiting the query to just the
              relevant nodes can measurably improve the performance of the
              command for large clusters.
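
              For example (a sketch; the node names are hypothetical), using
              a node range expression, quoted so the shell does not interpret
              the brackets:

                     $ sinfo -N --nodes="linux[00-17]"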

       -h, --noheader
              Do not print a header on the output.

       -p, --partition=<partition>
              Print information only about the specified partition(s). Multi-
              ple partitions are separated by commas.

       -T, --reservation
              Only display information about Slurm reservations.

              NOTE: This option causes sinfo to ignore most other options,
              which are focused on partition and node information.

       -r, --responding
              If set, only report state information for responding nodes.

       -S, --sort=<sort_list>
              Specification of the order in which records should be reported.
              This uses the same field specification as the <output_format>.
              Multiple sorts may be performed by listing multiple sort fields
              separated by commas.  The field specifications may be preceded
              by "+" or "-" for ascending (default) and descending order re-
              spectively.  The partition field specification, "P", may be pre-
              ceded by a "#" to report partitions in the same order that they
              appear in Slurm's configuration file, slurm.conf.  For example,
              a sort value of "+P,-m" requests that records be printed in or-
              der of increasing partition name and within a partition by de-
              creasing memory size.  The default value of sort is "#P,-t"
              (partitions ordered as configured then decreasing node state).
              If the --Node option is selected, the default sort value is "N"
              (increasing node name).
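
              For example, to report partitions in configuration-file order
              and, within each partition, nodes by decreasing memory size
              (the sort list is quoted for the shell):

                     $ sinfo --sort="#P,-m"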

       -t, --states=<states>
              List nodes only having the given state(s).  Multiple states may
              be comma separated and the comparison is case insensitive.  If
              the states are separated by '&', then the nodes must be in all
              states.  Possible values include (case insensitive): ALLOC, AL-
              LOCATED, CLOUD, COMP, COMPLETING, DOWN, DRAIN (for nodes in
              DRAINING or DRAINED states), DRAINED, DRAINING, FAIL, FUTURE,
              FUTR, IDLE, MAINT, MIX, MIXED, NO_RESPOND, NPC, PERFCTRS,
              PLANNED, POWER_DOWN, POWERING_DOWN, POWERED_DOWN, POWERING_UP,
              REBOOT_ISSUED, REBOOT_REQUESTED, RESV, RESERVED, UNK, and UN-
              KNOWN.  By default nodes in the specified state are reported
              whether they are responding or not.  The --dead and --responding
              options may be used to filter nodes by the corresponding flag.
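
              For example, to report nodes that are either idle or mixed, or
              nodes that are both drained and in a maintenance reservation (a
              sketch; note the quoting of '&' for the shell):

                     $ sinfo --states=idle,mixed
                     $ sinfo --states="drained&maint"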

       -s, --summarize
              List only a partition state summary with no node state details.
              This is ignored if the --format option is specified.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.

       --yaml Dump node information as YAML. All other formatting and filter-
              ing arguments will be ignored.

OUTPUT FIELD DESCRIPTIONS

       AVAIL  Partition state. Can be either up, down, drain, or inact (for
              INACTIVE). See the partition definition's State parameter in the
              slurm.conf(5) man page for more information.

       CPUS   Count of CPUs (processors) on these nodes.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to the
              named groups.  all indicates that all groups may use this parti-
              tion.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any user
              job.  A single number indicates the minimum and maximum node
              count are the same.  infinite is used to identify partitions
              without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in days-hours:minutes:sec-
              onds.  infinite is used to identify partitions without a job
              time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this particular configuration.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node state
              in the form "allocated/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node state
              in the form "allocated/idle/other/total".

       PARTITION
              Name of a partition.  Note that the suffix "*" identifies the
              default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Whether the ability to allocate resources in this partition is
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Whether jobs allocated resources in this partition can/will
              oversubscribe those compute resources (e.g. CPUs).  NO indicates
              resources are never oversubscribed.  EXCLUSIVE indicates whole
              nodes are dedicated to jobs (equivalent to the srun --exclusive
              option; may be used even with select/cons_res managing individ-
              ual processors).  FORCE indicates resources are always available
              to be oversubscribed.  YES indicates resources may be oversub-
              scribed, if requested by the job's resource allocation.

              NOTE: If OverSubscribe is set to FORCE or YES, the OverSubscribe
              value will be appended to the output.

       STATE  State of the nodes.  Possible states include: allocated, com-
              pleting, down, drained, draining, fail, failing, future, idle,
              maint, mixed, perfctrs, planned, power_down, power_up, reserved,
              and unknown.  Their abbreviated forms are: alloc, comp, down,
              drain, drng, fail, failg, futr, idle, maint, mix, npc, plnd,
              pow_dn, pow_up, resv, and unk respectively.

              NOTE: The suffix "*" identifies nodes that are presently not re-
              sponding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.

NODE STATE CODES

       Node state codes are shortened as required for the field size.  These
       node states may be followed by a special character to identify state
       flags associated with the node.  The following node suffixes and states
       are used:

       *   The node is presently not responding and will not be allocated any
           new work.  If the node remains non-responsive, it will be placed in
           the DOWN state (except in the case of COMPLETING, DRAINED, DRAIN-
           ING, FAIL, FAILING nodes).

       ~   The node is presently powered off.

       #   The node is presently being powered up or configured.

       !   The node is pending power down.

       %   The node is presently being powered down.

       $   The node is currently in a reservation with a flag value of "main-
           tenance".

       @   The node is pending reboot.

       ^   The node reboot was issued.

       -   The node is planned by the backfill scheduler for a higher priority
           job.

       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus one
                   or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process of
                   COMPLETING.  This node state will be removed when all of
                   the job's processes have terminated and the Slurm epilog
                   program (if any) has terminated. See the Epilog parameter
                   description in the slurm.conf(5) man page for more informa-
                   tion.

       DOWN        The node is unavailable for use.  Slurm can automatically
                   place nodes in this state if some failure occurs. System
                   administrators may also explicitly place nodes in this
                   state.  If a node resumes normal operation, Slurm can auto-
                   matically return it to service. See the ReturnToService and
                   SlurmdTimeout parameter descriptions in the slurm.conf(5)
                   man page for more information.

       DRAINED     The node is unavailable for use per system administrator
                   request.  See the update node command in the scontrol(1)
                   man page or the slurm.conf(5) man page for more informa-
                   tion.

       DRAINING    The node is currently executing a job, but will not be al-
                   located additional jobs. The node state will be changed to
                   state DRAINED when the last job on it completes. Nodes en-
                   ter this state per system administrator request. See the
                   update node command in the scontrol(1) man page or the
                   slurm.conf(5) man page for more information.

       FAIL        The node is expected to fail soon and is unavailable for
                   use per system administrator request.  See the update node
                   command in the scontrol(1) man page or the slurm.conf(5)
                   man page for more information.

       FAILING     The node is currently executing a job, but is expected to
                   fail soon and is unavailable for use per system administra-
                   tor request.  See the update node command in the scon-
                   trol(1) man page or the slurm.conf(5) man page for more in-
                   formation.

       FUTURE      The node is currently not fully configured, but is expected
                   to be available for use at some point in the future.

       IDLE        The node is not allocated to any jobs and is available for
                   use.

       INVAL       The node did not register correctly with the controller.
                   This happens when a node registers with fewer resources
                   than configured in the slurm.conf file.  The node will
                   clear from this state with a valid registration (i.e. a
                   slurmd restart is required).

       MAINT       The node is currently in a reservation with a flag value of
                   "maintenance".

       REBOOT_ISSUED
                   A reboot request has been sent to the agent configured to
                   handle this request.

       REBOOT_REQUESTED
                   A request to reboot this node has been made, but hasn't
                   been handled yet.

       MIXED       The node has some of its CPUs ALLOCATED while others are
                   IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node are
                   in use, rendering it unusable for any other jobs.

       PLANNED     The node is planned by the backfill scheduler for a higher
                   priority job.

       POWER_DOWN  The node is pending power down.

       POWERED_DOWN
                   The node is currently powered down and not capable of run-
                   ning any jobs.

       POWERING_DOWN
                   The node is in the process of powering down and not capable
                   of running any jobs.

       POWERING_UP The node is in the process of being powered up.

       RESERVED    The node is in an advanced reservation and not generally
                   available.

       UNKNOWN     The Slurm controller has just started and the node's state
                   has not yet been determined.

PERFORMANCE

       Executing sinfo sends a remote procedure call to slurmctld. If enough
       calls from sinfo or other Slurm client commands that send remote proce-
       dure calls to the slurmctld daemon come in at once, it can result in a
       degradation of performance of the slurmctld daemon, possibly resulting
       in a denial of service.

       Do not run sinfo or other Slurm client commands that send remote proce-
       dure calls to slurmctld from loops in shell scripts or other programs.
       Ensure that programs limit calls to sinfo to the minimum necessary for
       the information you are trying to gather.
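
       For example, if periodic output is needed, prefer a single sinfo in-
       vocation using the built-in --iterate option with a generous interval
       over repeatedly launching sinfo from a shell loop (a sketch; the in-
       terval is an arbitrary choice):

              $ sinfo --iterate=300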

ENVIRONMENT VARIABLES

       Some sinfo options may be set via environment variables. These environ-
       ment variables, along with their corresponding options, are listed be-
       low.  NOTE: Command line options will always override these settings.

       SINFO_ALL           Same as -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        Same as -o <output_format>, --format=<output_for-
                           mat>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     Same as -p <partition>, --partition=<partition>

       SINFO_SORT          Same as -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_DEBUG_FLAGS   Specify debug flags for sinfo to use. See De-
                           bugFlags in the slurm.conf(5) man page for a full
                           list of flags. The environment variable takes
                           precedence over the setting in the slurm.conf.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.  A
                           value of standard, the default value, generates
                           output in the form
                           "year-month-dateThour:minute:second".  A value of
                           relative returns only "hour:minute:second" for the
                           current day.  For other dates in the current year
                           it prints the "hour:minute" preceded by "Tomorr"
                           (tomorrow), "Ystday" (yesterday), the name of the
                           day for the coming week (e.g. "Mon", "Tue", etc.),
                           otherwise the date (e.g. "25 Apr").  For other
                           years it returns the date, month, and year without
                           a time (e.g. "6 Jun 2012").  All of the time stamps
                           use a 24 hour format.

                           A valid strftime() format can also be specified.
                           For example, a value of "%a %T" will report the day
                           of the week and a time stamp (e.g. "Mon 12:34:56").
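
       For example, a default output format and partition filter can be set
       in the environment (a sketch; the format string shown is the documented
       default and the partition name is hypothetical):

              $ export SINFO_FORMAT="%20P %.5a %.10l %.6D %.6t %N"
              $ export SINFO_PARTITION="debug"
              $ sinfo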

EXAMPLES

       Report basic node and partition configurations:

              $ sinfo
              PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
              batch     up     infinite     2 alloc  adev[8-9]
              batch     up     infinite     6 idle   adev[10-15]
              debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

              $ sinfo -s
              PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
              batch     up     infinite 2/6/0/8        adev[8-15]
              debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

              $ sinfo --long --partition=debug
              PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
              debug*    up        30:00        8 no   no       all        8 idle  dev[0-7]

       Report only those nodes that are in state DRAINED:

              $ sinfo --states=drained
              PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
              debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

              $ sinfo -Nel
              NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
              adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
              adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
              adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
              adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
              adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason field:

              $ sinfo -R
              REASON                              NODELIST
              Memory errors                       dev[0,5]
              Not Responding                      dev8

COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2022 SchedMD LLC.

       This file is part of Slurm, a resource management program.  For de-
       tails, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it under
       the terms of the GNU General Public License as published by the Free
       Software Foundation; either version 2 of the License, or (at your op-
       tion) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
       for more details.

SEE ALSO

       scontrol(1), squeue(1), slurm_load_ctl_conf (3), slurm_load_jobs (3),
       slurm_load_node (3), slurm_load_partitions (3), slurm_reconfigure (3),
       slurm_shutdown (3), slurm_update_job (3), slurm_update_node (3),
       slurm_update_partition (3), slurm.conf(5)

August 2022                     Slurm Commands                        sinfo(1)