sinfo(1)                         Slurm Commands                        sinfo(1)

NAME
6 sinfo - View information about Slurm nodes and partitions.
7
SYNOPSIS
10 sinfo [OPTIONS...]

DESCRIPTION
13 sinfo is used to view partition and node information for a system run‐
14 ning Slurm.
15
OPTIONS
18 -a, --all
19 Display information about all partitions. This causes informa‐
20 tion to be displayed about partitions that are configured as
21 hidden and partitions that are unavailable to the user's group.
22
23
24 -M, --clusters=<string>
25 Clusters to issue commands to. Multiple cluster names may be
26 comma separated. A value of 'all' will query all clusters. Note
27 that the SlurmDBD must be up for this option to work properly.
28 This option implicitly sets the --local option.
29
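       For example (illustrative only; cluster names are site specific), a
       partition summary from every cluster registered in the SlurmDBD could
       be requested with:

       $ sinfo --clusters=all --summarize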
30
31 -d, --dead
32 If set, only report state information for non-responding (dead)
33 nodes.
34
35
36 -e, --exact
37 If set, do not group node information on multiple nodes unless
38 their configurations to be reported are identical. Otherwise cpu
39 count, memory size, and disk space for nodes will be listed with
40 the minimum value followed by a "+" for nodes with the same par‐
41 tition and state (e.g. "250+").
42
43
44 --federation
45 Show all partitions from the federation if a member of one.
46
47
48 -o, --format=<output_format>
49 Specify the information to be displayed using an sinfo format
50 string. If the command is executed in a federated cluster envi‐
51 ronment and information about more than one cluster is to be
52 displayed and the -h, --noheader option is used, then the clus‐
53 ter name will be displayed before the default output formats
54 shown below. Format strings transparently used by sinfo when
55 running with various options are:
56
57 default "%#P %.5a %.10l %.6D %.6t %N"
58
59 --summarize "%#P %.5a %.10l %.16F %N"
60
61 --long "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D %.11T
62 %N"
63
64 --Node "%#N %.6D %#P %6t"
65
66 --long --Node "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w %.8f
67 %20E"
68
69 --list-reasons "%20E %9u %19H %N"
70
71 --long --list-reasons
72 "%20E %12U %19H %6t %N"
73
74
75 In the above format strings, the use of "#" represents the maxi‐
76 mum length of any partition name or node list to be printed. A
77 pass is made over the records to be printed to establish the
78 size in order to align the sinfo output, then a second pass is
79 made over the records to print them. Note that the literal
80 character "#" itself is not a valid field length specification,
81 but is only used to document this behaviour.
82
83
84 The format of each field is "%[[.]size]type[suffix]"
85
86 size Minimum field size. If no size is specified, whatever
87 is needed to print the information will be used.
88
89 . Indicates the output should be right justified and
90 size must be specified. By default output is left
91 justified.
92
93 suffix Arbitrary string to append to the end of the field.
94
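       As an illustration, "%.10P" requests the partition name (the %P type
       described below) right justified in a field of at least 10 characters,
       while "%10P" would print it left justified.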
95
96 Valid type specifications include:
97
98 %all Print all fields available for this data type with a ver‐
99 tical bar separating each field.
100
101 %a State/availability of a partition.
102
103 %A Number of nodes by state in the format "allocated/idle".
104 Do not use this with a node state option ("%t" or "%T") or
105 the different node states will be placed on separate
106 lines.
107
108 %b Features currently active on the nodes, also see %f.
109
110 %B The max number of CPUs per node available to jobs in the
111 partition.
112
113 %c Number of CPUs per node.
114
115 %C Number of CPUs by state in the format "allo‐
116 cated/idle/other/total". Do not use this with a node state
117 option ("%t" or "%T") or the different node states will be
118 placed on separate lines.
119
120 %d Size of temporary disk space per node in megabytes.
121
122 %D Number of nodes.
123
124 %e Free memory of a node.
125
126 %E The reason a node is unavailable (down, drained, or drain‐
127 ing states).
128
%f     Features available on the nodes, also see %b.
130
%F     Number of nodes by state in the format
       "allocated/idle/other/total". Note that using this format
       option with a node state format option ("%t" or "%T") will
       result in the different node states being reported on
       separate lines.
136
137 %g Groups which may use the nodes.
138
139 %G Generic resources (gres) associated with the nodes.
140
141 %h Print the OverSubscribe setting for the partition.
142
143 %H Print the timestamp of the reason a node is unavailable.
144
145 %I Partition job priority weighting factor.
146
147 %l Maximum time for any job in the format "days-hours:min‐
148 utes:seconds"
149
150 %L Default time for any job in the format "days-hours:min‐
151 utes:seconds"
152
153 %m Size of memory per node in megabytes.
154
155 %M PreemptionMode.
156
157 %n List of node hostnames.
158
159 %N List of node names.
160
161 %o List of node communication addresses.
162
163 %O CPU load of a node.
164
165 %p Partition scheduling tier priority.
166
167 %P Partition name followed by "*" for the default partition,
168 also see %R.
169
170 %r Only user root may initiate jobs, "yes" or "no".
171
172 %R Partition name, also see %P.
173
174 %s Maximum job size in nodes.
175
176 %S Allowed allocating nodes.
177
178 %t State of nodes, compact form.
179
180 %T State of nodes, extended form.
181
182 %u Print the user name of who set the reason a node is un‐
183 available.
184
185 %U Print the user name and uid of who set the reason a node
186 is unavailable.
187
188 %v Print the version of the running slurmd daemon.
189
190 %V Print the cluster name if running in a federation.
191
192 %w Scheduling weight of the nodes.
193
194 %X Number of sockets per node.
195
196 %Y Number of cores per socket.
197
198 %Z Number of threads per core.
199
200 %z Extended processor information: number of sockets, cores,
201 threads (S:C:T) per node.
202
203
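       As an example, the following format string (a hypothetical selection of
       the fields above) reports each partition with its node count, compact
       node state, and S:C:T layout. The output shown is purely illustrative:

       $ sinfo -o "%.10P %.6D %.6t %z"
        PARTITION  NODES  STATE S:C:T
           debug*      8   idle 2:1:1
            batch      2  alloc 2:1:1
            batch      6   idle 2:1:1
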
204 -O, --Format=<output_format>
205 Specify the information to be displayed. Also see the -o <out‐
206 put_format>, --format=<output_format> option (which supports
207 greater flexibility in formatting, but does not support access
to all fields because we ran out of letters). Requests a comma
separated list of node information to be displayed.
210
211
212 The format of each field is "type[:[.][size][suffix]]"
213
214 size The minimum field size. If no size is specified, 20
215 characters will be allocated to print the information.
216
217 . Indicates the output should be right justified and
218 size must be specified. By default, output is left
219 justified.
220
221 suffix Arbitrary string to append to the end of the field.
222
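       As an illustration, "Partition:.15" requests the partition name (one of
       the types listed below) right justified in a field of at least 15
       characters rather than the default 20.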
223
224 Valid type specifications include:
225
226 All Print all fields available in the -o format for this data
227 type with a vertical bar separating each field.
228
229 AllocMem
230 Prints the amount of allocated memory on a node.
231
232 AllocNodes
233 Allowed allocating nodes.
234
235 Available
236 State/availability of a partition.
237
238 Cluster
239 Print the cluster name if running in a federation.
240
241 Comment
242 Comment. (Arbitrary descriptive string)
243
244 Cores Number of cores per socket.
245
246 CPUs Number of CPUs per node.
247
248 CPUsLoad
249 CPU load of a node.
250
251 CPUsState
252 Number of CPUs by state in the format "allo‐
253 cated/idle/other/total". Do not use this with a node
254 state option ("%t" or "%T") or the different node states
255 will be placed on separate lines.
256
257 DefaultTime
258 Default time for any job in the format "days-hours:min‐
259 utes:seconds".
260
261 Disk Size of temporary disk space per node in megabytes.
262
Extra  Arbitrary string on the node.
264
265 Features
266 Features available on the nodes. Also see features_act.
267
268 features_act
269 Features currently active on the nodes. Also see fea‐
270 tures.
271
272 FreeMem
273 Free memory of a node.
274
275 Gres Generic resources (gres) associated with the nodes.
276
277 GresUsed
278 Generic resources (gres) currently in use on the nodes.
279
280 Groups Groups which may use the nodes.
281
282 MaxCPUsPerNode
283 The max number of CPUs per node available to jobs in the
284 partition.
285
286 Memory Size of memory per node in megabytes.
287
288 NodeAddr
289 List of node communication addresses.
290
291 NodeAI Number of nodes by state in the format "allocated/idle".
292 Do not use this with a node state option ("%t" or "%T")
293 or the different node states will be placed on separate
294 lines.
295
296 NodeAIOT
297 Number of nodes by state in the format "allo‐
298 cated/idle/other/total". Do not use this with a node
299 state option ("%t" or "%T") or the different node states
300 will be placed on separate lines.
301
302 NodeHost
303 List of node hostnames.
304
305 NodeList
306 List of node names.
307
308 Nodes Number of nodes.
309
310 OverSubscribe
311 Whether jobs may oversubscribe compute resources (e.g.
312 CPUs).
313
314 Partition
315 Partition name followed by "*" for the default partition,
316 also see %R.
317
318 PartitionName
319 Partition name, also see %P.
320
321 Port Node TCP port.
322
323 PreemptMode
324 Preemption mode.
325
326 PriorityJobFactor
327 Partition factor used by priority/multifactor plugin in
328 calculating job priority.
329
330 PriorityTier or Priority
331 Partition scheduling tier priority.
332
333 Reason The reason a node is unavailable (down, drained, or
334 draining states).
335
336 Root Only user root may initiate jobs, "yes" or "no".
337
338 Size Maximum job size in nodes.
339
340 SocketCoreThread
341 Extended processor information: number of sockets, cores,
342 threads (S:C:T) per node.
343
344 Sockets
345 Number of sockets per node.
346
347 StateCompact
348 State of nodes, compact form.
349
350 StateLong
351 State of nodes, extended form.
352
353 StateComplete
State of nodes, including all node state flags, e.g.
"idle+cloud+power".
356
357 Threads
358 Number of threads per core.
359
360 Time Maximum time for any job in the format "days-hours:min‐
361 utes:seconds".
362
363 TimeStamp
364 Print the timestamp of the reason a node is unavailable.
365
366 User Print the user name of who set the reason a node is un‐
367 available.
368
369 UserLong
370 Print the user name and uid of who set the reason a node
371 is unavailable.
372
373 Version
374 Print the version of the running slurmd daemon.
375
376 Weight Scheduling weight of the nodes.
377
378
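       For example (an illustrative selection of the fields above), the
       following reports each partition with its availability, node counts by
       state, and node list, widening the node list column to 40 characters:

       $ sinfo -O "PartitionName,Available,NodeAIOT,NodeList:40"
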
379 --help Print a message describing all sinfo options.
380
381
382 --hide Do not display information about hidden partitions. Partitions
383 that are configured as hidden or are not available to the user's
384 group will not be displayed. This is the default behavior.
385
386
387 -i, --iterate=<seconds>
388 Print the state on a periodic basis. Sleep for the indicated
389 number of seconds between reports. By default prints a time
390 stamp with the header.
391
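       For example, the following (illustrative) invocation reprints the
       partition summary every 30 seconds, each report preceded by a time
       stamp:

       $ sinfo --summarize --iterate=30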
392
--json Dump node information as JSON. All other formatting and
       filtering arguments will be ignored.
395
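       A minimal illustration, assuming the separate jq utility is available
       for pretty-printing the result:

       $ sinfo --json | jq .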
396
397 -R, --list-reasons
398 List reasons nodes are in the down, drained, fail or failing
399 state. When nodes are in these states Slurm supports the inclu‐
400 sion of a "reason" string by an administrator. This option will
401 display the first 20 characters of the reason field and list of
402 nodes with that reason for all nodes that are, by default, down,
403 drained, draining or failing. This option may be used with
404 other node filtering options (e.g. -r, -d, -t, -n), however,
405 combinations of these options that result in a list of nodes
406 that are not down or drained or failing will not produce any
407 output. When used with -l the output additionally includes the
408 current node state.
409
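       For example, the following adds the user who set each reason, the time
       stamp, and the current node state to the report (the columns follow the
       "--long --list-reasons" format string shown under --format above):

       $ sinfo --long --list-reasons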
410
411 --local
Show only nodes and partitions local to this cluster. Ignore
other clusters in this federation (if any). Overrides
--federation.
414
415
416 -l, --long
417 Print more detailed information. This is ignored if the --for‐
418 mat option is specified.
419
420
421 --noconvert
422 Don't convert units from their original type (e.g. 2048M won't
423 be converted to 2G).
424
425
426 -N, --Node
427 Print information in a node-oriented format with one line per
428 node and partition. That is, if a node belongs to more than one
429 partition, then one line for each node-partition pair will be
430 shown. If --partition is also specified, then only one line per
431 node in this partition is shown. The default is to print infor‐
432 mation in a partition-oriented format. This is ignored if the
433 --format option is specified.
434
435
436 -n, --nodes=<nodes>
437 Print information about the specified node(s). Multiple nodes
438 may be comma separated or expressed using a node range expres‐
sion (e.g. "linux[00-17]"). Limiting the query to just the rele‐
440 vant nodes can measurably improve the performance of the command
441 for large clusters.
442
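       For example (node names are illustrative, matching the EXAMPLES section
       below), information for a contiguous range of nodes can be requested
       with the command shown here; some shells may require the range
       expression to be quoted:

       $ sinfo --nodes=adev[8-15]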
443
444 -h, --noheader
445 Do not print a header on the output.
446
447
448 -p, --partition=<partition>
449 Print information only about the specified partition(s). Multi‐
450 ple partitions are separated by commas.
451
452
453 -T, --reservation
454 Only display information about Slurm reservations.
455
456 NOTE: This option causes sinfo to ignore most other options,
457 which are focused on partition and node information.
458
459
460 -r, --responding
If set, only report state information for responding nodes.
462
463
464 -S, --sort=<sort_list>
465 Specification of the order in which records should be reported.
466 This uses the same field specification as the <output_format>.
467 Multiple sorts may be performed by listing multiple sort fields
468 separated by commas. The field specifications may be preceded
469 by "+" or "-" for ascending (default) and descending order re‐
470 spectively. The partition field specification, "P", may be pre‐
471 ceded by a "#" to report partitions in the same order that they
472 appear in Slurm's configuration file, slurm.conf. For example,
473 a sort value of "+P,-m" requests that records be printed in or‐
474 der of increasing partition name and within a partition by de‐
475 creasing memory size. The default value of sort is "#P,-t"
476 (partitions ordered as configured then decreasing node state).
477 If the --Node option is selected, the default sort value is "N"
478 (increasing node name).
479
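       For example, the sort specification from the text above can be combined
       with a matching format string (illustrative) so that the memory column
       used for sorting is visible:

       $ sinfo --sort="+P,-m" -o "%.10P %.8m %.6D %N"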
480
481 -t, --states=<states>
482 List nodes only having the given state(s). Multiple states may
483 be comma separated and the comparison is case insensitive. If
484 the states are separated by '&', then the nodes must be in all
485 states. Possible values include (case insensitive): ALLOC, AL‐
486 LOCATED, CLOUD, COMP, COMPLETING, DOWN, DRAIN (for node in
487 DRAINING or DRAINED states), DRAINED, DRAINING, FAIL, FUTURE,
488 FUTR, IDLE, MAINT, MIX, MIXED, NO_RESPOND, NPC, PERFCTRS,
489 PLANNED, POWER_DOWN, POWERING_DOWN, POWERED_DOWN, POWERING_UP,
490 REBOOT_ISSUED, REBOOT_REQUESTED, RESV, RESERVED, UNK, and UN‐
491 KNOWN. By default nodes in the specified state are reported
492 whether they are responding or not. The --dead and --responding
493 options may be used to filter nodes by the corresponding flag.
494
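       For example (illustrative), the first command below lists nodes that
       are either idle or mixed, while the second lists nodes that are both
       down and marked to drain; the '&' should be quoted to protect it from
       the shell:

       $ sinfo --states=idle,mixed
       $ sinfo --states="down&drain"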
495
496 -s, --summarize
497 List only a partition state summary with no node state details.
498 This is ignored if the --format option is specified.
499
500
501 --usage
502 Print a brief message listing the sinfo options.
503
504
505 -v, --verbose
506 Provide detailed event logging through program execution.
507
508
509 -V, --version
510 Print version information and exit.
511
512
--yaml Dump node information as YAML. All other formatting and
       filtering arguments will be ignored.
515
OUTPUT FIELD DESCRIPTIONS
518 AVAIL Partition state. Can be either up, down, drain, or inact (for
519 INACTIVE). See the partition definition's State parameter in the
520 slurm.conf(5) man page for more information.
521
522 CPUS Count of CPUs (processors) on these nodes.
523
524 S:C:T Count of sockets (S), cores (C), and threads (T) on these nodes.
525
526 SOCKETS
527 Count of sockets on these nodes.
528
529 CORES Count of cores on these nodes.
530
531 THREADS
532 Count of threads on these nodes.
533
534 GROUPS Resource allocations in this partition are restricted to the
535 named groups. all indicates that all groups may use this parti‐
536 tion.
537
538 JOB_SIZE
539 Minimum and maximum node count that can be allocated to any user
540 job. A single number indicates the minimum and maximum node
541 count are the same. infinite is used to identify partitions
542 without a maximum node count.
543
544 TIMELIMIT
545 Maximum time limit for any user job in days-hours:minutes:sec‐
546 onds. infinite is used to identify partitions without a job
547 time limit.
548
549 MEMORY Size of real memory in megabytes on these nodes.
550
551 NODELIST
552 Names of nodes associated with this particular configuration.
553
554 NODES Count of nodes with this particular configuration.
555
556 NODES(A/I)
557 Count of nodes with this particular configuration by node state
558 in the form "allocated/idle".
559
560 NODES(A/I/O/T)
561 Count of nodes with this particular configuration by node state
562 in the form "allocated/idle/other/total".
563
564 PARTITION
565 Name of a partition. Note that the suffix "*" identifies the
566 default partition.
567
568 PORT Local TCP port used by slurmd on the node.
569
570 ROOT Is the ability to allocate resources in this partition re‐
571 stricted to user root, yes or no.
572
573 OVERSUBSCRIBE
574 Whether jobs allocated resources in this partition can/will
575 oversubscribe those compute resources (e.g. CPUs). NO indicates
576 resources are never oversubscribed. EXCLUSIVE indicates whole
577 nodes are dedicated to jobs (equivalent to srun --exclusive op‐
578 tion, may be used even with select/cons_res managing individual
579 processors). FORCE indicates resources are always available to
be oversubscribed. YES indicates resources may be oversub‐
581 scribed, if requested by the job's resource allocation.
582
NOTE: If OverSubscribe is set to FORCE or YES, the OverSubscribe
value will be appended to the output.
585
586 STATE State of the nodes. Possible states include: allocated, com‐
587 pleting, down, drained, draining, fail, failing, future, idle,
588 maint, mixed, perfctrs, planned, power_down, power_up, reserved,
589 and unknown. Their abbreviated forms are: alloc, comp, down,
drain, drng, fail, failg, futr, idle, maint, mix, npc, plnd,
pow_dn, pow_up, resv, and unk respectively.
592
593 NOTE: The suffix "*" identifies nodes that are presently not re‐
594 sponding.
595
596 TMP_DISK
597 Size of temporary disk space in megabytes on these nodes.
598
NODE STATE CODES
601 Node state codes are shortened as required for the field size. These
602 node states may be followed by a special character to identify state
603 flags associated with the node. The following node suffixes and states
604 are used:
605
606 * The node is presently not responding and will not be allocated any
607 new work. If the node remains non-responsive, it will be placed in
608 the DOWN state (except in the case of COMPLETING, DRAINED, DRAIN‐
609 ING, FAIL, FAILING nodes).
610
~   The node is presently powered off.
612
613 # The node is presently being powered up or configured.
614
615 ! The node is pending power down.
616
617 % The node is presently being powered down.
618
619 $ The node is currently in a reservation with a flag value of "main‐
620 tenance".
621
622 @ The node is pending reboot.
623
624 ^ The node reboot was issued.
625
626 - The node is planned by the backfill scheduler for a higher priority
627 job.
628
629 ALLOCATED The node has been allocated to one or more jobs.
630
631 ALLOCATED+ The node is allocated to one or more active jobs plus one
632 or more jobs are in the process of COMPLETING.
633
634 COMPLETING All jobs associated with this node are in the process of
635 COMPLETING. This node state will be removed when all of
636 the job's processes have terminated and the Slurm epilog
637 program (if any) has terminated. See the Epilog parameter
638 description in the slurm.conf(5) man page for more informa‐
639 tion.
640
641 DOWN The node is unavailable for use. Slurm can automatically
642 place nodes in this state if some failure occurs. System
643 administrators may also explicitly place nodes in this
644 state. If a node resumes normal operation, Slurm can auto‐
645 matically return it to service. See the ReturnToService and
646 SlurmdTimeout parameter descriptions in the slurm.conf(5)
647 man page for more information.
648
649 DRAINED The node is unavailable for use per system administrator
650 request. See the update node command in the scontrol(1)
651 man page or the slurm.conf(5) man page for more informa‐
652 tion.
653
654 DRAINING The node is currently executing a job, but will not be al‐
655 located additional jobs. The node state will be changed to
656 state DRAINED when the last job on it completes. Nodes en‐
657 ter this state per system administrator request. See the
658 update node command in the scontrol(1) man page or the
659 slurm.conf(5) man page for more information.
660
661 FAIL The node is expected to fail soon and is unavailable for
662 use per system administrator request. See the update node
663 command in the scontrol(1) man page or the slurm.conf(5)
664 man page for more information.
665
666 FAILING The node is currently executing a job, but is expected to
667 fail soon and is unavailable for use per system administra‐
668 tor request. See the update node command in the scon‐
669 trol(1) man page or the slurm.conf(5) man page for more in‐
670 formation.
671
672 FUTURE The node is currently not fully configured, but expected to
673 be available at some point in the indefinite future for
674 use.
675
676 IDLE The node is not allocated to any jobs and is available for
677 use.
678
679 INVAL The node registered with an invalid configuration. The node
will clear from this state with a valid registration (i.e. a
681 slurmd restart is required).
682
683 MAINT The node is currently in a reservation with a flag value of
684 "maintenance".
685
686 REBOOT_ISSUED
687 A reboot request has been sent to the agent configured to
688 handle this request.
689
690 REBOOT_REQUESTED
691 A request to reboot this node has been made, but hasn't
692 been handled yet.
693
694 MIXED The node has some of its CPUs ALLOCATED while others are
695 IDLE.
696
697 PERFCTRS (NPC)
698 Network Performance Counters associated with this node are
699 in use, rendering this node as not usable for any other
jobs.
701
702 PLANNED The node is planned by the backfill scheduler for a higher
703 priority job.
704
705 POWER_DOWN The node is pending power down.
706
707 POWERED_DOWN
708 The node is currently powered down and not capable of run‐
709 ning any jobs.
710
711 POWERING_DOWN
712 The node is in the process of powering down and not capable
713 of running any jobs.
714
715 POWERING_UP The node is in the process of being powered up.
716
717 RESERVED The node is in an advanced reservation and not generally
718 available.
719
720 UNKNOWN The Slurm controller has just started and the node's state
721 has not yet been determined.
722
PERFORMANCE
725 Executing sinfo sends a remote procedure call to slurmctld. If enough
726 calls from sinfo or other Slurm client commands that send remote proce‐
727 dure calls to the slurmctld daemon come in at once, it can result in a
728 degradation of performance of the slurmctld daemon, possibly resulting
729 in a denial of service.
730
731 Do not run sinfo or other Slurm client commands that send remote proce‐
732 dure calls to slurmctld from loops in shell scripts or other programs.
733 Ensure that programs limit calls to sinfo to the minimum necessary for
734 the information you are trying to gather.
735
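       As an illustration, prefer letting sinfo repeat the query itself rather
       than polling slurmctld from a shell loop:

       # avoid: one RPC every few seconds from a busy loop
       $ while true; do sinfo --partition=batch; sleep 5; done

       # prefer: a single invocation with a modest interval
       $ sinfo --partition=batch --iterate=60
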
ENVIRONMENT VARIABLES
738 Some sinfo options may be set via environment variables. These environ‐
739 ment variables, along with their corresponding options, are listed be‐
740 low. NOTE: Command line options will always override these settings.
741
742 SINFO_ALL Same as -a, --all
743
744 SINFO_FEDERATION Same as --federation
745
746 SINFO_FORMAT Same as -o <output_format>, --format=<output_for‐
747 mat>
748
749 SINFO_LOCAL Same as --local
750
751 SINFO_PARTITION Same as -p <partition>, --partition=<partition>
752
753 SINFO_SORT Same as -S <sort>, --sort=<sort>
754
755 SLURM_CLUSTERS Same as --clusters
756
757 SLURM_CONF The location of the Slurm configuration file.
758
759 SLURM_TIME_FORMAT Specify the format used to report time stamps. A
760 value of standard, the default value, generates
761 output in the form
762 "year-month-dateThour:minute:second". A value of
relative returns only "hour:minute:second" for times on the
current day. For other dates in the current year
765 it prints the "hour:minute" preceded by "Tomorr"
766 (tomorrow), "Ystday" (yesterday), the name of the
767 day for the coming week (e.g. "Mon", "Tue", etc.),
768 otherwise the date (e.g. "25 Apr"). For other
769 years it returns a date month and year without a
770 time (e.g. "6 Jun 2012"). All of the time stamps
771 use a 24 hour format.
772
773 A valid strftime() format can also be specified.
774 For example, a value of "%a %T" will report the day
775 of the week and a time stamp (e.g. "Mon 12:34:56").
776
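       For example (illustrative), the reason time stamps reported by
       --list-reasons can be rendered with a custom strftime() format:

       $ SLURM_TIME_FORMAT="%a %T" sinfo --list-reasons
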
EXAMPLES
779 Report basic node and partition configurations:
780
781 $ sinfo
782 PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
783 batch up infinite 2 alloc adev[8-9]
784 batch up infinite 6 idle adev[10-15]
785 debug* up 30:00 8 idle adev[0-7]
786
787
788 Report partition summary information:
789
790 $ sinfo -s
791 PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
792 batch up infinite 2/6/0/8 adev[8-15]
793 debug* up 30:00 0/8/0/8 adev[0-7]
794
795
796 Report more complete information about the partition debug:
797
798 $ sinfo --long --partition=debug
799 PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
800 debug* up 30:00 8 no no all 8 idle dev[0-7]
801
802
803 Report only those nodes that are in state DRAINED:
804
805 $ sinfo --states=drained
806 PARTITION AVAIL NODES TIMELIMIT STATE NODELIST
807 debug* up 2 30:00 drain adev[6-7]
808
809
810 Report node-oriented information with details and exact matches:
811
812 $ sinfo -Nel
813 NODELIST NODES PARTITION STATE CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
814 adev[0-1] 2 debug* idle 2 3448 38536 16 (null) (null)
815 adev[2,4-7] 5 debug* idle 2 3384 38536 16 (null) (null)
816 adev3 1 debug* idle 2 3394 38536 16 (null) (null)
817 adev[8-9] 2 batch allocated 2 246 82306 16 (null) (null)
818 adev[10-15] 6 batch idle 2 246 82306 16 (null) (null)
819
820
821 Report only down, drained and draining nodes and their reason field:
822
823 $ sinfo -R
824 REASON NODELIST
825 Memory errors dev[0,5]
826 Not Responding dev8
827
COPYING
830 Copyright (C) 2002-2007 The Regents of the University of California.
831 Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
832 Copyright (C) 2008-2009 Lawrence Livermore National Security.
833 Copyright (C) 2010-2021 SchedMD LLC.
834
835 This file is part of Slurm, a resource management program. For de‐
836 tails, see <https://slurm.schedmd.com/>.
837
838 Slurm is free software; you can redistribute it and/or modify it under
839 the terms of the GNU General Public License as published by the Free
840 Software Foundation; either version 2 of the License, or (at your op‐
841 tion) any later version.
842
843 Slurm is distributed in the hope that it will be useful, but WITHOUT
844 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
845 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
846 for more details.
847
SEE ALSO
scontrol(1), squeue(1), slurm_load_ctl_conf(3), slurm_load_jobs(3),
slurm_load_node(3), slurm_load_partitions(3), slurm_reconfigure(3),
slurm_shutdown(3), slurm_update_job(3), slurm_update_node(3),
slurm_update_partition(3), slurm.conf(5)
854
855
856
July 2021                        Slurm Commands                        sinfo(1)