sinfo(1)                        Slurm Commands                        sinfo(1)

NAME

       sinfo - view information about Slurm nodes and partitions.

SYNOPSIS

       sinfo [OPTIONS...]

DESCRIPTION

       sinfo is used to view partition and node information for a system
       running Slurm.

OPTIONS

       -a, --all
              Display information about all partitions. This causes informa‐
              tion to be displayed about partitions that are configured as
              hidden and partitions that are unavailable to the user's group.

       -d, --dead
              If set only report state information for non-responding (dead)
              nodes.

       -e, --exact
              If set, do not group node information on multiple nodes unless
              their configurations to be reported are identical. Otherwise cpu
              count, memory size, and disk space for nodes will be listed with
              the minimum value followed by a "+" for nodes with the same par‐
              tition and state (e.g., "250+").

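              For example, the following two reports differ only in whether
              nodes with differing memory sizes are collapsed into one line
              with a "+" suffix or listed on separate lines (the format string
              and values are illustrative):

                     > sinfo -o "%P %.6D %.6m"
                     > sinfo -e -o "%P %.6D %.6m"
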
       --federation
              Show all partitions from the federation if a member of one.

       -h, --noheader
              Do not print a header on the output.

       --help Print a message describing all sinfo options.

       --hide Do not display information about hidden partitions. By default,
              partitions that are configured as hidden or are not available to
              the user's group will not be displayed (i.e. this is the default
              behavior).

       -i <seconds>, --iterate=<seconds>
              Print the state on a periodic basis.  Sleep for the indicated
              number of seconds between reports.  By default, prints a time
              stamp with the header.

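              For example, the partition summary might be reprinted every 60
              seconds until the command is interrupted:

                     > sinfo -s -i 60
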
       --local
              Show only information local to this cluster. Ignore other clus‐
              ters in this federation (if any). Overrides --federation.

       -l, --long
              Print more detailed information.  This is ignored if the --for‐
              mat option is specified.

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may be
              comma separated.  A value of 'all' will query all clusters.
              Note that the SlurmDBD must be up for this option to work prop‐
              erly.  This option implicitly sets the --local option.

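              For example, a partition summary for all clusters, or for a
              named subset (the cluster names here are illustrative), might be
              requested with:

                     > sinfo -M all -s
                     > sinfo --clusters=cluster1,cluster2
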
       -n <nodes>, --nodes=<nodes>
              Print information only about the specified node(s).  Multiple
              nodes may be comma separated or expressed using a node range
              expression.  For example "linux[00-07]" would indicate eight
              nodes, "linux00" through "linux07."  Performance of the command
              can be measurably improved for systems with large numbers of
              nodes when a single node name is specified.

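              For example, either a node range expression or an explicit comma
              separated list may be given (node names are illustrative):

                     > sinfo -n linux[00-07]
                     > sinfo --nodes=linux00,linux07
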
       --noconvert
              Don't convert units from their original type (e.g. 2048M won't
              be converted to 2G).

       -N, --Node
              Print information in a node-oriented format with one line per
              node and partition. That is, if a node belongs to more than one
              partition, then one line for each node-partition pair will be
              shown.  If --partition is also specified, then only one line per
              node in this partition is shown.  The default is to print infor‐
              mation in a partition-oriented format.  This is ignored if the
              --format option is specified.

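              For example, a node-oriented report, with or without the --long
              detail, might be requested with:

                     > sinfo -N
                     > sinfo -N -l
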
       -o <output_format>, --format=<output_format>
              Specify the information to be displayed using an sinfo format
              string. Format strings transparently used by sinfo when running
              with various options are

              default        "%#P %.5a %.10l %.6D %.6t %N"

              --summarize    "%#P %.5a %.10l %.16F %N"

              --long         "%#P %.5a %.10l %.10s %.4r %.8h %.10g %.6D %.11T
                             %N"

              --Node         "%#N %.6D %#P %6t"

              --long --Node  "%#N %.6D %#P %.11T %.4c %.8z %.6m %.8d %.6w %.8f
                             %20E"

              --list-reasons "%20E %9u %19H %N"

              --long --list-reasons
                             "%20E %12U %19H %6t %N"

              In the above format strings, the use of "#" represents the maxi‐
              mum length of any partition name or node list to be printed.  A
              pass is made over the records to be printed to establish the
              size in order to align the sinfo output, then a second pass is
              made over the records to print them.  Note that the literal
              character "#" itself is not a valid field length specification,
              but is only used to document this behavior.

              The field specifications available include:

              %all  Print all fields available for this data type with a ver‐
                    tical bar separating each field.

              %a    State/availability of a partition

              %A    Number of nodes by state in the format "allocated/idle".
                    Do not use this with a node state option ("%t" or "%T") or
                    the different node states will be placed on separate
                    lines.

              %b    Features currently active on the nodes, also see %f

              %B    The max number of CPUs per node available to jobs in the
                    partition.

              %c    Number of CPUs per node

              %C    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a node
                    state option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              %d    Size of temporary disk space per node in megabytes

              %D    Number of nodes

              %e    Free memory of a node

              %E    The reason a node is unavailable (down, drained, or drain‐
                    ing states).

              %f    Features available on the nodes, also see %b

              %F    Number of nodes by state in the format
                    "allocated/idle/other/total".  Note the use of this format
                    option with a node state format option ("%t" or "%T") will
                    result in the different node states being reported on
                    separate lines.

              %g    Groups which may use the nodes

              %G    Generic resources (gres) associated with the nodes

              %h    Jobs may oversubscribe compute resources (i.e. CPUs),
                    "yes", "no", "exclusive" or "force"

              %H    Print the timestamp of the reason a node is unavailable.

              %I    Partition job priority weighting factor.

              %l    Maximum time for any job in the format "days-hours:min‐
                    utes:seconds"

              %L    Default time for any job in the format "days-hours:min‐
                    utes:seconds"

              %m    Size of memory per node in megabytes

              %M    PreemptionMode

              %n    List of node hostnames

              %N    List of node names

              %o    List of node communication addresses

              %O    CPU load of a node

              %p    Partition scheduling tier priority.

              %P    Partition name followed by "*" for the default partition,
                    also see %R

              %r    Only user root may initiate jobs, "yes" or "no"

              %R    Partition name, also see %P

              %s    Maximum job size in nodes

              %S    Allowed allocating nodes

              %t    State of nodes, compact form

              %T    State of nodes, extended form

              %u    Print the user name of who set the reason a node is
                    unavailable.

              %U    Print the user name and uid of who set the reason a node
                    is unavailable.

              %v    Print the version of the running slurmd daemon.

              %V    Print the cluster name if running in a federation

              %w    Scheduling weight of the nodes

              %X    Number of sockets per node

              %Y    Number of cores per socket

              %Z    Number of threads per core

              %z    Extended processor information: number of sockets, cores,
                    threads (S:C:T) per node

              %.<*> right justification of the field

              %<Number><*>
                    size of field

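              For example, the default report can be reproduced with explicit
              field widths, or replaced by a custom selection such as CPU load
              and free memory per node (the field widths are illustrative):

                     > sinfo -o "%9P %.5a %.10l %.6D %.6t %N"
                     > sinfo -o "%.10P %.10O %.10e %N"
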
       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed.  Also see the -o <out‐
              put_format>, --format=<output_format> option described above
              (which supports greater flexibility in formatting, but does not
              support access to all fields because we ran out of letters).
              Requests a comma separated list of information to be displayed.

              The format of each field is "type[:[.]size]"

              size    is the minimum field size.  If no size is specified, 20
                      characters will be allocated to print the information.

               .      indicates the output should be right justified and size
                      must be specified.  By default, output is left justi‐
                      fied.

              Valid type specifications include:

              all   Print all fields available in the -o format for this data
                    type with a vertical bar separating each field.

              allocmem
                    Prints the amount of allocated memory on a node.

              allocnodes
                    Allowed allocating nodes.

              available
                    State/availability of a partition.

              cluster
                    Print the cluster name if running in a federation.

              cpus  Number of CPUs per node.

              cpusload
                    CPU load of a node.

              freemem
                    Free memory of a node.

              cpusstate
                    Number of CPUs by state in the format
                    "allocated/idle/other/total". Do not use this with a node
                    state option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              cores Number of cores per socket.

              defaulttime
                    Default time for any job in the format "days-hours:min‐
                    utes:seconds".

              disk  Size of temporary disk space per node in megabytes.

              features
                    Features available on the nodes. Also see features_act.

              features_act
                    Features currently active on the nodes. Also see features.

              groups
                    Groups which may use the nodes.

              gres  Generic resources (gres) associated with the nodes.

              gresused
                    Generic resources (gres) currently in use on the nodes.

              maxcpuspernode
                    The max number of CPUs per node available to jobs in the
                    partition.

              memory
                    Size of memory per node in megabytes.

              nodes Number of nodes.

              nodeaddr
                    List of node communication addresses.

              nodeai
                    Number of nodes by state in the format "allocated/idle".
                    Do not use this with a node state option ("%t" or "%T") or
                    the different node states will be placed on separate
                    lines.

              nodeaiot
                    Number of nodes by state in the format
                    "allocated/idle/other/total".  Do not use this with a node
                    state option ("%t" or "%T") or the different node states
                    will be placed on separate lines.

              nodehost
                    List of node hostnames.

              nodelist
                    List of node names.

              oversubscribe
                    Jobs may oversubscribe compute resources (i.e. CPUs),
                    "yes", "no", "exclusive" or "force".

              partition
                    Partition name followed by "*" for the default partition,
                    also see %R.

              partitionname
                    Partition name, also see %P.

              port  Node TCP port.

              preemptmode
                    PreemptionMode.

              priorityjobfactor
                    Partition factor used by priority/multifactor plugin in
                    calculating job priority.

              prioritytier or priority
                    Partition scheduling tier priority.

              reason
                    The reason a node is unavailable (down, drained, or drain‐
                    ing states).

              root  Only user root may initiate jobs, "yes" or "no".

              size  Maximum job size in nodes.

              statecompact
                    State of nodes, compact form.

              statelong
                    State of nodes, extended form.

              sockets
                    Number of sockets per node.

              socketcorethread
                    Extended processor information: number of sockets, cores,
                    threads (S:C:T) per node.

              time  Maximum time for any job in the format "days-hours:min‐
                    utes:seconds".

              timestamp
                    Print the timestamp of the reason a node is unavailable.

              threads
                    Number of threads per core.

              user  Print the user name of who set the reason a node is
                    unavailable.

              userlong
                    Print the user name and uid of who set the reason a node
                    is unavailable.

              version
                    Print the version of the running slurmd daemon.

              weight
                    Scheduling weight of the nodes.

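              For example, a report similar to the default output, built from
              named fields and with a right-justified six-character node count
              column, might be requested with (the field sizes are illustra‐
              tive):

                     > sinfo -O "partition,available,time,nodes:.6,statecompact,nodelist"
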
       -p <partition>, --partition=<partition>
              Print information only about the specified partition(s). Multi‐
              ple partitions are separated by commas.

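              For example, using the partition names that appear in the EXAM‐
              PLES section below:

                     > sinfo -p debug,batch
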
       -r, --responding
              If set only report state information for responding nodes.

       -R, --list-reasons
              List reasons nodes are in the down, drained, fail or failing
              state.  When nodes are in these states Slurm supports optional
              inclusion of a "reason" string by an administrator.  This option
              will display the first 20 characters of the reason field and
              list of nodes with that reason for all nodes that are, by
              default, down, drained, draining or failing.  This option may be
              used with other node filtering options (e.g. -r, -d, -t, -n),
              however, combinations of these options that result in a list of
              nodes that are not down or drained or failing will not produce
              any output.  When used with -l the output additionally includes
              the current node state.

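              For example, combining -R with -l also reports who set each rea‐
              son, when it was set, and the current node state, as reflected
              by the "--long --list-reasons" format string above:

                     > sinfo -R -l
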
       -s, --summarize
              List only a partition state summary with no node state details.
              This is ignored if the --format option is specified.

       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be reported.
              This uses the same field specification as the <output_format>.
              Multiple sorts may be performed by listing multiple sort fields
              separated by commas.  The field specifications may be preceded
              by "+" or "-" for ascending (default) and descending order
              respectively.  The partition field specification, "P", may be
              preceded by a "#" to report partitions in the same order that
              they appear in Slurm's configuration file, slurm.conf.  For
              example, a sort value of "+P,-m" requests that records be
              printed in order of increasing partition name and within a par‐
              tition by decreasing memory size.  The default value of sort is
              "#P,-t" (partitions ordered as configured then decreasing node
              state).  If the --Node option is selected, the default sort
              value is "N" (increasing node name).

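              For example, the sort described above, and a node-oriented list‐
              ing sorted by decreasing node state, might be requested with:

                     > sinfo -S "+P,-m"
                     > sinfo -N --sort=-t
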
       -t <states>, --states=<states>
              List nodes only having the given state(s).  Multiple states may
              be comma separated and the comparison is case insensitive.
              Possible values include: ALLOC, ALLOCATED, COMP, COMPLETING,
              DOWN, DRAIN (for node in DRAINING or DRAINED states), DRAINED,
              DRAINING, FAIL, FUTURE, FUTR, IDLE, MAINT, MIX, MIXED,
              NO_RESPOND, NPC, PERFCTRS, POWER_DOWN, POWERING_DOWN, POWER_UP,
              RESV, RESERVED, UNK, and UNKNOWN.  By default nodes in the spec‐
              ified state are reported whether they are responding or not.
              The --dead and --responding options may be used to filter nodes
              by the responding flag.

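              For example, idle and mixed nodes could be listed with either of
              the following equivalent invocations:

                     > sinfo -t idle,mixed
                     > sinfo --states=IDLE,MIX
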
       -T, --reservation
              Only display information about Slurm reservations.

       --usage
              Print a brief message listing the sinfo options.

       -v, --verbose
              Provide detailed event logging through program execution.

       -V, --version
              Print version information and exit.


OUTPUT FIELD DESCRIPTIONS

       AVAIL  Partition state: up or down.

       CPUS   Count of CPUs (processors) on each node.

       S:C:T  Count of sockets (S), cores (C), and threads (T) on these nodes.

       SOCKETS
              Count of sockets on these nodes.

       CORES  Count of cores on these nodes.

       THREADS
              Count of threads on these nodes.

       GROUPS Resource allocations in this partition are restricted to the
              named groups.  all indicates that all groups may use this parti‐
              tion.

       JOB_SIZE
              Minimum and maximum node count that can be allocated to any user
              job.  A single number indicates the minimum and maximum node
              count are the same.  infinite is used to identify partitions
              without a maximum node count.

       TIMELIMIT
              Maximum time limit for any user job in days-hours:minutes:sec‐
              onds.  infinite is used to identify partitions without a job
              time limit.

       MEMORY Size of real memory in megabytes on these nodes.

       NODELIST
              Names of nodes associated with this configuration/partition.

       NODES  Count of nodes with this particular configuration.

       NODES(A/I)
              Count of nodes with this particular configuration by node state
              in the form "allocated/idle".

       NODES(A/I/O/T)
              Count of nodes with this particular configuration by node state
              in the form "allocated/idle/other/total".

       PARTITION
              Name of a partition.  Note that the suffix "*" identifies the
              default partition.

       PORT   Local TCP port used by slurmd on the node.

       ROOT   Is the ability to allocate resources in this partition
              restricted to user root, yes or no.

       OVERSUBSCRIBE
              Will jobs allocated resources in this partition oversubscribe
              those compute resources (i.e. CPUs).  no indicates resources are
              never oversubscribed.  exclusive indicates whole nodes are dedi‐
              cated to jobs (equivalent to srun --exclusive option, may be
              used even with select/cons_res managing individual processors).
              force indicates resources are always available to be oversub‐
              scribed.  yes indicates resources may be oversubscribed or not
              per job's resource allocation.

       STATE  State of the nodes.  Possible states include: allocated, com‐
              pleting, down, drained, draining, fail, failing, future, idle,
              maint, mixed, perfctrs, power_down, power_up, reserved, and
              unknown plus their abbreviated forms: alloc, comp, down, drain,
              drng, fail, failg, futr, idle, maint, mix, npc, pow_dn, pow_up,
              resv, and unk respectively.  Note that the suffix "*" identifies
              nodes that are presently not responding.

       TMP_DISK
              Size of temporary disk space in megabytes on these nodes.


NODE STATE CODES

       Node state codes are shortened as required for the field size.  These
       node states may be followed by a special character to identify state
       flags associated with the node.  The following node suffixes and
       states are used:

       *   The node is presently not responding and will not be allocated any
           new work.  If the node remains non-responsive, it will be placed in
           the DOWN state (except in the case of COMPLETING, DRAINED, DRAIN‐
           ING, FAIL, FAILING nodes).

       ~   The node is presently in a power saving mode (typically running at
           reduced frequency).

       #   The node is presently being powered up or configured.

       %   The node is presently being powered down.

       $   The node is currently in a reservation with a flag value of "main‐
           tenance".

       @   The node is pending reboot.

       ALLOCATED   The node has been allocated to one or more jobs.

       ALLOCATED+  The node is allocated to one or more active jobs plus one
                   or more jobs are in the process of COMPLETING.

       COMPLETING  All jobs associated with this node are in the process of
                   COMPLETING.  This node state will be removed when all of
                   the job's processes have terminated and the Slurm epilog
                   program (if any) has terminated. See the Epilog parameter
                   description in the slurm.conf man page for more informa‐
                   tion.

       DOWN        The node is unavailable for use. Slurm can automatically
                   place nodes in this state if some failure occurs. System
                   administrators may also explicitly place nodes in this
                   state. If a node resumes normal operation, Slurm can auto‐
                   matically return it to service. See the ReturnToService and
                   SlurmdTimeout parameter descriptions in the slurm.conf(5)
                   man page for more information.

       DRAINED     The node is unavailable for use per system administrator
                   request.  See the update node command in the scontrol(1)
                   man page or the slurm.conf(5) man page for more informa‐
                   tion.

       DRAINING    The node is currently executing a job, but will not be
                   allocated to additional jobs. The node state will be
                   changed to state DRAINED when the last job on it completes.
                   Nodes enter this state per system administrator request.
                   See the update node command in the scontrol(1) man page or
                   the slurm.conf(5) man page for more information.

       FAIL        The node is expected to fail soon and is unavailable for
                   use per system administrator request.  See the update node
                   command in the scontrol(1) man page or the slurm.conf(5)
                   man page for more information.

       FAILING     The node is currently executing a job, but is expected to
                   fail soon and is unavailable for use per system administra‐
                   tor request.  See the update node command in the scon‐
                   trol(1) man page or the slurm.conf(5) man page for more
                   information.

       FUTURE      The node is currently not fully configured, but expected to
                   be available at some point in the indefinite future for
                   use.

       IDLE        The node is not allocated to any jobs and is available for
                   use.

       MAINT       The node is currently in a reservation with a flag value of
                   "maintenance".

       REBOOT      The node is currently scheduled to be rebooted.

       MIXED       The node has some of its CPUs ALLOCATED while others are
                   IDLE.

       PERFCTRS (NPC)
                   Network Performance Counters associated with this node are
                   in use, rendering this node as not usable for any other
                   jobs.

       POWER_DOWN  The node is currently powered down and not capable of run‐
                   ning any jobs.

       POWERING_DOWN
                   The node is currently powering down and not capable of run‐
                   ning any jobs.

       POWER_UP    The node is currently in the process of being powered up.

       RESERVED    The node is in an advanced reservation and not generally
                   available.

       UNKNOWN     The Slurm controller has just started and the node's state
                   has not yet been determined.


ENVIRONMENT VARIABLES

       Some sinfo options may be set via environment variables. These environ‐
       ment variables, along with their corresponding options, are listed
       below. (Note: Command line options will always override these set‐
       tings.)

       SINFO_ALL           -a, --all

       SINFO_FEDERATION    Same as --federation

       SINFO_FORMAT        -o <output_format>, --format=<output_format>

       SINFO_LOCAL         Same as --local

       SINFO_PARTITION     -p <partition>, --partition=<partition>

       SINFO_SORT          -S <sort>, --sort=<sort>

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.  A
                           value of standard, the default value, generates
                           output in the form
                           "year-month-dateThour:minute:second".  A value of
                           relative returns only "hour:minute:second" if on
                           the current day.  For other dates in the current
                           year it prints the "hour:minute" preceded by
                           "Tomorr" (tomorrow), "Ystday" (yesterday), the name
                           of the day for the coming week (e.g. "Mon", "Tue",
                           etc.), otherwise the date (e.g. "25 Apr").  For
                           other years it returns a date, month and year with‐
                           out a time (e.g. "6 Jun 2012").  All of the time
                           stamps use a 24 hour format.

                           A valid strftime() format can also be specified.
                           For example, a value of "%a %T" will report the day
                           of the week and a time stamp (e.g. "Mon 12:34:56").

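       For example, in a Bourne-style shell the defaults might be adjusted as
       follows before running sinfo (the format string is illustrative):

              > export SINFO_FORMAT="%.10P %.5a %.10l %.6D %.6t %N"
              > export SLURM_TIME_FORMAT="%a %T"
              > sinfo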

EXAMPLES

       Report basic node and partition configurations:

       > sinfo
       PARTITION AVAIL TIMELIMIT NODES STATE  NODELIST
       batch     up     infinite     2 alloc  adev[8-9]
       batch     up     infinite     6 idle   adev[10-15]
       debug*    up        30:00     8 idle   adev[0-7]

       Report partition summary information:

       > sinfo -s
       PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
       batch     up     infinite 2/6/0/8        adev[8-15]
       debug*    up        30:00 0/8/0/8        adev[0-7]

       Report more complete information about the partition debug:

       > sinfo --long --partition=debug
       PARTITION AVAIL TIMELIMIT JOB_SIZE ROOT OVERSUBS GROUPS NODES STATE NODELIST
       debug*    up        30:00        8 no   no       all        8 idle  dev[0-7]

       Report only those nodes that are in state DRAINED:

       > sinfo --states=drained
       PARTITION AVAIL NODES TIMELIMIT STATE  NODELIST
       debug*    up        2     30:00 drain  adev[6-7]

       Report node-oriented information with details and exact matches:

       > sinfo -Nel
       NODELIST    NODES PARTITION STATE  CPUS MEMORY TMP_DISK WEIGHT FEATURES REASON
       adev[0-1]       2 debug*    idle      2   3448    38536     16 (null)   (null)
       adev[2,4-7]     5 debug*    idle      2   3384    38536     16 (null)   (null)
       adev3           1 debug*    idle      2   3394    38536     16 (null)   (null)
       adev[8-9]       2 batch     allocated 2    246    82306     16 (null)   (null)
       adev[10-15]     6 batch     idle      2    246    82306     16 (null)   (null)

       Report only down, drained and draining nodes and their reason field:

       > sinfo -R
       REASON                              NODELIST
       Memory errors                       dev[0,5]
       Not Responding                      dev8


COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
       Copyright (C) 2008-2009 Lawrence Livermore National Security.
       Copyright (C) 2010-2017 SchedMD LLC.

       This file is part of Slurm, a resource management program.  For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it under
       the terms of the GNU General Public License as published by the Free
       Software Foundation; either version 2 of the License, or (at your
       option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
       for more details.


SEE ALSO

       scontrol(1), smap(1), squeue(1), slurm_load_ctl_conf(3),
       slurm_load_jobs(3), slurm_load_node(3), slurm_load_partitions(3),
       slurm_reconfigure(3), slurm_shutdown(3), slurm_update_job(3),
       slurm_update_node(3), slurm_update_partition(3), slurm.conf(5)

November 2016                   Slurm Commands                        sinfo(1)