sacctmgr(1)                     Slurm Commands                     sacctmgr(1)

NAME
   sacctmgr - Used to view and modify Slurm account information.

SYNOPSIS
   sacctmgr [OPTIONS...] [COMMAND...]

DESCRIPTION
   sacctmgr is used to view or modify Slurm account information.  The
   account information is maintained within a database with the
   interface being provided by slurmdbd (Slurm Database daemon).  This
   database can serve as a central storehouse of user and computer
   information for multiple computers at a single site.  Slurm account
   information is recorded based upon four parameters that form what is
   referred to as an association.  These parameters are user, cluster,
   partition, and account.  user is the login name.  cluster is the
   name of a Slurm managed cluster as specified by the ClusterName
   parameter in the slurm.conf configuration file.  partition is the
   name of a Slurm partition on that cluster.  account is the bank
   account for a job.  The intended mode of operation is to initiate
   the sacctmgr command, add, delete, modify, and/or list association
   records, then commit the changes and exit.
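   That workflow can be sketched as a short session.  This is an
   illustrative sketch only; the account and user names below are
   hypothetical, and the commands require a running slurmdbd.

```shell
# Interactive mode: start sacctmgr, make changes, then commit on exit.
# (Example entity names: 'science', 'alice'.)
sacctmgr
#   sacctmgr: add account science Description="science accounts" Organization=research
#   sacctmgr: add user alice Account=science
#   sacctmgr: list associations
#   sacctmgr: quit          <- sacctmgr asks whether to commit the changes

# Equivalent one-shot commands, confirming each change when prompted:
sacctmgr add account science Description="science accounts" Organization=research
sacctmgr add user alice Account=science
```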

   Note: The contents of Slurm's database are maintained in lower case.
   This may result in some sacctmgr output differing from that of other
   Slurm commands.
OPTIONS
   -h, --help
          Print a help message describing the usage of sacctmgr.  This
          is equivalent to the help command.

   -i, --immediate
          Commit changes immediately without asking for confirmation.

   -n, --noheader
          No header will be added to the beginning of the output.

   -p, --parsable
          Output will be '|' delimited with a '|' at the end.

   -P, --parsable2
          Output will be '|' delimited without a '|' at the end.
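   The parsable formats are convenient for scripting.  A minimal
   sketch, assuming hypothetical users and accounts; the sample string
   below imitates `sacctmgr -P -n list user` output rather than coming
   from a real database:

```shell
# Invented sample of '|'-delimited, headerless (-P -n) sacctmgr output.
sample='alice|science
bob|physics'

# With -P each record has no trailing '|', so awk can split fields directly.
printf '%s\n' "$sample" | awk -F'|' '{ print $1 " -> " $2 }'
```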

   -Q, --quiet
          Print no messages other than error messages.  This is
          equivalent to the quiet command.

   -r, --readonly
          Makes it so the running sacctmgr cannot modify accounting
          information.  The readonly option is for use within
          interactive mode.

   -s, --associations
          Use with show or list to display associations with the
          entity.  This is equivalent to the associations command.

   -v, --verbose
          Enable detailed logging.  This is equivalent to the verbose
          command.

   -V, --version
          Display version number.  This is equivalent to the version
          command.
COMMANDS
   add <ENTITY> <SPECS>
          Add an entity.  Identical to the create command.

   archive {dump|load} <SPECS>
          Write database information to a flat file or load information
          that has previously been written to a file.

   clear stats
          Clear the server statistics.

   create <ENTITY> <SPECS>
          Add an entity.  Identical to the add command.

   delete <ENTITY> where <SPECS>
          Delete the specified entities.  Identical to the remove
          command.

   dump <ENTITY> [File=FILENAME]
          Dump cluster data to the specified file.  If no filename is
          specified, clustername.cfg is used by default.

   help   Display a description of sacctmgr options and commands.

   list <ENTITY> [<SPECS>]
          Display information about the specified entity.  By default
          all entries are displayed; you can narrow results by
          specifying SPECS in your query.  Identical to the show
          command.

   load <FILENAME>
          Load cluster data from the specified file.  This is a
          configuration file generated by running the sacctmgr dump
          command.  This command does not load archive data; see the
          sacctmgr archive load option instead.
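   A dump/load round trip might look like the following sketch; the
   cluster name and file path are placeholders:

```shell
# Dump the configuration of cluster 'mycluster' to a file, then
# reload it (for example on a rebuilt database).
sacctmgr dump mycluster File=/tmp/mycluster.cfg
sacctmgr load /tmp/mycluster.cfg
```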

   modify <ENTITY> where <SPECS> set <SPECS>
          Modify an entity.

   reconfigure
          Reconfigures the SlurmDBD if running with one.

   remove <ENTITY> where <SPECS>
          Delete the specified entities.  Identical to the delete
          command.

   show <ENTITY> [<SPECS>]
          Display information about the specified entity.  By default
          all entries are displayed; you can narrow results by
          specifying SPECS in your query.  Identical to the list
          command.

   shutdown
          Shut down the server.

   version
          Display the version number of sacctmgr.
INTERACTIVE COMMANDS
   NOTE: All commands listed below can be used in the interactive mode,
   but NOT on the initial command line.

   exit   Terminate sacctmgr interactive mode.  Identical to the quit
          command.

   quiet  Print no messages other than error messages.

   quit   Terminate the execution of sacctmgr interactive mode.
          Identical to the exit command.

   verbose
          Enable detailed logging.  This includes time-stamps on data
          structures, record counts, etc.  This is an independent
          command with no options meant for use in interactive mode.

   !!     Repeat the last command.
ENTITIES
   account
          A bank account, typically specified at job submit time using
          the --account= option.  These may be arranged in a
          hierarchical fashion, for example accounts 'chemistry' and
          'physics' may be children of the account 'science'.  The
          hierarchy may have an arbitrary depth.
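   The hierarchy from that example could be built as follows; the
   account names and descriptions are purely illustrative:

```shell
# Create a top-level account, then two child accounts beneath it.
sacctmgr add account science Description="top-level science account"
sacctmgr add account chemistry,physics Parent=science
```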

   association
          The entity used to group information consisting of four
          parameters: account, cluster, partition (optional), and user.
          Used only with the list or show command.  Add, modify, and
          delete should be done to a user, account or cluster entity.
          This will in turn update the underlying associations.

   cluster
          The ClusterName parameter in the slurm.conf configuration
          file, used to differentiate accounts on different machines.

   configuration
          Used only with the list or show command to report current
          system configuration.

   coordinator
          A special privileged user, usually an account manager, that
          can add users or sub-accounts to the account they are
          coordinator over.  This should be a trusted person since they
          can change limits on account and user associations, as well
          as cancel, requeue or reassign accounts of jobs inside their
          realm.

   event  Events like downed or draining nodes on clusters.

   federation
          A group of clusters that work together to schedule jobs.

   job    Used to modify specific fields of a job: Derived Exit Code,
          the Comment String, or wckey.

   problem
          Use with show or list to display entity problems.

   qos    Quality of Service.

   reservation
          A collection of resources set apart for use by a particular
          account, user or group of users for a given period of time.

   resource
          Software resources for the system.  These are software
          licenses shared among clusters.

   RunawayJobs
          Used only with the list or show command to report current
          jobs that have been orphaned on the local cluster and are now
          runaway.  If there are jobs in this state it will also give
          you an option to "fix" them.  NOTE: You must have an
          AdminLevel of at least Operator to perform this.

   stats  Used with the list or show command to view server statistics.
          Accepts an optional argument of ave_time or total_time to
          sort on those fields.  By default, sorts on the increasing
          RPC count field.

   transaction
          List of transactions that have occurred during a given time
          period.

   tres   Used with the list or show command to view a list of
          Trackable RESources configured on the system.

   user   The login name.  Usernames are case-insensitive (forced to
          lowercase) unless the PreserveCaseUser option has been set in
          the SlurmDBD configuration file.

   wckeys Workload Characterization Key.  An arbitrary string for
          grouping orthogonal accounts.
GENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES
   NOTE: The group limits (GrpJobs, GrpTRES, etc.) are tested when a
   job is being considered for being allocated resources.  If starting
   a job would cause any of its group limits to be exceeded, that job
   will not be considered for scheduling even if that job might preempt
   other jobs which would release sufficient group resources for the
   pending job to be initiated.

   DefaultQOS=<default qos>
          The default QOS this association and its children should
          have.  This is overridden if set directly on a user.  To
          clear a previously set value use the modify command with a
          new value of -1.

   Fairshare=<fairshare number | parent>
          Number used in conjunction with other accounts to determine
          job priority.  Can also be the string parent; when used on a
          user this means that the parent association is used for
          fairshare.  If Fairshare=parent is set on an account, that
          account's children will be effectively reparented for
          fairshare calculations to the first parent of their parent
          that is not Fairshare=parent.  Limits remain the same; only
          its fairshare value is affected.  To clear a previously set
          value use the modify command with a new value of -1.

   GrpTRESMins=<TRES=max TRES minutes,...>
          The total number of TRES minutes that can possibly be used by
          past, present and future jobs running from this association
          and its children.  To clear a previously set value use the
          modify command with a new value of -1 for each TRES id.

          NOTE: This limit is not enforced if set on the root
          association of a cluster.  So even though it may appear in
          sacctmgr output, it will not be enforced.

          ALSO NOTE: This limit only applies when using the Priority
          Multifactor plugin.  The time is decayed using the value of
          PriorityDecayHalfLife or PriorityUsageResetPeriod as set in
          the slurm.conf.  When this limit is reached all associated
          jobs running will be killed and all future jobs submitted
          with associations in the group will be delayed until they are
          able to run inside the limit.

   GrpTRESRunMins=<TRES=max TRES run minutes,...>
          Used to limit the combined total number of TRES minutes used
          by all jobs running with this association and its children.
          This takes into consideration the time limit of running jobs
          and consumes it; if the limit is reached, no new jobs are
          started until other jobs finish to allow time to free up.

   GrpTRES=<TRES=max TRES,...>
          Maximum number of TRES running jobs are able to be allocated
          in aggregate for this association and all associations which
          are children of this association.  To clear a previously set
          value use the modify command with a new value of -1 for each
          TRES id.

          NOTE: This limit only applies fully when using the Select
          Consumable Resource plugin.
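   For example, setting and later clearing a group limit might look
   like this sketch; the account name and limit values are invented:

```shell
# Cap the aggregate CPUs and memory of account 'physics' and its children.
sacctmgr modify account where name=physics set GrpTRES=cpu=100,mem=500000

# Clear just the CPU limit again by setting that TRES to -1.
sacctmgr modify account where name=physics set GrpTRES=cpu=-1
```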

   GrpJobs=<max jobs>
          Maximum number of running jobs in aggregate for this
          association and all associations which are children of this
          association.  To clear a previously set value use the modify
          command with a new value of -1.

   GrpJobsAccrue=<max jobs>
          Maximum number of pending jobs in aggregate able to accrue
          age priority for this association and all associations which
          are children of this association.  To clear a previously set
          value use the modify command with a new value of -1.

   GrpSubmitJobs=<max jobs>
          Maximum number of jobs which can be in a pending or running
          state at any time in aggregate for this association and all
          associations which are children of this association.  To
          clear a previously set value use the modify command with a
          new value of -1.

          NOTE: This setting shows up in the sacctmgr output as
          GrpSubmit.

   GrpWall=<max wall>
          Maximum wall clock time running jobs are able to be allocated
          in aggregate for this association and all associations which
          are children of this association.  To clear a previously set
          value use the modify command with a new value of -1.

          NOTE: This limit is not enforced if set on the root
          association of a cluster.  So even though it may appear in
          sacctmgr output, it will not be enforced.

          ALSO NOTE: This limit only applies when using the Priority
          Multifactor plugin.  The time is decayed using the value of
          PriorityDecayHalfLife or PriorityUsageResetPeriod as set in
          the slurm.conf.  When this limit is reached all associated
          jobs running will be killed and all future jobs submitted
          with associations in the group will be delayed until they are
          able to run inside the limit.

   MaxTRESMinsPerJob=<max TRES minutes>
          Maximum number of TRES minutes each job is able to use in
          this association.  This is overridden if set directly on a
          user.  Default is the cluster's limit.  To clear a previously
          set value use the modify command with a new value of -1 for
          each TRES id.

          NOTE: This setting shows up in the sacctmgr output as
          MaxTRESMins.

   MaxTRESPerJob=<max TRES>
          Maximum number of TRES each job is able to use in this
          association.  This is overridden if set directly on a user.
          Default is the cluster's limit.  To clear a previously set
          value use the modify command with a new value of -1 for each
          TRES id.

          NOTE: This setting shows up in the sacctmgr output as MaxTRES.

          NOTE: This limit only applies fully when using the cons_res
          or cons_tres select type plugins.

   MaxJobs=<max jobs>
          Maximum number of jobs each user is allowed to run at one
          time in this association.  This is overridden if set directly
          on a user.  Default is the cluster's limit.  To clear a
          previously set value use the modify command with a new value
          of -1.

   MaxJobsAccrue=<max jobs>
          Maximum number of pending jobs able to accrue age priority at
          any given time for the given association.  This is overridden
          if set directly on a user.  Default is the cluster's limit.
          To clear a previously set value use the modify command with a
          new value of -1.

   MaxSubmitJobs=<max jobs>
          Maximum number of jobs this association can have in a pending
          or running state at any time.  Default is the cluster's
          limit.  To clear a previously set value use the modify
          command with a new value of -1.

          NOTE: This setting shows up in the sacctmgr output as
          MaxSubmit.

   MaxWallDurationPerJob=<max wall>
          Maximum wall clock time each job is able to use in this
          association.  This is overridden if set directly on a user.
          Default is the cluster's limit.  <max wall> format is <min>
          or <min>:<sec> or <hr>:<min>:<sec> or <days>-<hr>:<min>:<sec>
          or <days>-<hr>.  The value is recorded in minutes with
          rounding as needed.  To clear a previously set value use the
          modify command with a new value of -1.

          NOTE: Changing this value will have no effect on any running
          or pending job.

          NOTE: This setting shows up in the sacctmgr output as MaxWall.

   Priority
          What priority will be added to a job's priority when using
          this association.  This is overridden if set directly on a
          user.  Default is the cluster's limit.  To clear a previously
          set value use the modify command with a new value of -1.

   QosLevel<operator><comma separated list of qos names>
          Specify the default Quality of Service values that jobs are
          able to run at for this association.  To get a list of valid
          QOS values use 'sacctmgr list qos'.  This value will override
          its parent's value and push down to its children as the new
          default.  Setting a QosLevel to '' (two single quotes with
          nothing between them) restores its default setting.  You can
          also use the operators += and -= to add or remove certain QOS
          values from a QOS list.

          Valid <operator> values include:

          =   Set QosLevel to the specified value.  Note: the QOS that
              can be used at a given account in the hierarchy are
              inherited by the children of that account.  By assigning
              QOS with the = sign only the assigned QOS can be used by
              the account and its children.

          +=  Add the specified <qos> value to the current QosLevel.
              The account will have access to this QOS and the others
              previously assigned to it.

          -=  Remove the specified <qos> value from the current
              QosLevel.
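   The operators can be sketched as follows; the account name and QOS
   names (normal, high) are examples only:

```shell
sacctmgr modify account where name=chemistry set QosLevel=normal   # replace the list
sacctmgr modify account where name=chemistry set QosLevel+=high    # add a QOS
sacctmgr modify account where name=chemistry set QosLevel-=normal  # remove a QOS
sacctmgr modify account where name=chemistry set QosLevel=''       # restore default
```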
   See the EXAMPLES section below.
SPECIFICATIONS FOR ACCOUNTS
   Cluster=<cluster>
          Specific cluster to add the account to.  Default is all
          clusters in the system.

   Description=<description>
          An arbitrary string describing an account.

   Name=<name>
          The name of a bank account.  Note the name must be unique and
          cannot represent different bank accounts at different points
          in the account hierarchy.

   Organization=<org>
          Organization to which the account belongs.

   Parent=<parent>
          Parent account of this account.  Default is the root account,
          a top level account.

   RawUsage=<value>
          This allows an administrator to reset the raw usage accrued
          to an account.  The only value currently supported is 0
          (zero).  This is a settable specification only - it cannot be
          used as a filter to list accounts.

   WithAssoc
          Display all associations for this account.

   WithCoord
          Display all coordinators for this account.

   WithDeleted
          Display information with previously deleted data.

   NOTE: If using the WithAssoc option you can also query against
   association specific information to view only certain associations
   this account may have.  These extra options can be found in the
   SPECIFICATIONS FOR ASSOCIATIONS section.  You can also use the
   general specifications list above in the GENERAL SPECIFICATIONS FOR
   ASSOCIATION BASED ENTITIES section.
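   A couple of listing sketches using these options; the account and
   cluster names are hypothetical:

```shell
# Show one account together with its associations, filtered by cluster.
sacctmgr show account physics WithAssoc cluster=mycluster

# Show all accounts along with their coordinators.
sacctmgr show account WithCoord
```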

LIST/SHOW ACCOUNT FORMAT OPTIONS
   Account
          The name of a bank account.

   Description
          An arbitrary string describing an account.

   Organization
          Organization to which the account belongs.

   Coordinators
          List of users that are a coordinator of the account.  (Only
          filled in when using the WithCoord option.)

   NOTE: If using the WithAssoc option you can also view the
   information about the various associations the account may have on
   all the clusters in the system.  The association information can be
   filtered.  Note that all the accounts in the database will always be
   shown, as the filter only takes effect over the association data.
   The Association format fields are described in the LIST/SHOW
   ASSOCIATION FORMAT OPTIONS section.
SPECIFICATIONS FOR ASSOCIATIONS
   Clusters=<comma separated list of cluster names>
          List the associations of the cluster(s).

   Accounts=<comma separated list of account names>
          List the associations of the account(s).

   Users=<comma separated list of user names>
          List the associations of the user(s).

   Partition=<comma separated list of partition names>
          List the associations of the partition(s).

   NOTE: You can also use the general specifications list above in the
   GENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES section.
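   These filters can be combined in one query, as in the following
   sketch (cluster, account and user names are placeholders):

```shell
# List only the associations matching all three filters.
sacctmgr list associations cluster=mycluster account=science user=alice
```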

   Other options unique for listing associations:

   OnlyDefaults
          Display only associations that are default associations.

   Tree   Display account names in a hierarchical fashion.

   WithDeleted
          Display information with previously deleted data.

   WithSubAccounts
          Display information with subaccounts.  Only really valuable
          when used with the account= option.  This will display all
          the subaccount associations along with the accounts listed in
          the option.

   WOLimits
          Display information without limit information.  This is for a
          smaller default format of "Cluster,Account,User,Partition".

   WOPInfo
          Display information without parent information (i.e. parent
          id and parent account name).  This option also implicitly
          sets the WOPLimits option.

   WOPLimits
          Display information without hierarchical parent limits (i.e.
          will only display limits where they are set instead of
          propagating them from the parent).

LIST/SHOW ASSOCIATION FORMAT OPTIONS
   Account
          The name of a bank account in the association.

   Cluster
          The name of a cluster in the association.

   DefaultQOS
          The QOS the association will use by default if it has access
          to it in the QOS list mentioned below.

   Fairshare
          Number used in conjunction with other accounts to determine
          job priority.  Can also be the string parent; when used on a
          user this means that the parent association is used for
          fairshare.  If Fairshare=parent is set on an account, that
          account's children will be effectively reparented for
          fairshare calculations to the first parent of their parent
          that is not Fairshare=parent.  Limits remain the same; only
          its fairshare value is affected.

   GrpTRESMins
          The total number of TRES minutes that can possibly be used by
          past, present and future jobs running from this association
          and its children.

   GrpTRESRunMins
          Used to limit the combined total number of TRES minutes used
          by all jobs running with this association and its children.
          This takes into consideration the time limit of running jobs
          and consumes it; if the limit is reached, no new jobs are
          started until other jobs finish to allow time to free up.

   GrpTRES
          Maximum number of TRES running jobs are able to be allocated
          in aggregate for this association and all associations which
          are children of this association.

   GrpJobs
          Maximum number of running jobs in aggregate for this
          association and all associations which are children of this
          association.

   GrpJobsAccrue
          Maximum number of pending jobs in aggregate able to accrue
          age priority for this association and all associations which
          are children of this association.

   GrpSubmitJobs
          Maximum number of jobs which can be in a pending or running
          state at any time in aggregate for this association and all
          associations which are children of this association.

          NOTE: This setting shows up in the sacctmgr output as
          GrpSubmit.

   GrpWall
          Maximum wall clock time running jobs are able to be allocated
          in aggregate for this association and all associations which
          are children of this association.

   ID     The id of the association.

   LFT    Associations are kept in a hierarchy: this is the leftmost
          spot in the hierarchy.  When used with the RGT variable, all
          associations with a LFT inside this LFT and before the RGT
          are children of this association.

   MaxTRESPerJob
          Maximum number of TRES each job is able to use.

          NOTE: This setting shows up in the sacctmgr output as MaxTRES.

   MaxTRESMinsPerJob
          Maximum number of TRES minutes each job is able to use.

          NOTE: This setting shows up in the sacctmgr output as
          MaxTRESMins.

   MaxTRESPerNode
          Maximum number of TRES each node in a job allocation can use.

   MaxJobs
          Maximum number of jobs each user is allowed to run at one
          time.

   MaxJobsAccrue
          Maximum number of pending jobs able to accrue age priority at
          any given time.

   MaxSubmitJobs
          Maximum number of jobs in a pending or running state at any
          time.

          NOTE: This setting shows up in the sacctmgr output as
          MaxSubmit.

   MaxWallDurationPerJob
          Maximum wall clock time each job is able to use.

          NOTE: This setting shows up in the sacctmgr output as MaxWall.

   Qos    Valid QOS values for this association.

   QosRaw The IDs of the valid QOS values for this association.

   ParentID
          The association id of the parent of this association.

   ParentName
          The account name of the parent of this association.

   Partition
          The name of a partition in the association.

   Priority
          What priority will be added to a job's priority when using
          this association.

   WithRawQOSLevel
          Display QosLevel in an unevaluated raw format, consisting of
          a comma separated list of QOS names prepended with ''
          (nothing), '+' or '-' for the association.  QOS names without
          +/- prepended were assigned (i.e., sacctmgr modify ... set
          QosLevel=qos_name) for the entity listed or on one of its
          parents in the hierarchy.  QOS names with +/- prepended
          indicate the QOS was added/filtered (i.e., sacctmgr modify
          ... set QosLevel=[+-]qos_name) for the entity listed or on
          one of its parents in the hierarchy.  Including WOPLimits
          will show exactly where each QOS was assigned, added or
          filtered in the hierarchy.

   RGT    Associations are kept in a hierarchy: this is the rightmost
          spot in the hierarchy.  When used with the LFT variable, all
          associations with a LFT inside this RGT and after the LFT are
          children of this association.

   User   The name of a user in the association.
SPECIFICATIONS FOR CLUSTERS
   Classification=<classification>
          Type of machine; current classifications are capability,
          capacity and capapacity.

   Features=<comma separated list of feature names>
          Features that are specific to the cluster.  Federated jobs
          can be directed to clusters that contain the job's requested
          features.

   Federation=<federation>
          The federation that this cluster should be a member of.  A
          cluster can only be a member of one federation at a time.

   FedState=<state>
          The state of the cluster in the federation.
          Valid states are:

          ACTIVE Cluster will actively accept and schedule federated
                 jobs.

          INACTIVE
                 Cluster will not schedule or accept any jobs.

          DRAIN  Cluster will not accept any new jobs and will let
                 existing federated jobs complete.

          DRAIN+REMOVE
                 Cluster will not accept any new jobs and will remove
                 itself from the federation once all federated jobs
                 have completed.  When removed from the federation, the
                 cluster will accept jobs as a non-federated cluster.

   Name=<name>
          The name of a cluster.  This should be equal to the
          ClusterName parameter in the slurm.conf configuration file
          for some Slurm-managed cluster.

   RPC=<rpc list>
          Comma separated list of numeric RPC values.

   WithFed
          Appends federation related columns to default format options
          (e.g. Federation,ID,Features,FedState).

   WOLimits
          Display information without limit information.  This is for a
          smaller default format of Cluster,ControlHost,ControlPort,RPC.

   NOTE: You can also use the general specifications list above in the
   GENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES section.
LIST/SHOW CLUSTER FORMAT OPTIONS
   Classification
          Type of machine, i.e. capability, capacity or capapacity.

   Cluster
          The name of the cluster.

   ControlHost
          When a slurmctld registers with the database, the IP address
          of the controller is placed here.

   ControlPort
          When a slurmctld registers with the database, the port the
          controller is listening on is placed here.

   Features
          The list of features on the cluster (if any).

   Federation
          The name of the federation this cluster is a member of (if
          any).

   FedState
          The state of the cluster in the federation (if a member of
          one).

   FedStateRaw
          Numeric value of the name of the FedState.

   Flags  Attributes possessed by the cluster.  Current flags include
          Cray, External and MultipleSlurmd.

          External clusters are registration only clusters.  A
          slurmctld can designate an external slurmdbd with the
          AccountingStorageExternalHost slurm.conf option.  This allows
          a slurmctld to register to an external slurmdbd so that
          clusters attached to the external slurmdbd can communicate
          with the external cluster with Slurm commands.

   ID     The ID assigned to the cluster when a member of a federation.
          This ID uniquely identifies the cluster and its jobs in the
          federation.

   NodeCount
          The current count of nodes associated with the cluster.

   NodeNames
          The current Nodes associated with the cluster.

   PluginIDSelect
          The numeric value of the select plugin the cluster is using.

   RPC    When a slurmctld registers with the database, the RPC version
          the controller is running is placed here.

   TRES   Trackable RESources (Billing, BB (Burst buffer), CPU, Energy,
          GRES, License, Memory, and Node) this cluster is accounting
          for.

   NOTE: You can also view the information about the root association
   for the cluster.  The Association format fields are described in the
   LIST/SHOW ASSOCIATION FORMAT OPTIONS section.

SPECIFICATIONS FOR COORDINATOR
   Account=<comma separated list of account names>
          Account name(s) to add this user as a coordinator of.

   Names=<comma separated list of user names>
          Names of coordinators.

   NOTE: To list coordinators use the WithCoord option with list
   account or list user.
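   For example, making a user a coordinator and then verifying it
   could look like this sketch (user and account names are invented):

```shell
# Make 'alice' a coordinator of the 'science' account.
sacctmgr add coordinator Account=science Names=alice

# Verify by listing accounts along with their coordinators.
sacctmgr list account WithCoord
```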

SPECIFICATIONS FOR EVENTS
   All_Clusters
          Shortcut to get information on all clusters.

   All_Time
          Shortcut to get information over the complete time period.

   Clusters=<comma separated list of cluster names>
          List the events of the cluster(s).  Default is the cluster
          where the command was run.

   End=<OPT>
          Period ending of events.  Default is now.

          Valid time formats are...

          HH:MM[:SS] [AM|PM]
          MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
          MM/DD[/YY]-HH:MM[:SS]
          YYYY-MM-DD[THH:MM[:SS]]
          now[{+|-}count[seconds(default)|minutes|hours|days|weeks]]

   Event=<OPT>
          Specific events to look for; valid options are Cluster or
          Node, default is both.

   MaxTRES=<OPT>
          Max number of TRES affected by an event.

   MinTRES=<OPT>
          Min number of TRES affected by an event.

   Nodes=<comma separated list of node names>
          Node names affected by an event.

   Reason=<comma separated list of reasons>
          Reason an event happened.

   Start=<OPT>
          Period start of events.  Default is 00:00:00 of the previous
          day, unless states are given with the States= specification.
          If that is the case, the default behavior is to return events
          currently in the states specified.

          Valid time formats are...

          HH:MM[:SS] [AM|PM]
          MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
          MM/DD[/YY]-HH:MM[:SS]
          YYYY-MM-DD[THH:MM[:SS]]
          now[{+|-}count[seconds(default)|minutes|hours|days|weeks]]

   States=<comma separated list of states>
          State of a node in a node event.  If this is set, the event
          type is set automatically to Node.

   User=<comma separated list of users>
          Query against users who set the event.  If this is set, the
          event type is set automatically to Node since only user slurm
          can perform a cluster event.
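   A typical event query might look like the following sketch; the
   cluster name and dates are purely illustrative:

```shell
# Show node events on one cluster for a one-week window.
sacctmgr show event cluster=mycluster event=node \
    start=2021-01-01 end=2021-01-08
```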

LIST/SHOW EVENT FORMAT OPTIONS
   Cluster
          The name of the cluster the event happened on.

   ClusterNodes
          The hostlist of nodes on a cluster in a cluster event.

   Duration
          Time period the event lasted.

   End    Period when the event ended.

   Event  Name of the event.

   EventRaw
          Numeric value of the name of the event.

   NodeName
          The node affected by the event.  In a cluster event, this is
          blank.

   Reason The reason an event happened.

   Start  Period when the event started.

   State  On a node event this is the formatted state of the node
          during the event.

   StateRaw
          On a node event this is the numeric value of the state of the
          node during the event.

   TRES   Number of TRES involved with the event.

   User   On a node event this is the user who caused the event to
          happen.

SPECIFICATIONS FOR FEDERATION
   Clusters[+|-]=<comma separated list of cluster names>
          List of clusters to add/remove to a federation.  A blank
          value (e.g. clusters=) will remove all clusters from the
          federation.
          NOTE: A cluster can only be a member of one federation.

   Name=<name>
          The name of the federation.

   Tree   Display federations in a hierarchical fashion.

LIST/SHOW FEDERATION FORMAT OPTIONS
   Features
          The list of features on the cluster.

   Federation
          The name of the federation.

   Cluster
          Name of the cluster that is a member of the federation.

   FedState
          The state of the cluster in the federation.

   FedStateRaw
          Numeric value of the name of the FedState.

   Index  The index of the cluster in the federation.
1082
1084 Comment=<comment>
1085 The job's comment string when the AccountingStoreJobComment
1086 parameter in the slurm.conf file is set (or defaults) to YES.
1087 The user can only modify the comment string of their own job.
1088
1089
1090 Cluster=<cluster_list>
1091 List of clusters to alter jobs on, defaults to local cluster.
1092
1093
1094 DerivedExitCode=<derived_exit_code>
1095 The derived exit code can be modified after a job completes
1096 based on the user's judgment of whether the job succeeded or
1097 failed. The user can only modify the derived exit code of their
1098 own job.
1099
1100
1101 EndTime
1102 Jobs must end before this time to be modified. The time format
1103 is YYYY-MM-DDTHH:MM:SS, unless changed through the
1104 SLURM_TIME_FORMAT environment variable.
1105
1106
1107 JobID=<jobid_list>
1108 The id of the job to change. Not needed if altering multiple
1109 jobs using wckey specification.
1110
1111
1112 NewWCKey=<newwckey>
1113 Used to rename a wckey on job(s) in the accounting database.
1114
1115
1116 StartTime
1117 Jobs must start at or after this time to be modified. Uses
1118 the same time format as EndTime.
1119
1120
1121 User=<user_list>
1122 Used to specify the users whose jobs are to be altered.
1123
1124
1125 WCKey=<wckey_list>
1126 Used to specify the wckeys to alter.
1127
1128
1129 The DerivedExitCode, Comment and WCKey fields are the only
1130 fields of a job record in the database that can be modified
1131 after job completion.
1132
1133
1135 The sacct command is the exclusive command to display job records from
1136 the Slurm database.
1137
1138
1140 NOTE: The group limits (GrpJobs, GrpNodes, etc.) are tested when a job
1141 is being considered for being allocated resources. If starting a job
1142 would cause any of its group limits to be exceeded, that job will not be
1143 considered for scheduling even if that job might preempt other jobs
1144 which would release sufficient group resources for the pending job to
1145 be initiated.
1146
1147
1148 Flags Used by the slurmctld to override or enforce certain character‐
1149 istics.
1150 Valid options are
1151
1152 DenyOnLimit
1153 If set, jobs using this QOS will be rejected at submis‐
1154 sion time if they do not conform to the QOS 'Max' limits.
1155 Group limits will also be treated like 'Max' limits and jobs
1156 exceeding them will be denied. By default jobs that go over
1157 these limits will pend until they conform.
1158 This currently only applies to QOS and Association lim‐
1159 its.
1160
1161 EnforceUsageThreshold
1162 If set, and the QOS also has a UsageThreshold, any jobs
1163 submitted with this QOS that fall below the UsageThresh‐
1164 old will be held until their Fairshare Usage goes above
1165 the Threshold.
1166
1167 NoDecay
1168 If set, this QOS will not have its GrpTRESMins, GrpWall
1169 and UsageRaw decayed by the slurm.conf PriorityDecay‐
1170 HalfLife or PriorityUsageResetPeriod settings. This
1171 allows a QOS to provide aggregate limits that, once con‐
1172 sumed, will not be replenished automatically. Such a QOS
1173 will act as a time-limited quota of resources for an
1174 association that has access to it. Account/user usage
1175 will still be decayed for associations using the QOS.
1176 The QOS GrpTRESMins and GrpWall limits can be increased
1177 or the QOS RawUsage value reset to 0 (zero) to again
1178 allow jobs submitted with this QOS to be queued (if Deny‐
1179 OnLimit is set) or run (pending with QOSGrp{TRES}Minutes‐
1180 Limit or QOSGrpWallLimit reasons, where {TRES} is some
1181 type of trackable resource).
1182
1183 NoReserve
1184 If this flag is set and backfill scheduling is used, jobs
1185 using this QOS will not reserve resources in the backfill
1186 schedule's map of resources allocated through time. This
1187 flag is intended for use with a QOS that may be preempted
1188 by jobs associated with all other QOS (e.g. use with a
1189 "standby" QOS). If this flag is used with a QOS which cannot
1190 be preempted by all other QOS, it could result in starvation
1191 of larger jobs.
1192
1193 PartitionMaxNodes
1194 If set, jobs using this QOS will be able to override the
1195 requested partition's MaxNodes limit.
1196
1197 PartitionMinNodes
1198 If set, jobs using this QOS will be able to override the
1199 requested partition's MinNodes limit.
1200
1201 OverPartQOS
1202 If set, jobs using this QOS will be able to override the
1203 limits of the requested partition's QOS.
1204
1205 PartitionTimeLimit
1206 If set, jobs using this QOS will be able to override the
1207 requested partition's TimeLimit.
1208
1209 RequiresReservation
1210 If set, jobs using this QOS must designate a reservation
1211 when submitting a job. This option can be useful in
1212 restricting usage of a QOS that may have greater preemp‐
1213 tive capability or additional resources to be allowed
1214 only within a reservation.
1215
1216 UsageFactorSafe
1217 If set, and AccountingStorageEnforce includes Safe, jobs
1218 will only be able to run if the job can run to completion
1219 with the UsageFactor applied.
1220
1221
1222 GraceTime
1223 Preemption grace time to be extended to a job which has been
1224 selected for preemption.
1225
1226
1227 GrpTRESMins
1228 The total number of TRES minutes that can possibly be used by
1229 past, present and future jobs running from this QOS.
1230
1231
1232 GrpTRESRunMins
1233 Used to limit the combined total number of TRES minutes used by
1234 all jobs running with this QOS. This takes into consideration the
1235 time limits of running jobs and consumes them; if the limit is
1236 reached, no new jobs are started until other jobs finish and time frees up.
1237
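The bookkeeping this limit implies can be sketched as follows (a simplification with invented job data; the real accounting is done inside slurmctld):

```python
# Simplified sketch of a GrpTRESRunMins check for the cpu TRES: each
# running job consumes (remaining time limit in minutes) * (allocated
# cpus) against the group limit. Job data here is invented.
def cpu_run_mins(running_jobs):
    return sum(job["remaining_min"] * job["cpus"] for job in running_jobs)

def can_start(new_job, running_jobs, grp_cpu_run_mins):
    projected = cpu_run_mins(running_jobs) + new_job["remaining_min"] * new_job["cpus"]
    return projected <= grp_cpu_run_mins

running = [{"remaining_min": 60, "cpus": 4},   # 240 cpu-run-minutes
           {"remaining_min": 30, "cpus": 2}]   #  60 cpu-run-minutes
print(can_start({"remaining_min": 100, "cpus": 8}, running, 1000))  # 300 + 800 > 1000
```

As running jobs burn down their remaining time the consumed amount shrinks, which is why finishing jobs free the limit up again.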
1238
1239 GrpTRES
1240 Maximum number of TRES running jobs are able to be allocated in
1241 aggregate for this QOS.
1242
1243
1244 GrpJobs
1245 Maximum number of running jobs in aggregate for this QOS.
1246
1247
1248 GrpJobsAccrue
1249 Maximum number of pending jobs in aggregate able to accrue age
1250 priority for this QOS.
1251
1252
1253 GrpSubmitJobs
1254 Maximum number of jobs which can be in a pending or running
1255 state at any time in aggregate for this QOS.
1256
1257 NOTE: This setting shows up in the sacctmgr output as GrpSubmit.
1258
1259
1260 GrpWall
1261 Maximum wall clock time running jobs are able to be allocated in
1262 aggregate for this QOS. If this limit is reached submission
1263 requests will be denied and the running jobs will be killed.
1264
1265 ID The id of the QOS.
1266
1267
1268 MaxJobsAccruePerAccount
1269 Maximum number of pending jobs an account (or subacct) can have
1270 accruing age priority at any given time.
1271
1272
1273 MaxJobsAccruePerUser
1274 Maximum number of pending jobs a user can have accruing age pri‐
1275 ority at any given time.
1276
1277
1278 MaxJobsPerAccount
1279 Maximum number of jobs each account is allowed to run at one
1280 time.
1281
1282
1283 MaxJobsPerUser
1284 Maximum number of jobs each user is allowed to run at one time.
1285
1286
1287 MaxSubmitJobsPerAccount
1288 Maximum number of jobs in a pending or running state at any
1289 time per account.
1290
1291
1292 MaxSubmitJobsPerUser
1293 Maximum number of jobs in a pending or running state at any
1294 time per user.
1295
1296
1297 MaxTRESMinsPerJob
1298 Maximum number of TRES minutes each job is able to use.
1299
1300 NOTE: This setting shows up in the sacctmgr output as Max‐
1301 TRESMins.
1302
1303
1304 MaxTRESPerAccount
1305 Maximum number of TRES each account is able to use.
1306
1307
1308 MaxTRESPerJob
1309 Maximum number of TRES each job is able to use.
1310
1311 NOTE: This setting shows up in the sacctmgr output as MaxTRES.
1312
1313
1314 MaxTRESPerNode
1315 Maximum number of TRES each node in a job allocation can use.
1316
1317
1318 MaxTRESPerUser
1319 Maximum number of TRES each user is able to use.
1320
1321
1322 MaxWallDurationPerJob
1323 Maximum wall clock time each job is able to use.
1324
1325 NOTE: This setting shows up in the sacctmgr output as MaxWall.
1326
1327
1328 MinPrioThreshold
1329 Minimum priority required to reserve resources when scheduling.
1330
1331
1332 MinTRESPerJob
1333 Minimum number of TRES each job running under this QOS must
1334 request. Otherwise the job will pend until modified.
1335
1336 NOTE: This setting shows up in the sacctmgr output as MinTRES.
1337
1338
1339 Name Name of the QOS.
1340
1341
1342 Preempt
1343 Other QOS' this QOS can preempt.
1344
1345 NOTE: The Priority of a QOS is NOT related to QOS preemption,
1346 only Preempt is used to define which QOS can preempt others.
1347
1348
1349 PreemptExemptTime
1350 Specifies a minimum run time for jobs of this QOS before they
1351 are considered for preemption. This QOS option takes precedence
1352 over the global PreemptExemptTime. Setting to -1 disables the
1353 option, allowing another QOS or the global option to take
1354 effect. Setting to 0 indicates no minimum run time and super‐
1355 sedes the lower priority QOS (see OverPartQOS) and/or the global
1356 option in slurm.conf.
1357
1358
1359 PreemptMode
1360 Mechanism used to preempt jobs or enable gang scheduling for
1361 this QOS when the cluster PreemptType is set to preempt/qos.
1362 This QOS-specific PreemptMode will override the cluster-wide
1363 PreemptMode for this QOS. Unsetting the QOS specific Preempt‐
1364 Mode, by specifying "OFF", "" or "Cluster", makes it use the
1365 default cluster-wide PreemptMode.
1366 See the description of the cluster-wide PreemptMode parameter
1367 for further details of the available modes.
1368
1369
1370 Priority
1371 What priority will be added to a job's priority when using this
1372 QOS.
1373
1374 NOTE: The Priority of a QOS is NOT related to QOS preemption,
1375 see Preempt instead.
1376
1377
1378 RawUsage=<value>
1379 This allows an administrator to reset the raw usage accrued to a
1380 QOS. The only value currently supported is 0 (zero). This is a
1381 settable specification only - it cannot be used as a filter to
1382 list QOS.
1383
1384
1385 UsageFactor
1386 Usage factor when running with this QOS. See below for more
1387 details.
1388
1389
1390 UsageThreshold
1391 A float representing the lowest fairshare of an association
1392 allowed to run a job. If an association falls below this
1393 threshold and has pending jobs or submits new jobs, those jobs
1394 will be held until the usage goes back above the threshold. Use
1395 sshare to see current shares on the system.
1396
1397
1398 WithDeleted
1399 Display information with previously deleted data.
1400
1401
1402
1404 Description
1405 An arbitrary string describing a QOS.
1406
1407
1408 GraceTime
1409 Preemption grace time to be extended to a job which has been
1410 selected for preemption in the format of hh:mm:ss. The default
1411 value is zero, meaning no preemption grace time is allowed for
1412 this QOS. NOTE: This value is only meaningful for QOS
1413 PreemptMode=CANCEL.
1414
1415
1416 GrpTRESMins
1417 The total number of TRES minutes that can possibly be used by
1418 past, present and future jobs running from this QOS. To clear a
1419 previously set value use the modify command with a new value of
1420 -1 for each TRES id. NOTE: This limit only applies when using
1421 the Priority Multifactor plugin. The time is decayed using the
1422 value of PriorityDecayHalfLife or PriorityUsageResetPeriod as
1423 set in the slurm.conf. When this limit is reached all associ‐
1424 ated jobs running will be killed and all future jobs submitted
1425 with this QOS will be delayed until they are able to run inside
1426 the limit.
1427
1428
1429 GrpTRES
1430 Maximum number of TRES running jobs are able to be allocated in
1431 aggregate for this QOS. To clear a previously set value use the
1432 modify command with a new value of -1 for each TRES id.
1433
1434
1435 GrpJobs
1436 Maximum number of running jobs in aggregate for this QOS. To
1437 clear a previously set value use the modify command with a new
1438 value of -1.
1439
1440
1441 GrpJobsAccrue
1442 Maximum number of pending jobs in aggregate able to accrue age
1443 priority for this QOS. To clear a previously set value use the
1444 modify command with a new value of -1.
1445
1446
1447 GrpSubmitJobs
1448 Maximum number of jobs which can be in a pending or running
1449 state at any time in aggregate for this QOS. To clear a previ‐
1450 ously set value use the modify command with a new value of -1.
1451
1452 NOTE: This setting shows up in the sacctmgr output as GrpSubmit.
1453
1454
1455 GrpWall
1456 Maximum wall clock time running jobs are able to be allocated in
1457 aggregate for this QOS. To clear a previously set value use the
1458 modify command with a new value of -1. NOTE: This limit only
1459 applies when using the Priority Multifactor plugin. The time is
1460 decayed using the value of PriorityDecayHalfLife or Priori‐
1461 tyUsageResetPeriod as set in the slurm.conf. When this limit is
1462 reached all associated jobs running will be killed and all
1463 future jobs submitted with this QOS will be delayed until they
1464 are able to run inside the limit.
1465
1466
1467 MaxTRESMinsPerJob
1468 Maximum number of TRES minutes each job is able to use. To
1469 clear a previously set value use the modify command with a new
1470 value of -1 for each TRES id.
1471
1472 NOTE: This setting shows up in the sacctmgr output as Max‐
1473 TRESMins.
1474
1475
1476 MaxTRESPerAccount
1477 Maximum number of TRES each account is able to use. To clear a
1478 previously set value use the modify command with a new value of
1479 -1 for each TRES id.
1480
1481
1482 MaxTRESPerJob
1483 Maximum number of TRES each job is able to use. To clear a pre‐
1484 viously set value use the modify command with a new value of -1
1485 for each TRES id.
1486
1487 NOTE: This setting shows up in the sacctmgr output as MaxTRES.
1488
1489
1490 MaxTRESPerNode
1491 Maximum number of TRES each node in a job allocation can use.
1492 To clear a previously set value use the modify command with a
1493 new value of -1 for each TRES id.
1494
1495
1496 MaxTRESPerUser
1497 Maximum number of TRES each user is able to use. To clear a
1498 previously set value use the modify command with a new value of
1499 -1 for each TRES id.
1500
1501
1502 MaxJobsPerAccount
1503 Maximum number of jobs each account is allowed to run at one
1504 time. To clear a previously set value use the modify command
1505 with a new value of -1.
1506
1507
1508 MaxJobsPerUser
1509 Maximum number of jobs each user is allowed to run at one time.
1510 To clear a previously set value use the modify command with a
1511 new value of -1.
1512
1513
1514 MaxSubmitJobsPerAccount
1515 Maximum number of jobs in a pending or running state at any
1516 time per account. To clear a previously set value use the
1517 modify command with a new value of -1.
1518
1519
1520 MaxSubmitJobsPerUser
1521 Maximum number of jobs in a pending or running state at any
1522 time per user. To clear a previously set value use the modify
1523 command with a new value of -1.
1524
1525
1526 MaxWallDurationPerJob
1527 Maximum wall clock time each job is able to use. <max wall>
1528 format is <min> or <min>:<sec> or <hr>:<min>:<sec> or
1529 <days>-<hr>:<min>:<sec> or <days>-<hr>. The value is recorded
1530 in minutes with rounding as needed. To clear a previously set
1531 value use the modify command with a new value of -1.
1532
1533 NOTE: This setting shows up in the sacctmgr output as MaxWall.
1534
1535
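As an illustration of the accepted duration syntax, this hypothetical Python helper converts such strings into minutes (the rounding rule here is a guess, not Slurm's exact behavior):

```python
# Hypothetical converter for the wall-time formats listed above:
# <min>, <min>:<sec>, <hr>:<min>:<sec>, <days>-<hr>[:<min>[:<sec>]].
# Rounds to the nearest minute; Slurm's exact rounding may differ.
def wall_to_minutes(text):
    if "-" in text:
        days, rest = text.split("-", 1)
    else:
        days, rest = "0", text
    parts = [int(p) for p in rest.split(":")]
    if len(parts) == 1:
        # A lone number is hours after a days part, otherwise minutes.
        h, m, s = (parts[0], 0, 0) if "-" in text else (0, parts[0], 0)
    elif len(parts) == 2:
        h, m, s = 0, parts[0], parts[1]   # <min>:<sec>
    else:
        h, m, s = parts                    # <hr>:<min>:<sec>
    total_sec = int(days) * 86400 + h * 3600 + m * 60 + s
    return (total_sec + 30) // 60

print(wall_to_minutes("1-02:00:00"))  # 1 day + 2 hours = 1560 minutes
```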
1536 MinPrioThreshold
1537 Minimum priority required to reserve resources when scheduling.
1538 To clear a previously set value use the modify command with a
1539 new value of -1.
1540
1541
1542 MinTRES
1543 Minimum number of TRES each job running under this QOS must
1544 request. Otherwise the job will pend until modified. To clear
1545 a previously set value use the modify command with a new value
1546 of -1 for each TRES id.
1547
1548
1549 Name Name of the QOS. Needed for creation.
1550
1551
1552 Preempt
1553 Other QOS' this QOS can preempt. Setting a Preempt to '' (two
1554 single quotes with nothing between them) restores its default
1555 setting. You can also use the operators += and -= to add or
1556 remove certain QOS from a QOS list.
1557
1558
1559 PreemptMode
1560 Mechanism used to preempt jobs of this QOS if the cluster's
1561 PreemptType is configured to preempt/qos. The default preemption
1562 mechanism is specified by the cluster-wide PreemptMode configu‐
1563 ration parameter. Possible values are "Cluster" (meaning use
1564 cluster default), "Cancel", and "Requeue". This option is not
1565 compatible with PreemptMode=OFF or PreemptMode=SUSPEND (i.e.
1566 preempted jobs must be removed from the resources).
1567
1568
1569 Priority
1570 What priority will be added to a job's priority when using this
1571 QOS. To clear a previously set value use the modify command
1572 with a new value of -1.
1573
1574
1575 UsageFactor
1576 A float that is factored into a job's TRES usage (e.g.
1577 RawUsage, TRESMins, TRESRunMins). For example, if the
1578 UsageFactor was 2, every TRESBillingUnit second a job ran
1579 would count as 2. If the UsageFactor was .5, every second
1580 would only count as half. A setting of 0 would add no timed
1581 usage from the job.
1582
1583 The usage factor only applies to the job's QOS and not the par‐
1584 tition QOS.
1585
1586 If the UsageFactorSafe flag is set and AccountingStorageEnforce
1587 includes Safe, jobs will only be able to run if the job can run
1588 to completion with the UsageFactor applied.
1589
1590 If the UsageFactorSafe flag is not set and AccountingStorageEn‐
1591 force includes Safe, a job will be able to be scheduled without
1592 the UsageFactor applied and will be able to run without being
1593 killed due to limits.
1594
1595 If the UsageFactorSafe flag is not set and AccountingStorageEn‐
1596 force does not include Safe, a job will be able to be scheduled
1597 without the UsageFactor applied and could be killed due to lim‐
1598 its.
1599
1600 See AccountingStorageEnforce in slurm.conf man page.
1601
1602 Default is 1. To clear a previously set value use the modify
1603 command with a new value of -1.
1604
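The factor arithmetic described above can be shown with a small worked example (numbers invented for illustration):

```python
# Worked example of the UsageFactor arithmetic: each billing-unit
# second of a job is multiplied by the factor before being charged.
def charged_usage(billing_units, runtime_seconds, usage_factor):
    return billing_units * runtime_seconds * usage_factor

base = charged_usage(4, 600, 1)       # factor 1 (the default): 2400
print(charged_usage(4, 600, 2))       # factor 2 counts double: 4800
print(charged_usage(4, 600, 0.5))     # factor .5 counts half: 1200.0
print(charged_usage(4, 600, 0))       # factor 0 adds no usage: 0
```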
1605
1607 Clusters=<comma separated list of cluster names>
1608 List the reservations of the cluster(s). Default is the cluster
1609 where the command was run.
1610
1611
1612 End=<OPT>
1613 Period ending of reservations. Default is now.
1614
1615 Valid time formats are...
1616
1617 HH:MM[:SS] [AM|PM]
1618 MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
1619 MM/DD[/YY]-HH:MM[:SS]
1620 YYYY-MM-DD[THH:MM[:SS]]
1621 now[{+|-}count[seconds(default)|minutes|hours|days|weeks]]
1622
1623
1624 ID=<OPT>
1625 Comma separated list of reservation ids.
1626
1627
1628 Names=<OPT>
1629 Comma separated list of reservation names.
1630
1631
1632 Nodes=<comma separated list of node names>
1633 Node names where reservation ran.
1634
1635
1636 Start=<OPT>
1637 Period start of reservations. Default is 00:00:00 of current
1638 day.
1639
1640 Valid time formats are...
1641
1642 HH:MM[:SS] [AM|PM]
1643 MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
1644 MM/DD[/YY]-HH:MM[:SS]
1645 YYYY-MM-DD[THH:MM[:SS]]
1646 now[{+|-}count[seconds(default)|minutes|hours|days|weeks]]
1647
1648
1650 Associations
1651 The id's of the associations able to run in the reservation.
1652
1653
1654 Cluster
1655 Name of cluster reservation was on.
1656
1657
1658 End End time of reservation.
1659
1660
1661 Flags Flags on the reservation.
1662
1663
1664 ID Reservation ID.
1665
1666
1667 Name Name of this reservation.
1668
1669
1670 NodeNames
1671 List of nodes in the reservation.
1672
1673
1674 Start Start time of reservation.
1675
1676
1677 TRES List of TRES in the reservation.
1678
1679
1680 UnusedWall
1681 Wall clock time in seconds unused by any job. A job's allocated
1682 usage is its run time multiplied by the ratio of its CPUs to the
1683 total number of CPUs in the reservation. For example, a job
1684 using all the CPUs in the reservation running for 1 minute would
1685 reduce unused_wall by 1 minute.
1686
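The UnusedWall computation described above amounts to the following (a sketch with invented jobs):

```python
# Sketch of the UnusedWall rule: each job charges its run time scaled
# by its share of the reservation's CPUs. Job data is invented.
def unused_wall(resv_seconds, resv_cpus, jobs):
    used = sum(run_sec * cpus / resv_cpus for run_sec, cpus in jobs)
    return resv_seconds - used

# 1-hour, 100-CPU reservation: one job used all 100 CPUs for 1 minute,
# another used 50 CPUs for 10 minutes (charged as 5 minutes).
print(unused_wall(3600, 100, [(60, 100), (600, 50)]))  # 3240.0 seconds
```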
1687
1688
1690 Clusters=<name list>
1691 Comma separated list of cluster names on which specified resources are to be
1692 available. If no names are designated then the clusters already allowed to use this resource will be altered.
1693
1694
1695 Count=<OPT>
1696 Number of software resources of a specific name configured on
1697 the system being controlled by a resource manager.
1698
1699
1700 Description=
1701 A brief description of the resource.
1702
1703
1704 Flags=<OPT>
1705 Flags that identify specific attributes of the system resource.
1706 At this time no flags have been defined.
1707
1708
1709 ServerType=<OPT>
1710 The type of a software resource manager providing the licenses.
1711 For example, FlexNet Publisher (FlexLM license server) or
1712 Reprise License Manager (RLM).
1713
1714
1715 Names=<OPT>
1716 Comma separated list of the name of a resource configured on the
1717 system being controlled by a resource manager. If this resource
1718 is seen on the slurmctld its name will be name@server to distin‐
1719 guish it from local resources defined in a slurm.conf.
1720
1721
1722 PercentAllowed=<percent allowed>
1723 Percentage of a specific resource that can be used on specified
1724 cluster.
1725
1726
1727 Server=<OPT>
1728 The name of the server serving up the resource. Default is
1729 'slurmdb' indicating the licenses are being served by the data‐
1730 base.
1731
1732
1733 Type=<OPT>
1734 The type of the resource represented by this record. Currently
1735 the only valid type is License.
1736
1737
1738 WithClusters
1739 Display each cluster's percentage of the resource. If a resource
1740 hasn't been given to a cluster the resource will not be dis‐
1741 played with this flag.
1742
1743
1744 NOTE: Resource is used to define each resource configured on a system
1745 available for usage by Slurm clusters.
1746
1747
1749 Cluster
1750 Name of cluster resource is given to.
1751
1752
1753 Count The count of a specific resource configured on the system glob‐
1754 ally.
1755
1756
1757 Allocated
1758 The percent of licenses allocated to a cluster.
1759
1760
1761 Description
1762 Description of the resource.
1763
1764
1765 ServerType
1766 The type of the server controlling the licenses.
1767
1768
1769 Name Name of this resource.
1770
1771
1772 Server Server serving up the resource.
1773
1774
1775 Type Type of resource this record represents.
1776
1777
1779 Cluster
1780 Name of cluster job ran on.
1781
1782
1783 ID Id of the job.
1784
1785
1786 Name Name of the job.
1787
1788
1789 Partition
1790 Partition job ran on.
1791
1792
1793 State Current State of the job in the database.
1794
1795
1796 TimeStart
1797 Time job started running.
1798
1799
1800 TimeEnd
1801 Current recorded time of the end of the job.
1802
1803
1805 Accounts=<comma separated list of account names>
1806 Only print out the transactions affecting specified accounts.
1807
1808
1809 Action=<Specific action the list will display>
1810
1811
1812 Actor=<Specific name the list will display>
1813 Only display transactions done by a certain person.
1814
1815
1816 Clusters=<comma separated list of cluster names>
1817 Only print out the transactions affecting specified clusters.
1818
1819
1820 End=<Date and time of last transaction to return>
1821 Return all transactions before this Date and time. Default is
1822 now.
1823
1824
1825 Start=<Date and time of first transaction to return>
1826 Return all transactions after this Date and time. Default is
1827 epoch.
1828
1829 Valid time formats for End and Start are...
1830
1831 HH:MM[:SS] [AM|PM]
1832 MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
1833 MM/DD[/YY]-HH:MM[:SS]
1834 YYYY-MM-DD[THH:MM[:SS]]
1835 now[{+|-}count[seconds(default)|minutes|hours|days|weeks]]
1836
1837
1838 Users=<comma separated list of user names>
1839 Only print out the transactions affecting specified users.
1840
1841
1842 WithAssoc
1843 Get information about which associations were affected by the
1844 transactions.
1845
1846
1847
1849 Action Displays the type of Action that took place.
1850
1851
1852 Actor Displays the Actor who generated the transaction.
1853
1854
1855 Info Displays details of the transaction.
1856
1857
1858 TimeStamp
1859 Displays when the transaction occurred.
1860
1861
1862 Where Displays details of the constraints for the transaction.
1863
1864 NOTE: If using the WithAssoc option you can also view the information
1865 about the various associations the transaction affected. The Associa‐
1866 tion format fields are described in the LIST/SHOW ASSOCIATION FORMAT
1867 OPTIONS section.
1868
1869
1870
1872 Account=<account>
1873 Account name to add this user to.
1874
1875
1876 AdminLevel=<level>
1877 Admin level of user. Valid levels are None, Operator, and
1878 Admin.
1879
1880
1881 Cluster=<cluster>
1882 Specific cluster to add user to the account on. Default is all
1883 in system.
1884
1885
1886 DefaultAccount=<account>
1887 Identify the default bank account name to be used for a job if
1888 none is specified at submission time.
1889
1890
1891 DefaultWCKey=<defaultwckey>
1892 Identify the default Workload Characterization Key.
1893
1894
1895 Name=<name>
1896 Name of user.
1897
1898
1899 NewName=<newname>
1900 Used to rename a user in the accounting database.
1901
1902
1903 Partition=<name>
1904 Partition name.
1905
1906
1907 RawUsage=<value>
1908 This allows an administrator to reset the raw usage accrued to a
1909 user. The only value currently supported is 0 (zero). This is
1910 a settable specification only - it cannot be used as a filter to
1911 list users.
1912
1913
1914 WCKeys=<wckeys>
1915 Workload Characterization Key values.
1916
1917
1918 WithAssoc
1919 Display all associations for this user.
1920
1921
1922 WithCoord
1923 Display all accounts a user is coordinator for.
1924
1925
1926 WithDeleted
1927 Display information with previously deleted data.
1928
1929 NOTE: If using the WithAssoc option you can also query against associa‐
1930 tion specific information to view only certain associations this user
1931 may have. These extra options can be found in the SPECIFICATIONS FOR
1932 ASSOCIATIONS section. You can also use the general specifications list
1933 above in the GENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES sec‐
1934 tion.
1935
1936
1937
1939 AdminLevel
1940 Admin level of user.
1941
1942
1943 DefaultAccount
1944 The user's default account.
1945
1946
1947 Coordinators
1948 List of users that are a coordinator of the account. (Only
1949 filled in when using the WithCoord option.)
1950
1951
1952 User The name of a user.
1953
1954 NOTE: If using the WithAssoc option you can also view the information
1955 about the various associations the user may have on all the clusters in
1956 the system. The association information can be filtered. Note that all
1957 the users in the database will always be shown, as the filter only
1958 applies to the association data. The Association format fields are
1959 described in the LIST/SHOW ASSOCIATION FORMAT OPTIONS section.
1960
1961
1962
1964 WCKey Workload Characterization Key.
1965
1966
1967 Cluster
1968 Specific cluster for the WCKey.
1969
1970
1971 User The name of a user for the WCKey.
1972
1973 NOTE: If using the WithAssoc option you can also view the information
1974 about the various associations the user may have on all the clusters in
1975 the system. The Association format fields are described in the
1976 LIST/SHOW ASSOCIATION FORMAT OPTIONS section.
1977
1978
1980 Name The name of the trackable resource. This option is required for
1981 TRES types BB (Burst buffer), GRES, and License. Types CPU,
1982 Energy, Memory, and Node do not have Names. For example, if
1983 GRES is the type, then the name is the denomination of the GRES
1984 itself, e.g. GPU.
1985
1986
1987 ID The identification number of the trackable resource as it
1988 appears in the database.
1989
1990
1991 Type The type of the trackable resource. Current types are BB (Burst
1992 buffer), CPU, Energy, GRES, License, Memory, and Node.
1993
1994
1996 Trackable RESources (TRES) are used in many QOS or Association limits.
1997 When setting the limits they are given as a comma separated list. Each TRES
1998 has a different limit, e.g. GrpTRESMins=cpu=10,mem=20 would create 2 different
1999 limits: 1 for 10 cpu minutes and 1 for 20 MB memory minutes. This is
2000 the case for each limit that deals with TRES. To remove a limit, -1
2001 is used, e.g. GrpTRESMins=cpu=-1 would remove only the cpu TRES limit.
2002
2003 NOTE: When dealing with Memory as a TRES all limits are in MB.
2004
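The per-TRES decomposition of such a limit string can be sketched as follows (an illustrative helper, not part of sacctmgr):

```python
# Illustrative decomposition of a TRES limit string such as
# "cpu=10,mem=20" into separate per-TRES limits; a value of -1
# clears only that TRES's limit, as described above.
def apply_tres(current, spec):
    for item in spec.split(","):
        name, value = item.split("=")
        if int(value) == -1:
            current.pop(name, None)
        else:
            current[name] = int(value)
    return current

limits = apply_tres({}, "cpu=10,mem=20")  # two separate limits
limits = apply_tres(limits, "cpu=-1")     # removes only the cpu limit
print(limits)                             # {'mem': 20}
```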
2005 NOTE: The Billing TRES is calculated from a partition's TRESBilling‐
2006 Weights. It is temporarily calculated during scheduling for each parti‐
2007 tion to enforce billing TRES limits. The final Billing TRES is calcu‐
2008 lated after the job has been allocated resources. The final number can
2009 be seen in scontrol show jobs and sacct output.
2010
2011
2013 When using the format option for listing various fields you can put a
2014 %NUMBER afterwards to specify how many characters should be printed.
2015
2016 e.g. format=name%30 will print 30 characters of field name right justi‐
2017 fied. A -30 will print 30 characters left justified.
2018
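The width rule can be mimicked in a few lines (a sketch; sacctmgr's own truncation details may differ):

```python
# Sketch of the format=name%30 width rule: positive width right
# justifies, negative width left justifies; values longer than the
# width are truncated to it.
def fit(value, width):
    w = abs(width)
    text = value[:w]
    return text.rjust(w) if width > 0 else text.ljust(w)

print(repr(fit("physics", 10)))   # '   physics'
print(repr(fit("physics", -10)))  # 'physics   '
```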
2019
2021 sacctmgr has the capability to load and dump Slurm association data to
2022 and from a file. This method can easily add a new cluster or copy an
2023 existing cluster's associations into a new cluster with similar
2024 accounts. Each file contains Slurm association data for a single clus‐
2025 ter. Comments can be put into the file with the # character. Each
2026 line of information must begin with one of the four titles: Cluster,
2027 Parent, Account or User. Following the title is a space, dash, space,
2028 entity value, then specifications. Specifications are colon separated.
2029 If any variable, such as an Organization name, has a space in it, sur‐
2030 round the name with single or double quotes.
2031
2032 To create a file of associations you can run
2033 sacctmgr dump tux file=tux.cfg
2034
2035 To load a previously created file you can run
2036 sacctmgr load file=tux.cfg
2037
2038 sacctmgr dump/load must be run as a Slurm administrator or root. If
2039 using sacctmgr load on a database without any associations, it must be
2040 run as root (because there aren't any users in the database yet).
2041
2042 Other options for load are:
2043 clean - delete what was already there and start from scratch
2044 with this information.
2045 Cluster= - specify a different name for the cluster than that
2046 which is in the file.
2047
2048 The associations in the system follow a hierarchy, and so does the
2049 file. Anything that is a parent needs to be defined before any chil‐
2050 dren. The only exception is the understood 'root' account. This is
2051 always a default for any cluster and does not need to be defined.
2052
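A minimal file in this layout might look like the following (cluster, account, and user names are invented for illustration):

```
# Comments start with '#'.
Cluster - tux:MaxTRESPerJob=node=15
Parent - root
Account - science:Description='Science umbrella':Organization=sciences
Parent - science
Account - physics:Description='Physics department':Organization=sciences:FairShare=10
User - aturing:FairShare=5
```

Note that the science account is defined before it is named as a Parent, in keeping with the hierarchy rule above.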
2053 To edit/create a file start with a cluster line for the new cluster:
2054
2055 Cluster - cluster_name:MaxTRESPerJob=node=15
2056
2057 Anything included on this line will be the default for all
2058 associations on this cluster. The options for the cluster are:
2059
2060 GrpTRESMins=
2061 The total number of TRES minutes that can possibly be used by past,
2062 present and future jobs running from this association and its children.
2063
2064 GrpTRESRunMins=
2065 Used to limit the combined total number of TRES minutes used by all
2066 jobs running with this association and its children. This takes the
2067 time limits of running jobs into consideration and consumes them; if
2068 the limit is reached, no new jobs are started until other jobs finish
2069 and free up time.
2070
2071 GrpTRES=
2072 Maximum number of TRES running jobs are able to be
2073 allocated in aggregate for this association and all associations which
2074 are children of this association.
2075
2076 GrpJobs=
2077 Maximum number of running jobs in aggregate for this
2078 association and all associations which are children of this association.
2079
2080 GrpJobsAccrue=
2081 Maximum number of pending jobs in aggregate able to accrue age priority for this
2082 association and all associations which are children of this association.
2083
2084 GrpNodes=
2085 Maximum number of nodes running jobs are able to be
2086 allocated in aggregate for this association and all associations which
2087 are children of this association.
2088
2089 GrpSubmitJobs=
2090 Maximum number of jobs which can be in a pending or
2091 running state at any time in aggregate for this association and all
2092 associations which are children of this association.
2093
2094 GrpWall=
2095 Maximum wall clock time running jobs are able to be
2096 allocated in aggregate for this association and all associations which
2097 are children of this association.
2098
2099 FairShare=
2100 Number used in conjunction with other associations to determine job priority.
2101
2102 MaxJobs=
2103 Maximum number of jobs the children of this association can run.
2104
2105 MaxTRESPerJob=
2106 Maximum number of trackable resources per job the children of this association
2107 can run.
2108
2109 MaxWallDurationPerJob=
2110 Maximum time (not related to job size) jobs belonging to children of this account can run.
2111
2112 QOS=
2113 Comma separated list of Quality of Service names (Defined in sacctmgr).
2114
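The GrpTRESRunMins limit above can be pictured with a simplified model (this is an illustrative sketch, not Slurm's actual scheduler logic): each running job "commits" its TRES count multiplied by its remaining time, and a new job starts only if its own TRES-minutes fit under the group limit alongside that committed amount:

```python
def committed_run_mins(running_jobs):
    """Sum each running job's TRES count times its remaining minutes.
    running_jobs: list of (tres_count, remaining_minutes) tuples."""
    return sum(tres * minutes for tres, minutes in running_jobs)

def job_may_start(limit, running_jobs, new_tres, new_limit_mins):
    """A new job starts only if its own TRES-minutes fit under the
    group limit alongside what running jobs could still consume."""
    return committed_run_mins(running_jobs) + new_tres * new_limit_mins <= limit
```

As running jobs burn down their time limits, committed minutes shrink, which is why jobs blocked on this limit start again once others finish.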
2115 After the entry for the root account you will have entries for the other
2116 accounts on the system. The entries will look similar to this example:
2117
2118 Parent - root
2119 Account - cs:MaxTRESPerJob=node=5:MaxJobs=4:FairShare=399:MaxWallDurationPerJob=40:Description='Computer Science':Organization='LC'
2120 Parent - cs
2121 Account - test:MaxTRESPerJob=node=1:MaxJobs=1:FairShare=1:MaxWallDurationPerJob=1:Description='Test Account':Organization='Test'
2122
2123 Any of the options after a ':' can be left out and they can be in any order.
2124 If you want to add any sub accounts just list the Parent THAT HAS ALREADY
2125 BEEN CREATED before the account you are adding.
2126
2127 Account options are:
2128
2129 Description=
2130 A brief description of the account.
2131
2132 GrpTRESMins=
2133 Maximum number of TRES minutes running jobs are able to
2134 be allocated in aggregate for this association and all associations
2135 which are children of this association.

2136 GrpTRESRunMins=
2137 Used to limit the combined total number of TRES minutes used by all
2138 jobs running with this association and its children. This takes the
2139 time limits of running jobs into consideration and consumes them; if
2140 the limit is reached, no new jobs are started until other jobs finish
2141 and free up time.
2142
2143 GrpTRES=
2144 Maximum number of TRES running jobs are able to be
2145 allocated in aggregate for this association and all associations which
2146 are children of this association.
2147
2148 GrpJobs=
2149 Maximum number of running jobs in aggregate for this
2150 association and all associations which are children of this association.
2151
2152 GrpJobsAccrue=
2153 Maximum number of pending jobs in aggregate able to accrue age priority for this
2154 association and all associations which are children of this association.
2155
2156 GrpNodes=
2157 Maximum number of nodes running jobs are able to be
2158 allocated in aggregate for this association and all associations which
2159 are children of this association.
2160
2161 GrpSubmitJobs=
2162 Maximum number of jobs which can be in a pending or
2163 running state at any time in aggregate for this association and all
2164 associations which are children of this association.
2165
2166 GrpWall=
2167 Maximum wall clock time running jobs are able to be
2168 allocated in aggregate for this association and all associations which
2169 are children of this association.
2170
2171 FairShare=
2172 Number used in conjunction with other associations to determine job priority.
2173
2174 MaxJobs=
2175 Maximum number of jobs the children of this association can run.
2176
2177 MaxNodesPerJob=
2178 Maximum number of nodes per job the children of this association can run.
2179
2180 MaxWallDurationPerJob=
2181 Maximum time (not related to job size) jobs belonging to children of this account can run.
2182
2183 Organization=
2184 Name of organization that owns this account.
2185
2186 QOS(=,+=,-=)
2187 Comma separated list of Quality of Service names (Defined in sacctmgr).
2188
2189
2190 To add users to an account add a line after the Parent line, similar to this:
2191
2192 Parent - test
2193 User - adam:MaxTRESPerJob=node=2:MaxJobs=3:FairShare=1:MaxWallDurationPerJob=1:AdminLevel=Operator:Coordinator='test'
2194
2195
2196 User options are:
2197
2198 AdminLevel=
2199 Type of admin this user is (Administrator, Operator)
2200 Must be defined on the first occurrence of the user.
2201
2202 Coordinator=
2203 Comma separated list of accounts this user is coordinator over
2204 Must be defined on the first occurrence of the user.
2205
2206 DefaultAccount=
2207 System wide default account name
2208 Must be defined on the first occurrence of the user.
2209
2210 FairShare=
2211 Number used in conjunction with other associations to determine job priority.
2212
2213 MaxJobs=
2214 Maximum number of jobs this user can run.
2215
2216 MaxTRESPerJob=
2217 Maximum number of trackable resources per job this user can run.
2218
2219 MaxWallDurationPerJob=
2220 Maximum time (not related to job size) this user can run.
2221
2222 QOS(=,+=,-=)
2223 Comma separated list of Quality of Service names (Defined in sacctmgr).
2224
2225
2227 Sacctmgr has the capability to archive to a flat file and/or load that
2228 data if needed later. The archiving is usually done by the slurmdbd
2229 and it is highly recommended you only do it through sacctmgr if you
2230 completely understand what you are doing. For slurmdbd options see
2231 "man slurmdbd" for more information. Loading data into the database
2232 can be done from these files to either view old data or regenerate
2233 rolled up data.
2234
2235
2236 archive dump
2237 Dump accounting data to file. Data will not be archived unless the cor‐
2238 responding purge option is included in this command or in slur‐
2239 mdbd.conf. This operation cannot be rolled back once executed. If one
2240 of the following options is not specified when sacctmgr is called, the
2241 value configured in slurmdbd.conf is used.
2242
2243
2244 Directory=
2245 Directory to store the archive data.
2246
2247 Events Archive Events. If not specified and PurgeEventAfter is set all
2248 event data removed will be lost permanently.
2249
2250 Jobs Archive Jobs. If not specified and PurgeJobAfter is set all job
2251 data removed will be lost permanently.
2252
2253 PurgeEventAfter=
2254 Purge cluster event records older than time stated in months.
2255 If you want to purge on a shorter time period you can include
2256 hours, or days behind the numeric value to get those more fre‐
2257 quent purges. (e.g. a value of '12hours' would purge everything
2258 older than 12 hours.)
2259
2260 PurgeJobAfter=
2261 Purge job records older than time stated in months. If you want
2262 to purge on a shorter time period you can include hours, or days
2263 behind the numeric value to get those more frequent purges.
2264 (e.g. a value of '12hours' would purge everything older than 12
2265 hours.)
2266
2267 PurgeStepAfter=
2268 Purge step records older than time stated in months. If you
2269 want to purge on a shorter time period you can include hours, or
2270 days behind the numeric value to get those more frequent purges.
2271 (e.g. a value of '12hours' would purge everything older than 12
2272 hours.)
2273
2274 PurgeSuspendAfter=
2275 Purge job suspend records older than time stated in months. If
2276 you want to purge on a shorter time period you can include
2277 hours, or days behind the numeric value to get those more fre‐
2278 quent purges. (e.g. a value of '12hours' would purge everything
2279 older than 12 hours.)
2280
2281 Script=
2282 Run this script instead of the generic form of archive to flat
2283 files.
2284
2285 Steps Archive Steps. If not specified and PurgeStepAfter is set all
2286 step data removed will be lost permanently.
2287
2288 Suspend
2289 Archive Suspend Data. If not specified and PurgeSuspendAfter is
2290 set all suspend data removed will be lost permanently.
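The purge periods above accept either a bare number of months or a number suffixed with a shorter unit, such as '12hours' or '30days'. A small sketch (our own helper, not part of Slurm) that normalizes such a value:

```python
import re

def parse_purge(value):
    """Parse a purge period like '6', '12hours', or '30days'.
    A bare number means months (the default unit for these options)."""
    m = re.fullmatch(r"(\d+)\s*(hours?|days?|months?)?",
                     value.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"bad purge period: {value!r}")
    count = int(m.group(1))
    unit = (m.group(2) or "months").lower().rstrip("s") + "s"
    return count, unit
```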
2291
2292
2293 archive load
2294 Load previously archived data into the database. The archive file will
2295 not be loaded if the records already exist in the database - therefore,
2296 trying to load an archive file more than once will result in an error.
2297 When this data is again archived and purged from the database, if the
2298 old archive file is still in the directory ArchiveDir, a new archive
2299 file will be created (see ArchiveDir in the slurmdbd.conf man page), so
2300 the old file will not be overwritten and these files will have dupli‐
2301 cate records.
2302
2303
2304 File= File to load into database. The specified file must exist on the
2305 slurmdbd host, which is not necessarily the machine running the
2306 command.
2307
2308 Insert=
2309 SQL to insert directly into the database. This should be used
2310 very cautiously since it writes your SQL directly into the
2311 database.
2312
2313
2315 Executing sacctmgr sends a remote procedure call to slurmdbd. If enough
2316 calls from sacctmgr or other Slurm client commands that send remote
2317 procedure calls to the slurmdbd daemon come in at once, it can result
2318 in a degradation of performance of the slurmdbd daemon, possibly
2319 resulting in a denial of service.
2320
2321 Do not run sacctmgr or other Slurm client commands that send remote
2322 procedure calls to slurmdbd from loops in shell scripts or other pro‐
2323 grams. Ensure that programs limit calls to sacctmgr to the minimum
2324 necessary for the information you are trying to gather.
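One way to follow this advice is to make a single parsable query (the -P option, described above) and filter the output locally, instead of invoking sacctmgr once per user in a shell loop. The sketch below parses '|'-delimited output with a trailing '|'; the sample data is illustrative:

```python
def parse_parsable(output):
    """Parse 'sacctmgr -P list assoc format=...' style output:
    '|'-delimited columns with a trailing '|' and a header row."""
    lines = [l.rstrip("|").split("|") for l in output.strip().splitlines()]
    header, rows = lines[0], lines[1:]
    return [dict(zip(header, row)) for row in rows]

# One call's worth of (illustrative) output replaces many per-user calls:
sample = """Cluster|Account|User|
tux|physics|adam|
tux|chemistry|brian|
"""
```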
2325
2326
2328 Some sacctmgr options may be set via environment variables. These envi‐
2329 ronment variables, along with their corresponding options, are listed
2330 below. (Note: command-line options will always override these settings.)
2331
2332 SLURM_CONF The location of the Slurm configuration file.
2333
2334
2336 NOTE: There is an order to set up accounting associations. You must
2337 define clusters before you add accounts and you must add accounts
2338 before you can add users.
2339
2340 -> sacctmgr create cluster tux
2341 -> sacctmgr create account name=science fairshare=50
2342 -> sacctmgr create account name=chemistry parent=science fairshare=30
2343 -> sacctmgr create account name=physics parent=science fairshare=20
2344 -> sacctmgr create user name=adam cluster=tux account=physics fair‐
2345 share=10
2346 -> sacctmgr delete user name=adam cluster=tux account=physics
2347 -> sacctmgr delete account name=physics cluster=tux
2348 -> sacctmgr modify user where name=adam cluster=tux account=physics set
2349 maxjobs=2 maxwall=30:00
2350 -> sacctmgr add user brian account=chemistry
2351 -> sacctmgr list associations cluster=tux format=Account,Clus‐
2352 ter,User,Fairshare tree withd
2353 -> sacctmgr list transactions Action="Add Users" Start=11/03-10:30:00
2354 format=Where,Time
2355 -> sacctmgr dump cluster=tux file=tux_data_file
2356 -> sacctmgr load tux_data_file
2357
2358 A user's account cannot be changed directly. A new association needs
2359 to be created for the user with the new account. Then the association
2360 with the old account can be deleted.
2361
2362 When modifying an object, correct placement of the keywords 'set' and
2363 the optional 'where' is critical; below are examples that produce
2364 correct results. As a rule of thumb, anything placed before the
2365 keyword 'set' is used as a qualifier. If you want to put a qualifier
2366 after the keyword 'set', precede it with the keyword 'where'.
2367
2368 wrong-> sacctmgr modify user name=adam set fairshare=10 cluster=tux
2369
2370 This will produce an error as the above line reads modify user adam set
2371 fairshare=10 and cluster=tux.
2372
2373 right-> sacctmgr modify user name=adam cluster=tux set fairshare=10
2374 right-> sacctmgr modify user name=adam set fairshare=10 where clus‐
2375 ter=tux
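The set/where rule can be sketched as a small argument splitter (an illustrative model, not sacctmgr's actual parser): everything before 'set' qualifies which records to modify, everything after it is an update, and an explicit 'where' after 'set' resumes qualifying:

```python
def split_modify(args):
    """Split 'sacctmgr modify <entity> ...' arguments into the where
    (qualifier) and set (update) halves."""
    words = args.split()
    i = words.index("set")
    where = [w for w in words[:i] if w != "where"]
    updates = words[i + 1:]
    # 'where' may also appear after 'set' to resume qualifying:
    if "where" in updates:
        j = updates.index("where")
        where += updates[j + 1:]
        updates = updates[:j]
    return where, updates
```

Under this model, the "wrong" example above would place cluster=tux among the updates, which is exactly why sacctmgr rejects it.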
2376
2377 When changing the qos of an entity, only use the '=' operator when you
2378 want to explicitly set the qos. In most cases you will want to use the
2379 '+=' or '-=' operator to add to or remove from the existing qos
2380 already in place.
2381
2382 If a user already has a qos of normal,standby (inherited from a parent
2383 or explicitly set), you should use qos+=expedite to add expedite to
2384 that list.
2385
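The three operators can be modeled on a comma-separated QOS list as follows (an illustrative sketch; the sorted output mirrors how sacctmgr displays QOS lists, which are kept in lower case):

```python
def apply_qos(current, op, value):
    """Model the qos operators on a comma-separated QOS list:
    '='  replaces the list, '+=' adds names, '-=' removes them."""
    have = [] if op == "=" else (current.split(",") if current else [])
    change = value.split(",")
    if op == "-=":
        result = [q for q in have if q not in change]
    else:  # '=' or '+='
        result = have + [q for q in change if q not in have]
    return ",".join(sorted(result))
```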
2386 If you want to add the qos expedite to only a certain account and/or
2387 cluster, you can do so by specifying them on the sacctmgr command
2388 line.
2389
2390 -> sacctmgr modify user name=adam set qos+=expedite
2391
2392 -> sacctmgr modify user name=adam acct=this cluster=tux set qos+=expe‐
2393 dite
2394
2395 Let's give an example of how to add QOS to accounts. List all avail‐
2396 able QOSs in the cluster.
2397
2398 ->sacctmgr show qos format=name
2399 Name
2400 ---------
2401 normal
2402 expedite
2403
2404 List all the associations in the cluster.
2405
2406 ->sacctmgr show assoc format=cluster,account,qos
2407 Cluster Account QOS
2408 -------- ---------- -----
2409 zebra root normal
2410 zebra root normal
2411 zebra g normal
2412 zebra g1 normal
2413
2414 Add the QOS expedite to account G1 and display the result. Using the
2415 operator += the QOS will be added together with the existing QOS to
2416 this account.
2417
2418 ->sacctmgr modify account name=g1 set qos+=expedite
2419
2420 ->sacctmgr show assoc format=cluster,account,qos
2421 Cluster Account QOS
2422 -------- -------- -------
2423 zebra root normal
2424 zebra root normal
2425 zebra g normal
2426 zebra g1 expedite,normal
2427
2428 Now set the QOS expedite as the only QOS for the account G and display
2429 the result. Using the operator =, expedite becomes the only usable
2430 QOS for account G.
2431
2432 ->sacctmgr modify account name=G set qos=expedite
2433
2434 ->sacctmgr show assoc format=cluster,account,qos
2435 Cluster Account QOS
2436 --------- -------- -----
2437 zebra root normal
2438 zebra root normal
2439 zebra g expedite
2440 zebra g1 expedite,normal
2441
2442 If a new account is added under the account G it will inherit the QOS
2443 expedite and it will not have access to QOS normal.
2444
2445 ->sacctmgr add account banana parent=G
2446
2447 ->sacctmgr show assoc format=cluster,account,qos
2448 Cluster Account QOS
2449 --------- -------- -----
2450 zebra root normal
2451 zebra root normal
2452 zebra g expedite
2453 zebra banana expedite
2454 zebra g1 expedite,normal
2455
2456 An example of listing trackable resources:
2457
2458 ->sacctmgr show tres
2459 Type Name ID
2460 ---------- ----------------- --------
2461 cpu 1
2462 mem 2
2463 energy 3
2464 node 4
2465 billing 5
2466 gres gpu:tesla 1001
2467 license vcs 1002
2468 bb cray 1003
2469
2470
2471
2473 Copyright (C) 2008-2010 Lawrence Livermore National Security. Produced
2474 at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
2475 Copyright (C) 2010-2016 SchedMD LLC.
2476
2477 This file is part of Slurm, a resource management program. For
2478 details, see <https://slurm.schedmd.com/>.
2479
2480 Slurm is free software; you can redistribute it and/or modify it under
2481 the terms of the GNU General Public License as published by the Free
2482 Software Foundation; either version 2 of the License, or (at your
2483 option) any later version.
2484
2485 Slurm is distributed in the hope that it will be useful, but WITHOUT
2486 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
2487 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
2488 for more details.
2489
2490
2492 slurm.conf(5), slurmdbd(8)
2493
2494
2495
2496November 2020 Slurm Commands sacctmgr(1)