squeue(1)                       Slurm Commands                      squeue(1)
2
3
4
NAME
squeue - view information about jobs located in the Slurm scheduling
queue.
8
9
SYNOPSIS
squeue [OPTIONS...]
12
13
DESCRIPTION
squeue is used to view job and job step information for jobs managed by
Slurm.
17
18
OPTIONS
-A, --account=<account_list>
Specify the accounts of the jobs to view. Accepts a comma sepa‐
rated list of account names. This has no effect when listing job
steps.
24
25
26 -a, --all
27 Display information about jobs and job steps in all partitions.
28 This causes information to be displayed about partitions that
29 are configured as hidden, partitions that are unavailable to a
30 user's group, and federated jobs that are in a "revoked" state.
31
32
33 -r, --array
34 Display one job array element per line. Without this option,
35 the display will be optimized for use with job arrays (pending
36 job array elements will be combined on one line of output with
37 the array index values printed using a regular expression).
38
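As a sketch, with a hypothetical pending three-element array job 1234, the combined and expanded displays look like this (illustrative output):

$ squeue -j 1234
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1234_[0-2] debug array alice PD 0:00 1 (Resources)

$ squeue -r -j 1234
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1234_0 debug array alice PD 0:00 1 (Resources)
1234_1 debug array alice PD 0:00 1 (Resources)
1234_2 debug array alice PD 0:00 1 (Resources)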
39
40 --array-unique
41 Display one unique pending job array element per line. Without
42 this option, the pending job array elements will be grouped into
43 the master array job to optimize the display. This can also be
44 set with the environment variable SQUEUE_ARRAY_UNIQUE.
45
46
-M, --clusters=<cluster_name>
Clusters to issue commands to. Multiple cluster names may be
comma separated. A value of 'all' will query all clusters. This
option implicitly sets the --local option.
51
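For instance, to query every cluster known to this one (the cluster name and per-cluster header shown here are illustrative):

$ squeue -M all
CLUSTER: alpha
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
65646 debug job1 alice R 5:32 2 an[1-2]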
52
53 --federation
54 Show jobs from the federation if a member of one.
55
56
57 -o, --format=<output_format>
58 Specify the information to be displayed, its size and position
59 (right or left justified). Also see the -O, --Format=<out‐
60 put_format> option described below (which supports less flexi‐
61 bility in formatting, but supports access to all fields). If
62 the command is executed in a federated cluster environment and
63 information about more than one cluster is to be displayed and
64 the -h, --noheader option is used, then the cluster name will be
65 displayed before the default output formats shown below.
66
67 The default formats with various options are:
68
69 default "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"
70
71 -l, --long "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"
72
73 -s, --steps "%.15i %.8j %.9P %.8u %.9M %N"
74
75
76 The format of each field is "%[[.]size]type[suffix]"
77
78 size Minimum field size. If no size is specified, whatever
79 is needed to print the information will be used.
80
81 . Indicates the output should be right justified and
82 size must be specified. By default output is left
83 justified.
84
85 suffix Arbitrary string to append to the end of the field.
86
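For example, a slightly widened variant of the default format right justifies the job ID in ten characters while letting the user name take whatever width it needs (job ID and output are illustrative):

$ squeue -o "%.10i %.9P %.8j %u %.2t %.10M"
     JOBID PARTITION     NAME USER ST       TIME
     65646     debug    myjob alice R      5:32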
87
88 Note that many of these type specifications are valid only for
89 jobs while others are valid only for job steps. Valid type
90 specifications include:
91
92
93 %all Print all fields available for this data type with a ver‐
94 tical bar separating each field.
95
96 %a Account associated with the job. (Valid for jobs only)
97
98 %A Number of tasks created by a job step. This reports the
99 value of the srun --ntasks option. (Valid for job steps
100 only)
101
102 %A Job id. This will have a unique value for each element of
103 job arrays. (Valid for jobs only)
104
105 %B Executing (batch) host. For an allocated session, this is
106 the host on which the session is executing (i.e. the node
107 from which the srun or the salloc command was executed).
108 For a batch job, this is the node executing the batch
109 script. In the case of a typical Linux cluster, this would
110 be the compute node zero of the allocation. In the case of
111 a Cray ALPS system, this would be the front-end host whose
112 slurmd daemon executes the job script.
113
114 %c Minimum number of CPUs (processors) per node requested by
115 the job. This reports the value of the srun --mincpus op‐
116 tion with a default value of zero. (Valid for jobs only)
117
118 %C Number of CPUs (processors) requested by the job or allo‐
119 cated to it if already running. As a job is completing
120 this number will reflect the current number of CPUs allo‐
121 cated. (Valid for jobs only)
122
123 %d Minimum size of temporary disk space (in MB) requested by
124 the job. (Valid for jobs only)
125
126 %D Number of nodes allocated to the job or the minimum number
127 of nodes required by a pending job. The actual number of
128 nodes allocated to a pending job may exceed this number if
129 the job specified a node range count (e.g. minimum and
130 maximum node counts) or the job specifies a processor
131 count instead of a node count. As a job is completing this
132 number will reflect the current number of nodes allocated.
133 (Valid for jobs only)
134
135 %e Time at which the job ended or is expected to end (based
136 upon its time limit). (Valid for jobs only)
137
138 %E Job dependencies remaining. This job will not begin execu‐
139 tion until these dependent jobs complete. In the case of a
140 job that can not run due to job dependencies never being
141 satisfied, the full original job dependency specification
142 will be reported. A value of NULL implies this job has no
143 dependencies. (Valid for jobs only)
144
145 %f Features required by the job. (Valid for jobs only)
146
147 %F Job array's job ID. This is the base job ID. For non-ar‐
148 ray jobs, this is the job ID. (Valid for jobs only)
149
150 %g Group name of the job. (Valid for jobs only)
151
152 %G Group ID of the job. (Valid for jobs only)
153
%h Whether the compute resources allocated to the job can
be oversubscribed by other jobs. The resources to be
oversubscribed can be nodes, sockets, cores, or hyper‐
threads, depending upon configuration. The value will be
"YES" if the job was submitted with the oversubscribe
option or the partition is configured with OverSub‐
scribe=Force, "NO" if the job requires exclusive node
access, "USER" if the allocated compute nodes are dedi‐
cated to a single user, "MCS" if the allocated compute
nodes are dedicated to a single security class (see the
MCSPlugin and MCSParameters configuration parameters for
more information), and "OK" otherwise (typically allo‐
cated dedicated CPUs). (Valid for jobs only)
167
168 %H Number of sockets per node requested by the job. This re‐
169 ports the value of the srun --sockets-per-node option.
170 When --sockets-per-node has not been set, "*" is dis‐
171 played. (Valid for jobs only)
172
173 %i Job or job step id. In the case of job arrays, the job ID
174 format will be of the form "<base_job_id>_<index>". By
175 default, the job array index field size will be limited to
176 64 bytes. Use the environment variable SLURM_BITSTR_LEN
177 to specify larger field sizes. (Valid for jobs and job
178 steps) In the case of heterogeneous job allocations, the
179 job ID format will be of the form "#+#" where the first
180 number is the "heterogeneous job leader" and the second
181 number the zero origin offset for each component of the
182 job.
183
184 %I Number of cores per socket requested by the job. This re‐
185 ports the value of the srun --cores-per-socket option.
186 When --cores-per-socket has not been set, "*" is dis‐
187 played. (Valid for jobs only)
188
189 %j Job or job step name. (Valid for jobs and job steps)
190
191 %J Number of threads per core requested by the job. This re‐
192 ports the value of the srun --threads-per-core option.
193 When --threads-per-core has not been set, "*" is dis‐
194 played. (Valid for jobs only)
195
196 %k Comment associated with the job. (Valid for jobs only)
197
198 %K Job array index. By default, this field size will be lim‐
199 ited to 64 bytes. Use the environment variable SLURM_BIT‐
200 STR_LEN to specify larger field sizes. (Valid for jobs
201 only)
202
203 %l Time limit of the job or job step in days-hours:min‐
204 utes:seconds. The value may be "NOT_SET" if not yet es‐
205 tablished or "UNLIMITED" for no limit. (Valid for jobs
206 and job steps)
207
208 %L Time left for the job to execute in days-hours:min‐
209 utes:seconds. This value is calculated by subtracting the
210 job's time used from its time limit. The value may be
211 "NOT_SET" if not yet established or "UNLIMITED" for no
212 limit. (Valid for jobs only)
213
214 %m Minimum size of memory (in MB) requested by the job.
215 (Valid for jobs only)
216
217 %M Time used by the job or job step in days-hours:min‐
218 utes:seconds. The days and hours are printed only as
219 needed. For job steps this field shows the elapsed time
220 since execution began and thus will be inaccurate for job
221 steps which have been suspended. Clock skew between nodes
222 in the cluster will cause the time to be inaccurate. If
223 the time is obviously wrong (e.g. negative), it displays
224 as "INVALID". (Valid for jobs and job steps)
225
226 %n List of node names explicitly requested by the job.
227 (Valid for jobs only)
228
229 %N List of nodes allocated to the job or job step. In the
230 case of a COMPLETING job, the list of nodes will comprise
231 only those nodes that have not yet been returned to ser‐
232 vice. (Valid for jobs and job steps)
233
%o The command to be executed. (Valid for jobs only)
235
%O Whether contiguous nodes are requested by the job.
(Valid for jobs only)
238
239 %p Priority of the job (converted to a floating point number
240 between 0.0 and 1.0). Also see %Q. (Valid for jobs only)
241
242 %P Partition of the job or job step. (Valid for jobs and job
243 steps)
244
245 %q Quality of service associated with the job. (Valid for
246 jobs only)
247
248 %Q Priority of the job (generally a very large unsigned inte‐
249 ger). Also see %p. (Valid for jobs only)
250
251 %r The reason a job is in its current state. See the JOB
252 REASON CODES section below for more information. (Valid
253 for jobs only)
254
%R For pending jobs: the reason a job is waiting for execu‐
tion is printed within parentheses. For terminated jobs
with failure: an explanation as to why the job failed is
printed within parentheses. For all other job states: the
list of allocated nodes. See the JOB REASON CODES section
below for more information. (Valid for jobs only)
261
262 %s Node selection plugin specific data for a job. Possible
263 data includes: Geometry requirement of resource allocation
264 (X,Y,Z dimensions), Connection type (TORUS, MESH, or NAV
265 == torus else mesh), Permit rotation of geometry (yes or
266 no), Node use (VIRTUAL or COPROCESSOR), etc. (Valid for
267 jobs only)
268
269 %S Actual or expected start time of the job or job step.
270 (Valid for jobs and job steps)
271
272 %t Job state in compact form. See the JOB STATE CODES sec‐
273 tion below for a list of possible states. (Valid for jobs
274 only)
275
276 %T Job state in extended form. See the JOB STATE CODES sec‐
277 tion below for a list of possible states. (Valid for jobs
278 only)
279
280 %u User name for a job or job step. (Valid for jobs and job
281 steps)
282
283 %U User ID for a job or job step. (Valid for jobs and job
284 steps)
285
286 %v Reservation for the job. (Valid for jobs only)
287
288 %V The job's submission time.
289
290 %w Workload Characterization Key (wckey). (Valid for jobs
291 only)
292
293 %W Licenses reserved for the job. (Valid for jobs only)
294
295 %x List of node names explicitly excluded by the job. (Valid
296 for jobs only)
297
298 %X Count of cores reserved on each node for system use (core
299 specialization). (Valid for jobs only)
300
301 %y Nice value (adjustment to a job's scheduling priority).
302 (Valid for jobs only)
303
304 %Y For pending jobs, a list of the nodes expected to be used
305 when the job is started.
306
307 %z Number of requested sockets, cores, and threads (S:C:T)
308 per node for the job. When (S:C:T) has not been set, "*"
309 is displayed. (Valid for jobs only)
310
311 %Z The job's working directory.
312
313
314
-O, --Format=<output_format>
Specify the information to be displayed. Also see the -o,
--format=<output_format> option described above (which supports
greater flexibility in formatting, but does not support access
to all fields because we ran out of letters). Requests a comma
320 separated list of job information to be displayed.
321
322
The format of each field is "type[:[.][size][suffix]]"
324
325 size Minimum field size. If no size is specified, 20 char‐
326 acters will be allocated to print the information.
327
328 . Indicates the output should be right justified and
329 size must be specified. By default output is left
330 justified.
331
suffix Arbitrary string to append to the end of the field.
333
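A minimal sketch of this colon syntax, right justifying the job ID in ten characters and giving the remaining fields explicit widths (widths chosen for illustration; output omitted):

$ squeue -O "JobID:.10,Partition:11,UserName:10,StateCompact:4"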
334
335 Note that many of these type specifications are valid only for
336 jobs while others are valid only for job steps. Valid type
337 specifications include:
338
339
340 Account
341 Print the account associated with the job. (Valid for
342 jobs only)
343
344 AccrueTime
345 Print the accrue time associated with the job. (Valid
346 for jobs only)
347
348 admin_comment
349 Administrator comment associated with the job. (Valid
350 for jobs only)
351
352 AllocNodes
353 Print the nodes allocated to the job. (Valid for jobs
354 only)
355
356 AllocSID
357 Print the session ID used to submit the job. (Valid for
358 jobs only)
359
360 ArrayJobID
361 Prints the job ID of the job array. (Valid for jobs and
362 job steps)
363
364 ArrayTaskID
365 Prints the task ID of the job array. (Valid for jobs and
366 job steps)
367
368 AssocID
369 Prints the ID of the job association. (Valid for jobs
370 only)
371
372 BatchFlag
373 Prints whether the batch flag has been set. (Valid for
374 jobs only)
375
376 BatchHost
377 Executing (batch) host. For an allocated session, this is
378 the host on which the session is executing (i.e. the node
379 from which the srun or the salloc command was executed).
380 For a batch job, this is the node executing the batch
381 script. In the case of a typical Linux cluster, this
382 would be the compute node zero of the allocation. In the
383 case of a Cray ALPS system, this would be the front-end
384 host whose slurmd daemon executes the job script. (Valid
385 for jobs only)
386
387 BoardsPerNode
388 Prints the number of boards per node allocated to the
389 job. (Valid for jobs only)
390
391 BurstBuffer
392 Burst Buffer specification (Valid for jobs only)
393
394 BurstBufferState
395 Burst Buffer state (Valid for jobs only)
396
397 Cluster
398 Name of the cluster that is running the job or job step.
399
400 ClusterFeature
401 Cluster features required by the job. (Valid for jobs
402 only)
403
404 Command
405 The command to be executed. (Valid for jobs only)
406
407 Comment
408 Comment associated with the job. (Valid for jobs only)
409
Contiguous
Whether contiguous nodes are requested by the job. (Valid
for jobs only)
413
414 Container
415 OCI container bundle path.
416
417 Cores Number of cores per socket requested by the job. This
418 reports the value of the srun --cores-per-socket option.
419 When --cores-per-socket has not been set, "*" is dis‐
420 played. (Valid for jobs only)
421
422 CoreSpec
423 Count of cores reserved on each node for system use (core
424 specialization). (Valid for jobs only)
425
426 CPUFreq
427 Prints the frequency of the allocated CPUs. (Valid for
428 job steps only)
429
430 cpus-per-task
Prints the number of CPUs per task allocated to the job.
432 (Valid for jobs only)
433
cpus-per-tres
Print the CPUs required per trackable resource (TRES)
allocated to the job or job step.
437
438 Deadline
Prints the deadline assigned to the job. (Valid for jobs
440 only)
441
442 DelayBoot
443 Delay boot time. (Valid for jobs only)
444
445 Dependency
446 Job dependencies remaining. This job will not begin exe‐
447 cution until these dependent jobs complete. In the case
448 of a job that can not run due to job dependencies never
449 being satisfied, the full original job dependency speci‐
450 fication will be reported. A value of NULL implies this
451 job has no dependencies. (Valid for jobs only)
452
453 DerivedEC
454 Derived exit code for the job, which is the highest exit
455 code of any job step. (Valid for jobs only)
456
457 EligibleTime
458 Time the job is eligible for running. (Valid for jobs
459 only)
460
461 EndTime
462 The time of job termination, actual or expected. (Valid
463 for jobs only)
464
465 exit_code
466 The exit code for the job. (Valid for jobs only)
467
468 Feature
469 Features required by the job. (Valid for jobs only)
470
471 GroupID
472 Group ID of the job. (Valid for jobs only)
473
474 GroupName
475 Group name of the job. (Valid for jobs only)
476
477 HetJobID
478 Job ID of the heterogeneous job leader.
479
480 HetJobIDSet
Expression identifying all component job IDs within a
482 heterogeneous job.
483
484 HetJobOffset
485 Zero origin offset within a collection of heterogeneous
486 job components.
487
488 JobArrayID
489 Job array's job ID. This is the base job ID. For non-ar‐
490 ray jobs, this is the job ID. (Valid for jobs only)
491
492 JobID Job ID. This will have a unique value for each element
493 of job arrays and each component of heterogeneous jobs.
494 (Valid for jobs only)
495
496 LastSchedEval
497 Prints the last time the job was evaluated for schedul‐
498 ing. (Valid for jobs only)
499
500 Licenses
501 Licenses reserved for the job. (Valid for jobs only)
502
503 MaxCPUs
504 Prints the max number of CPUs allocated to the job.
505 (Valid for jobs only)
506
507 MaxNodes
508 Prints the max number of nodes allocated to the job.
509 (Valid for jobs only)
510
511 MCSLabel
512 Prints the MCS_label of the job. (Valid for jobs only)
513
514 mem-per-tres
Print the memory (in MB) required per trackable resource
516 allocated to the job or job step.
517
518 MinCpus
519 Minimum number of CPUs (processors) per node requested by
520 the job. This reports the value of the srun --mincpus
521 option with a default value of zero. (Valid for jobs
522 only)
523
524 MinMemory
525 Minimum size of memory (in MB) requested by the job.
526 (Valid for jobs only)
527
528 MinTime
529 Minimum time limit of the job (Valid for jobs only)
530
531 MinTmpDisk
532 Minimum size of temporary disk space (in MB) requested by
533 the job. (Valid for jobs only)
534
535 Name Job or job step name. (Valid for jobs and job steps)
536
537 Network
538 The network that the job is running on. (Valid for jobs
539 and job steps)
540
541 Nice Nice value (adjustment to a job's scheduling priority).
542 (Valid for jobs only)
543
544 NodeList
545 List of nodes allocated to the job or job step. In the
546 case of a COMPLETING job, the list of nodes will comprise
547 only those nodes that have not yet been returned to ser‐
548 vice. (Valid for jobs only)
549
550 Nodes List of nodes allocated to the job or job step. In the
551 case of a COMPLETING job, the list of nodes will comprise
552 only those nodes that have not yet been returned to ser‐
vice. (Valid for job steps only)
554
555 NTPerBoard
556 The number of tasks per board allocated to the job.
557 (Valid for jobs only)
558
559 NTPerCore
560 The number of tasks per core allocated to the job.
561 (Valid for jobs only)
562
563 NTPerNode
564 The number of tasks per node allocated to the job.
565 (Valid for jobs only)
566
567 NTPerSocket
568 The number of tasks per socket allocated to the job.
569 (Valid for jobs only)
570
571 NumCPUs
572 Number of CPUs (processors) requested by the job or allo‐
573 cated to it if already running. As a job is completing,
574 this number will reflect the current number of CPUs allo‐
575 cated. (Valid for jobs and job steps)
576
577 NumNodes
578 Number of nodes allocated to the job or the minimum num‐
579 ber of nodes required by a pending job. The actual number
580 of nodes allocated to a pending job may exceed this num‐
581 ber if the job specified a node range count (e.g. mini‐
582 mum and maximum node counts) or the job specifies a pro‐
583 cessor count instead of a node count. As a job is com‐
584 pleting this number will reflect the current number of
585 nodes allocated. (Valid for jobs only)
586
587 NumTasks
588 Number of tasks requested by a job or job step. This re‐
589 ports the value of the --ntasks option. (Valid for jobs
590 and job steps)
591
592 Origin Cluster name where federated job originated from. (Valid
593 for federated jobs only)
594
595 OriginRaw
596 Cluster ID where federated job originated from. (Valid
597 for federated jobs only)
598
OverSubscribe
Whether the compute resources allocated to the job can be
oversubscribed by other jobs. The resources to be over‐
subscribed can be nodes, sockets, cores, or hyperthreads,
depending upon configuration. The value will be "YES" if
the job was submitted with the oversubscribe option or
the partition is configured with OverSubscribe=Force,
"NO" if the job requires exclusive node access, "USER" if
the allocated compute nodes are dedicated to a single
user, "MCS" if the allocated compute nodes are dedicated
to a single security class (see the MCSPlugin and MCSPa‐
rameters configuration parameters for more information),
and "OK" otherwise (typically allocated dedicated CPUs).
(Valid for jobs only)
613
614 Partition
615 Partition of the job or job step. (Valid for jobs and
616 job steps)
617
618 PreemptTime
619 The preempt time for the job. (Valid for jobs only)
620
621 PendingTime
622 The time (in seconds) between start time and submit time
623 of the job. If the job has not started yet, then the
624 time (in seconds) between now and the submit time of the
625 job. (Valid for jobs only)
626
627 Priority
628 Priority of the job (converted to a floating point number
629 between 0.0 and 1.0). Also see prioritylong. (Valid for
630 jobs only)
631
632 PriorityLong
633 Priority of the job (generally a very large unsigned in‐
634 teger). Also see priority. (Valid for jobs only)
635
636 Profile
637 Profile of the job. (Valid for jobs only)
638
639 QOS Quality of service associated with the job. (Valid for
640 jobs only)
641
642 Reason The reason a job is in its current state. See the JOB
643 REASON CODES section below for more information. (Valid
644 for jobs only)
645
646 ReasonList
647 For pending jobs: the reason a job is waiting for execu‐
648 tion is printed within parenthesis. For terminated jobs
649 with failure: an explanation as to why the job failed is
650 printed within parenthesis. For all other job states:
651 the list of allocate nodes. See the JOB REASON CODES
652 section below for more information. (Valid for jobs
653 only)
654
655 Reboot Indicates if the allocated nodes should be rebooted be‐
fore starting the job. (Valid for jobs only)
657
658 ReqNodes
659 List of node names explicitly requested by the job.
660 (Valid for jobs only)
661
662 ReqSwitch
The maximum number of switches requested by the job.
664 (Valid for jobs only)
665
666 Requeue
667 Prints whether the job will be requeued on failure.
668 (Valid for jobs only)
669
670 Reservation
671 Reservation for the job. (Valid for jobs only)
672
673 ResizeTime
The time of the job's latest size change. (Valid
675 for jobs only)
676
677 RestartCnt
678 The number of restarts for the job. (Valid for jobs
679 only)
680
681 ResvPort
682 Reserved ports of the job. (Valid for job steps only)
683
684 SchedNodes
685 For pending jobs, a list of the nodes expected to be used
686 when the job is started. (Valid for jobs only)
687
688 SCT Number of requested sockets, cores, and threads (S:C:T)
689 per node for the job. When (S:C:T) has not been set, "*"
690 is displayed. (Valid for jobs only)
691
692 SelectJobInfo
693 Node selection plugin specific data for a job. Possible
694 data includes: Geometry requirement of resource alloca‐
695 tion (X,Y,Z dimensions), Connection type (TORUS, MESH, or
696 NAV == torus else mesh), Permit rotation of geometry (yes
697 or no), Node use (VIRTUAL or COPROCESSOR), etc. (Valid
698 for jobs only)
699
700 SiblingsActive
Cluster names where federated sibling jobs exist.
702 (Valid for federated jobs only)
703
704 SiblingsActiveRaw
Cluster IDs where federated sibling jobs exist.
706 (Valid for federated jobs only)
707
708 SiblingsViable
Cluster names where federated sibling jobs are viable
710 to run. (Valid for federated jobs only)
711
712 SiblingsViableRaw
Cluster IDs where federated sibling jobs are viable to
714 run. (Valid for federated jobs only)
715
716 Sockets
717 Number of sockets per node requested by the job. This
718 reports the value of the srun --sockets-per-node option.
719 When --sockets-per-node has not been set, "*" is dis‐
720 played. (Valid for jobs only)
721
722 SPerBoard
723 Number of sockets per board allocated to the job. (Valid
724 for jobs only)
725
726 StartTime
727 Actual or expected start time of the job or job step.
728 (Valid for jobs and job steps)
729
730 State Job state in extended form. See the JOB STATE CODES sec‐
731 tion below for a list of possible states. (Valid for
732 jobs only)
733
734 StateCompact
735 Job state in compact form. See the JOB STATE CODES sec‐
736 tion below for a list of possible states. (Valid for
737 jobs only)
738
STDERR The path of the file to which the job's standard error is
directed. (Valid for jobs only)

STDIN The path of the job's standard input file. (Valid for
jobs only)

STDOUT The path of the file to which the job's standard output
is directed. (Valid for jobs only)
746
747 StepID Job or job step ID. In the case of job arrays, the job
748 ID format will be of the form "<base_job_id>_<index>".
(Valid for job steps only)
750
751 StepName
752 Job step name. (Valid for job steps only)
753
754 StepState
755 The state of the job step. (Valid for job steps only)
756
757 SubmitTime
The time at which the job was submitted. (Valid for jobs
759 only)
760
761 system_comment
762 System comment associated with the job. (Valid for jobs
763 only)
764
765 Threads
766 Number of threads per core requested by the job. This
767 reports the value of the srun --threads-per-core option.
768 When --threads-per-core has not been set, "*" is dis‐
769 played. (Valid for jobs only)
770
771 TimeLeft
772 Time left for the job to execute in days-hours:min‐
773 utes:seconds. This value is calculated by subtracting
774 the job's time used from its time limit. The value may
775 be "NOT_SET" if not yet established or "UNLIMITED" for no
776 limit. (Valid for jobs only)
777
778 TimeLimit
779 Timelimit for the job or job step. (Valid for jobs and
780 job steps)
781
782 TimeUsed
783 Time used by the job or job step in days-hours:min‐
784 utes:seconds. The days and hours are printed only as
785 needed. For job steps this field shows the elapsed time
786 since execution began and thus will be inaccurate for job
787 steps which have been suspended. Clock skew between
788 nodes in the cluster will cause the time to be inaccu‐
789 rate. If the time is obviously wrong (e.g. negative), it
790 displays as "INVALID". (Valid for jobs and job steps)
791
792 tres-alloc
793 Print the trackable resources allocated to the job if
794 running. If not running, then print the trackable re‐
795 sources requested by the job.
796
797 tres-bind
798 Print the trackable resources task binding requested by
799 the job or job step.
800
801 tres-freq
802 Print the trackable resources frequencies requested by
803 the job or job step.
804
805 tres-per-job
806 Print the trackable resources requested by the job.
807
808 tres-per-node
809 Print the trackable resources per node requested by the
810 job or job step.
811
812 tres-per-socket
813 Print the trackable resources per socket requested by the
814 job or job step.
815
816 tres-per-step
817 Print the trackable resources requested by the job step.
818
819 tres-per-task
820 Print the trackable resources per task requested by the
821 job or job step.
822
823 UserID User ID for a job or job step. (Valid for jobs and job
824 steps)
825
826 UserName
827 User name for a job or job step. (Valid for jobs and job
828 steps)
829
830 Wait4Switch
831 The amount of time to wait for the desired number of
832 switches. (Valid for jobs only)
833
834 WCKey Workload Characterization Key (wckey). (Valid for jobs
835 only)
836
837 WorkDir
838 The job's working directory. (Valid for jobs only)
839
840
--help Print a help message describing all squeue options.
842
843
844 --hide Do not display information about jobs and job steps in all par‐
845 titions. By default, information about partitions that are con‐
846 figured as hidden or are not available to the user's group will
847 not be displayed (i.e. this is the default behavior).
848
849
850 -i, --iterate=<seconds>
851 Repeatedly gather and report the requested information at the
852 interval specified (in seconds). By default, prints a time
853 stamp with the header.
854
855
856 -j, --jobs=<job_id_list>
857 Requests a comma separated list of job IDs to display. Defaults
858 to all jobs. The --jobs=<job_id_list> option may be used in
859 conjunction with the --steps option to print step information
860 about specific jobs. Note: If a list of job IDs is provided,
861 the jobs are displayed even if they are on hidden partitions.
862 Since this option's argument is optional, for proper parsing the
863 single letter option must be followed immediately with the value
864 and not include a space between them. For example "-j1008" and
865 not "-j 1008". The job ID format is "job_id[_array_id]". Per‐
866 formance of the command can be measurably improved for systems
867 with large numbers of jobs when a single job ID is specified.
868 By default, this field size will be limited to 64 bytes. Use
869 the environment variable SLURM_BITSTR_LEN to specify larger
870 field sizes.
871
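For example, to show one specific array element and a whole array (job IDs are hypothetical; note the value directly follows -j):

$ squeue -j12345_3,67890
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
12345_3 debug array alice PD 0:00 1 (Priority)
67890 debug job2 bob R 2:01 1 dev7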
872
--json Dump job information as JSON. All other formatting and filtering
arguments will be ignored.
875
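This is convenient for scripting, for example with the jq(1) tool (the field names here follow the general shape of Slurm's JSON output and may vary by Slurm version):

$ squeue --json | jq '.jobs[].job_id'
65646
65647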
876
877 -L, --licenses=<license_list>
878 Request jobs requesting or using one or more of the named li‐
879 censes. The license list consists of a comma separated list of
880 license names.
881
882
883 --local
884 Show only jobs local to this cluster. Ignore other clusters in
885 this federation (if any). Overrides --federation.
886
887
888 -l, --long
889 Report more of the available information for the selected jobs
890 or job steps, subject to any constraints specified.
891
892
893 --me Equivalent to --user=<my username>.
894
895
896 -n, --name=<name_list>
897 Request jobs or job steps having one of the specified names.
898 The list consists of a comma separated list of job names.
899
900
901 --noconvert
902 Don't convert units from their original type (e.g. 2048M won't
903 be converted to 2G).
904
905
906 -w, --nodelist=<hostlist>
907 Report only on jobs allocated to the specified node or list of
908 nodes. This may either be the NodeName or NodeHostname as de‐
909 fined in slurm.conf(5) in the event that they differ. A
910 node_name of localhost is mapped to the current host name.
911
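For instance, to see every job with an allocation on nodes dev9 or dev10 (node and job names are illustrative; a hostlist expression is accepted):

$ squeue -w dev[9-10]
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
12345 debug job1 dave R 0:21 4 dev[9-12]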
912
913 -h, --noheader
914 Do not print a header on the output.
915
916
917 -p, --partition=<part_list>
918 Specify the partitions of the jobs or steps to view. Accepts a
919 comma separated list of partition names.
920
921
922 -P, --priority
923 For pending jobs submitted to multiple partitions, list the job
924 once per partition. In addition, if jobs are sorted by priority,
925 consider both the partition and job priority. This option can be
926 used to produce a list of pending jobs in the same order consid‐
927 ered for scheduling by Slurm with appropriate additional options
928 (e.g. "--sort=-p,i --states=PD").
929
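For example, to approximate the scheduler's pending-job order as described above (output omitted):

$ squeue -P --sort=-p,i --states=PD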
930
931 -q, --qos=<qos_list>
Specify the QOS(s) of the jobs or steps to view. Accepts a comma
separated list of QOS names.
934
935
936 -R, --reservation=<reservation_name>
937 Specify the reservation of the jobs to view.
938
939
940 --sibling
941 Show all sibling jobs on a federated cluster. Implies --federa‐
942 tion.
943
944
945 -S, --sort=<sort_list>
946 Specification of the order in which records should be reported.
947 This uses the same field specification as the <output_format>.
948 The long format option "cluster" can also be used to sort jobs
949 or job steps by cluster name (e.g. federated jobs). Multiple
950 sorts may be performed by listing multiple sort fields separated
951 by commas. The field specifications may be preceded by "+" or
952 "-" for ascending (default) and descending order respectively.
953 For example, a sort value of "P,U" will sort the records by par‐
954 tition name then by user id. The default value of sort for jobs
955 is "P,t,-p" (increasing partition name then within a given par‐
956 tition by increasing job state and then decreasing priority).
957 The default value of sort for job steps is "P,i" (increasing
958 partition name then within a given partition by increasing step
959 id).
960
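For example, to sort by increasing partition name and then by decreasing job priority within each partition (output omitted):

$ squeue --sort=P,-p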
961
962 --start
963 Report the expected start time and resources to be allocated for
964 pending jobs in order of increasing start time. This is equiva‐
965 lent to the following options: --format="%.18i %.9P %.8j %.8u
966 %.2t %.19S %.6D %20Y %R", --sort=S and --states=PENDING. Any
967 of these options may be explicitly changed as desired by combin‐
968 ing the --start option with other option values (e.g. to use a
969 different output format). The expected start time of pending
jobs is only available if Slurm is configured to use the
971 backfill scheduling plugin.
972
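A sketch with hypothetical pending jobs; the START_TIME values assume the backfill scheduler has produced estimates:

$ squeue --start
JOBID PARTITION NAME USER ST START_TIME NODES SCHEDNODES NODELIST(REASON)
65646 debug job2 alice PD 2021-05-06T10:00:00 2 (null) (Resources)
65647 debug job3 bob PD 2021-05-06T10:30:00 1 (null) (Priority)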
973
974 -t, --states=<state_list>
975 Specify the states of jobs to view. Accepts a comma separated
976 list of state names or "all". If "all" is specified then jobs of
977 all states will be reported. If no state is specified then pend‐
978 ing, running, and completing jobs are reported. See the JOB
979 STATE CODES section below for a list of valid states. Both ex‐
980 tended and compact forms are valid. Note the <state_list> sup‐
981 plied is case insensitive ("pd" and "PD" are equivalent).
982
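For example, either of the following lists only pending and running jobs, the first using compact state codes and the second the extended forms (output omitted):

$ squeue -t PD,R
$ squeue --states=pending,running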
983
984 -s, --steps
985 Specify the job steps to view. This flag indicates that a comma
986 separated list of job steps to view follows without an equal
987 sign (see examples). The job step format is "job_id[_ar‐
988 ray_id].step_id". Defaults to all job steps. Since this option's
989 argument is optional, for proper parsing the single letter op‐
990 tion must be followed immediately with the value and not include
991 a space between them. For example "-s1008.0" and not "-s
992 1008.0".
993
994
995 --usage
996 Print a brief help message listing the squeue options.
997
998
999 -u, --user=<user_list>
1000 Request jobs or job steps from a comma separated list of users.
1001 The list can consist of user names or user id numbers. Perfor‐
1002 mance of the command can be measurably improved for systems with
1003 large numbers of jobs when a single user is specified.
1004
1005
1006 -v, --verbose
Report details of squeue's actions.
1008
1009
-V, --version
1011 Print version information and exit.
1012
1013
--yaml Dump job information as YAML. All other formatting and filtering
arguments will be ignored.
1016
1017
JOB REASON CODES
These codes identify the reason that a job is waiting for execution. A
job may be waiting for more than one reason, in which case only one of
those reasons is displayed.
1022
1023 AssociationJobLimit The job's association has reached its maximum job
1024 count.
1025
1026 AssociationResourceLimit
1027 The job's association has reached some resource
1028 limit.
1029
1030 AssociationTimeLimit The job's association has reached its time limit.
1031
1032 BadConstraints The job's constraints can not be satisfied.
1033
1034 BeginTime The job's earliest start time has not yet been
1035 reached.
1036
1037 Cleaning The job is being requeued and still cleaning up
1038 from its previous execution.
1039
1040 Dependency This job is waiting for a dependent job to com‐
1041 plete.
1042
1043 FrontEndDown No front end node is available to execute this
1044 job.
1045
1046 InactiveLimit The job reached the system InactiveLimit.
1047
1048 InvalidAccount The job's account is invalid.
1049
1050 InvalidQOS The job's QOS is invalid.
1051
1052 JobHeldAdmin The job is held by a system administrator.
1053
1054 JobHeldUser The job is held by the user.
1055
1056 JobLaunchFailure The job could not be launched. This may be due
1057 to a file system problem, invalid program name,
1058 etc.
1059
1060 Licenses The job is waiting for a license.
1061
1062 NodeDown A node required by the job is down.
1063
1064 NonZeroExitCode The job terminated with a non-zero exit code.
1065
1066 PartitionDown The partition required by this job is in a DOWN
1067 state.
1068
1069 PartitionInactive The partition required by this job is in an Inac‐
1070 tive state and not able to start jobs.
1071
1072 PartitionNodeLimit The number of nodes required by this job is out‐
1073 side of its partition's current limits. Can also
1074 indicate that required nodes are DOWN or DRAINED.
1075
1076 PartitionTimeLimit The job's time limit exceeds its partition's cur‐
1077 rent time limit.
1078
1079 Priority One or more higher priority jobs exist for this
1080 partition or advanced reservation.
1081
1082 Prolog Its PrologSlurmctld program is still running.
1083
1084 QOSJobLimit The job's QOS has reached its maximum job count.
1085
1086 QOSResourceLimit The job's QOS has reached some resource limit.
1087
1088 QOSTimeLimit The job's QOS has reached its time limit.
1089
1090 ReqNodeNotAvail Some node specifically required by the job is not
1091 currently available. The node may currently be
1092 in use, reserved for another job, in an advanced
1093 reservation, DOWN, DRAINED, or not responding.
1094 Nodes which are DOWN, DRAINED, or not responding
1095 will be identified as part of the job's "reason"
1096 field as "UnavailableNodes". Such nodes will typ‐
1097 ically require the intervention of a system ad‐
1098 ministrator to make available.
1099
Reservation The job is waiting for its advanced reservation to
1101 become available.
1102
1103 Resources The job is waiting for resources to become avail‐
1104 able.
1105
1106 SystemFailure Failure of the Slurm system, a file system, the
1107 network, etc.
1108
1109 TimeLimit The job exhausted its time limit.
1110
1111 QOSUsageThreshold Required QOS threshold has been breached.
1112
1113 WaitingForScheduling No reason has been set for this job yet. Waiting
1114 for the scheduler to determine the appropriate
1115 reason.
1116
1117
JOB STATE CODES
Jobs typically pass through several states in the course of their exe‐
cution. The typical states are PENDING, RUNNING, SUSPENDED, COMPLET‐
ING, and COMPLETED. An explanation of each state follows.
1122
1123 BF BOOT_FAIL Job terminated due to launch failure, typically due
1124 to a hardware failure (e.g. unable to boot the node
1125 or block and the job can not be requeued).
1126
1127 CA CANCELLED Job was explicitly cancelled by the user or system
1128 administrator. The job may or may not have been
1129 initiated.
1130
1131 CD COMPLETED Job has terminated all processes on all nodes with
1132 an exit code of zero.
1133
CF CONFIGURING Job has been allocated resources, but is waiting
1135 for them to become ready for use (e.g. booting).
1136
1137 CG COMPLETING Job is in the process of completing. Some processes
1138 on some nodes may still be active.
1139
1140 DL DEADLINE Job terminated on deadline.
1141
1142 F FAILED Job terminated with non-zero exit code or other
1143 failure condition.
1144
1145 NF NODE_FAIL Job terminated due to failure of one or more allo‐
1146 cated nodes.
1147
1148 OOM OUT_OF_MEMORY Job experienced out of memory error.
1149
1150 PD PENDING Job is awaiting resource allocation.
1151
1152 PR PREEMPTED Job terminated due to preemption.
1153
1154 R RUNNING Job currently has an allocation.
1155
1156 RD RESV_DEL_HOLD Job is being held after requested reservation was
1157 deleted.
1158
1159 RF REQUEUE_FED Job is being requeued by a federation.
1160
1161 RH REQUEUE_HOLD Held job is being requeued.
1162
1163 RQ REQUEUED Completing job is being requeued.
1164
1165 RS RESIZING Job is about to change size.
1166
1167 RV REVOKED Sibling was removed from cluster due to other clus‐
1168 ter starting the job.
1169
1170 SI SIGNALING Job is being signaled.
1171
1172 SE SPECIAL_EXIT The job was requeued in a special state. This state
1173 can be set by users, typically in EpilogSlurmctld,
1174 if the job has terminated with a particular exit
1175 value.
1176
1177 SO STAGE_OUT Job is staging out files.
1178
1179 ST STOPPED Job has an allocation, but execution has been
stopped with the SIGSTOP signal. CPUs have been re‐
1181 tained by this job.
1182
1183 S SUSPENDED Job has an allocation, but execution has been sus‐
1184 pended and CPUs have been released for other jobs.
1185
1186 TO TIMEOUT Job terminated upon reaching its time limit.
1187
1188
PERFORMANCE
Executing squeue sends a remote procedure call to slurmctld. If enough
1191 calls from squeue or other Slurm client commands that send remote pro‐
1192 cedure calls to the slurmctld daemon come in at once, it can result in
1193 a degradation of performance of the slurmctld daemon, possibly result‐
1194 ing in a denial of service.
1195
1196 Do not run squeue or other Slurm client commands that send remote pro‐
1197 cedure calls to slurmctld from loops in shell scripts or other pro‐
1198 grams. Ensure that programs limit calls to squeue to the minimum neces‐
1199 sary for the information you are trying to gather.
1200
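For periodic monitoring, a single squeue invocation using the --iterate option described above is preferable to a shell loop, since one process queries slurmctld at a controlled interval (a sketch; output omitted):

$ squeue --me --iterate=60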
1201
ENVIRONMENT VARIABLES
Some squeue options may be set via environment variables. These envi‐
ronment variables, along with their corresponding options, are listed
below. (Note: Command line options will always override these set‐
tings.)
1207
1208 SLURM_BITSTR_LEN Specifies the string length to be used for holding
1209 a job array's task ID expression. The default
1210 value is 64 bytes. A value of 0 will print the
1211 full expression with any length required. Larger
1212 values may adversely impact the application perfor‐
1213 mance.
1214
1215 SLURM_CLUSTERS Same as --clusters
1216
1217 SLURM_CONF The location of the Slurm configuration file.
1218
1219 SLURM_TIME_FORMAT Specify the format used to report time stamps. A
1220 value of standard, the default value, generates
1221 output in the form
1222 "year-month-dateThour:minute:second". A value of
relative returns only "hour:minute:second" for the
current day. For other dates in the current year
1225 it prints the "hour:minute" preceded by "Tomorr"
1226 (tomorrow), "Ystday" (yesterday), the name of the
1227 day for the coming week (e.g. "Mon", "Tue", etc.),
1228 otherwise the date (e.g. "25 Apr"). For other
1229 years it returns a date month and year without a
1230 time (e.g. "6 Jun 2012"). All of the time stamps
1231 use a 24 hour format.
1232
1233 A valid strftime() format can also be specified.
1234 For example, a value of "%a %T" will report the day
1235 of the week and a time stamp (e.g. "Mon 12:34:56").
1236
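For example, combining a strftime() format with a custom output format (job ID and output are illustrative):

$ SLURM_TIME_FORMAT="%a %T" squeue -o "%.8i %.10u %V"
   JOBID       USER SUBMIT_TIME
   65646      alice Mon 12:34:56
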
1237 SQUEUE_ACCOUNT -A <account_list>, --account=<account_list>
1238
1239 SQUEUE_ALL -a, --all
1240
1241 SQUEUE_ARRAY -r, --array
1242
1243 SQUEUE_NAMES --name=<name_list>
1244
1245 SQUEUE_FEDERATION --federation
1246
1247 SQUEUE_FORMAT -o <output_format>, --format=<output_format>
1248
1249 SQUEUE_FORMAT2 -O <output_format>, --Format=<output_format>
1250
SQUEUE_LICENSES -L <license_list>, --licenses=<license_list>
1252
1253 SQUEUE_LOCAL --local
1254
1255 SQUEUE_PARTITION -p <part_list>, --partition=<part_list>
1256
1257 SQUEUE_PRIORITY -P, --priority
1258
SQUEUE_QOS -q <qos_list>, --qos=<qos_list>
1260
1261 SQUEUE_SIBLING --sibling
1262
1263 SQUEUE_SORT -S <sort_list>, --sort=<sort_list>
1264
1265 SQUEUE_STATES -t <state_list>, --states=<state_list>
1266
1267 SQUEUE_USERS -u <user_list>, --users=<user_list>
1268
1269
EXAMPLES
Print the jobs scheduled in the debug partition and in the COMPLETED
state in the format with six right justified digits for the job id fol‐
lowed by the priority with an arbitrary field size:
1274
1275 $ squeue -p debug -t COMPLETED -o "%.6i %p"
1276 JOBID PRIORITY
1277 65543 99993
1278 65544 99992
1279 65545 99991
1280
1281
1282 Print the job steps in the debug partition sorted by user:
1283
1284 $ squeue -s -p debug -S u
1285 STEPID NAME PARTITION USER TIME NODELIST
1286 65552.1 test1 debug alice 0:23 dev[1-4]
1287 65562.2 big_run debug bob 0:18 dev22
1288 65550.1 param1 debug candice 1:43:21 dev[6-12]
1289
1290
1291 Print information only about jobs 12345, 12346 and 12348:
1292
1293 $ squeue --jobs 12345,12346,12348
1294 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1295 12345 debug job1 dave R 0:21 4 dev[9-12]
1296 12346 debug job2 dave PD 0:00 8 (Resources)
1297 12348 debug job3 ed PD 0:00 4 (Priority)
1298
1299
1300 Print information only about job step 65552.1:
1301
1302 $ squeue --steps 65552.1
1303 STEPID NAME PARTITION USER TIME NODELIST
1304 65552.1 test2 debug alice 12:49 dev[1-4]
1305
1306
COPYING
Copyright (C) 2002-2007 The Regents of the University of California.
1309 Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
1310 Copyright (C) 2008-2010 Lawrence Livermore National Security.
1311 Copyright (C) 2010-2021 SchedMD LLC.
1312
1313 This file is part of Slurm, a resource management program. For de‐
1314 tails, see <https://slurm.schedmd.com/>.
1315
1316 Slurm is free software; you can redistribute it and/or modify it under
1317 the terms of the GNU General Public License as published by the Free
1318 Software Foundation; either version 2 of the License, or (at your op‐
1319 tion) any later version.
1320
1321 Slurm is distributed in the hope that it will be useful, but WITHOUT
1322 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
1323 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
1324 for more details.
1325
SEE ALSO
scancel(1), scontrol(1), sinfo(1), srun(1), slurm_load_ctl_conf(3),
slurm_load_jobs(3), slurm_load_node(3), slurm_load_partitions(3)
1329
1330
1331
May 2021                        Slurm Commands                      squeue(1)