squeue(1)                        Slurm Commands                       squeue(1)



NAME
       squeue - view information about jobs located in the Slurm scheduling
       queue.


SYNOPSIS
       squeue [OPTIONS...]


DESCRIPTION
       squeue is used to view job and job step information for jobs managed by
       Slurm.


OPTIONS
       -A <account_list>, --account=<account_list>
              Specify the accounts of the jobs to view. Accepts a comma sepa‐
              rated list of account names. This has no effect when listing job
              steps.


       -a, --all
              Display information about jobs and job steps in all partitions.
              This causes information to be displayed about partitions that
              are configured as hidden and partitions that are unavailable to
              the user's group.


       -r, --array
              Display one job array element per line. Without this option,
              the display will be optimized for use with job arrays (pending
              job array elements will be combined on one line of output with
              the array index values printed using a regular expression).


       --array-unique
              Display one unique pending job array element per line. Without
              this option, the pending job array elements will be grouped into
              the master array job to optimize the display. This can also be
              set with the environment variable SQUEUE_ARRAY_UNIQUE.


       --federation
              Show jobs from the federation if a member of one.


       -h, --noheader
              Do not print a header on the output.


       --help Print a help message describing all squeue options.


       --hide Do not display information about jobs and job steps in all par‐
              titions. By default, information about partitions that are con‐
              figured as hidden or are not available to the user's group will
              not be displayed (i.e. this is the default behavior).


       -i <seconds>, --iterate=<seconds>
              Repeatedly gather and report the requested information at the
              interval specified (in seconds). By default, prints a time
              stamp with the header.

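              For example, the following reports the queue every 30 seconds
              until interrupted; the interval shown is only illustrative:

                     # squeue --iterate=30
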

       -j <job_id_list>, --jobs=<job_id_list>
              Requests a comma separated list of job IDs to display. Defaults
              to all jobs. The --jobs=<job_id_list> option may be used in
              conjunction with the --steps option to print step information
              about specific jobs. Note: If a list of job IDs is provided,
              the jobs are displayed even if they are on hidden partitions.
              Since this option's argument is optional, for proper parsing the
              single letter option must be followed immediately with the value
              and not include a space between them. For example "-j1008" and
              not "-j 1008". The job ID format is "job_id[_array_id]". Per‐
              formance of the command can be measurably improved for systems
              with large numbers of jobs when a single job ID is specified.
              By default, this field size will be limited to 64 bytes. Use
              the environment variable SLURM_BITSTR_LEN to specify larger
              field sizes.

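              For example, using the "job_id[_array_id]" form, the following
              displays only task 7 of job array 1234 (the IDs are
              illustrative):

                     # squeue -j1234_7

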
       --local
              Show only jobs local to this cluster. Ignore other clusters in
              this federation (if any). Overrides --federation.


       -l, --long
              Report more of the available information for the selected jobs
              or job steps, subject to any constraints specified.


       -L, --licenses=<license_list>
              Request jobs requesting or using one or more of the named
              licenses. The license list consists of a comma separated list
              of license names.


       -M, --clusters=<string>
              Clusters to issue commands to. Multiple cluster names may be
              comma separated. A value of 'all' will query all clusters.
              This option implicitly sets the --local option.

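              For example, to list jobs on every cluster known to this one:

                     # squeue -M all
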

       -n, --name=<name_list>
              Request jobs or job steps having one of the specified names.
              The list consists of a comma separated list of job names.


       --noconvert
              Don't convert units from their original type (e.g. 2048M won't
              be converted to 2G).


       -o <output_format>, --format=<output_format>
              Specify the information to be displayed, its size and position
              (right or left justified). Also see the -O <output_format>,
              --Format=<output_format> option described below (which supports
              less flexibility in formatting, but supports access to all
              fields). The default formats with various options are:


              default        "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

              -l, --long     "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"

              -s, --steps    "%.15i %.8j %.9P %.8u %.9M %N"


              The format of each field is "%[[.]size]type".

              size   is the minimum field size. If no size is specified,
                     whatever is needed to print the information will be
                     used.

              .      indicates the output should be right justified and size
                     must be specified. By default, output is left justi‐
                     fied.


              Note that many of these type specifications are valid only for
              jobs while others are valid only for job steps. Valid type
              specifications include (an example of a custom format follows
              this list):


              %all Print all fields available for this data type with a ver‐
                   tical bar separating each field.

              %a Account associated with the job. (Valid for jobs only)

              %A Number of tasks created by a job step. This reports the
                 value of the srun --ntasks option. (Valid for job steps
                 only)

              %A Job id. This will have a unique value for each element of
                 job arrays. (Valid for jobs only)

              %B Executing (batch) host. For an allocated session, this is
                 the host on which the session is executing (i.e. the node
                 from which the srun or the salloc command was executed).
                 For a batch job, this is the node executing the batch
                 script. In the case of a typical Linux cluster, this would
                 be the compute node zero of the allocation. In the case of
                 a Cray ALPS system, this would be the front-end host whose
                 slurmd daemon executes the job script.

              %c Minimum number of CPUs (processors) per node requested by
                 the job. This reports the value of the srun --mincpus
                 option with a default value of zero. (Valid for jobs
                 only)

              %C Number of CPUs (processors) requested by the job or allo‐
                 cated to it if already running. As a job is completing
                 this number will reflect the current number of CPUs allo‐
                 cated. (Valid for jobs only)

              %d Minimum size of temporary disk space (in MB) requested by
                 the job. (Valid for jobs only)

              %D Number of nodes allocated to the job or the minimum number
                 of nodes required by a pending job. The actual number of
                 nodes allocated to a pending job may exceed this number if
                 the job specified a node range count (e.g. minimum and
                 maximum node counts) or the job specifies a processor
                 count instead of a node count and the cluster contains
                 nodes with varying processor counts. As a job is complet‐
                 ing this number will reflect the current number of nodes
                 allocated. (Valid for jobs only)

              %e Time at which the job ended or is expected to end (based
                 upon its time limit). (Valid for jobs only)

              %E Job dependencies remaining. This job will not begin execu‐
                 tion until these dependent jobs complete. In the case of a
                 job that can not run due to job dependencies never being
                 satisfied, the full original job dependency specification
                 will be reported. A value of NULL implies this job has no
                 dependencies. (Valid for jobs only)

              %f Features required by the job. (Valid for jobs only)

              %F Job array's job ID. This is the base job ID. For
                 non-array jobs, this is the job ID. (Valid for jobs only)

              %g Group name of the job. (Valid for jobs only)

              %G Group ID of the job. (Valid for jobs only)

              %h Can the compute resources allocated to the job be over
                 subscribed by other jobs. The resources to be over sub‐
                 scribed can be nodes, sockets, cores, or hyperthreads
                 depending upon configuration. The value will be "YES" if
                 the job was submitted with the oversubscribe option or the
                 partition is configured with OverSubscribe=Force, "NO" if
                 the job requires exclusive node access, "USER" if the
                 allocated compute nodes are dedicated to a single user,
                 "MCS" if the allocated compute nodes are dedicated to a
                 single security class (See MCSPlugin and MCSParameters
                 configuration parameters for more information), "OK" oth‐
                 erwise (typically allocated dedicated CPUs). (Valid for
                 jobs only)

              %H Number of sockets per node requested by the job. This
                 reports the value of the srun --sockets-per-node option.
                 When --sockets-per-node has not been set, "*" is dis‐
                 played. (Valid for jobs only)

              %i Job or job step id. In the case of job arrays, the job ID
                 format will be of the form "<base_job_id>_<index>". By
                 default, the job array index field size will be limited to
                 64 bytes. Use the environment variable SLURM_BITSTR_LEN
                 to specify larger field sizes. (Valid for jobs and job
                 steps) In the case of heterogeneous job allocations, the
                 job ID format will be of the form "#+#" where the first
                 number is the "heterogeneous job leader" and the second
                 number the zero origin offset for each component of the
                 job.

              %I Number of cores per socket requested by the job. This
                 reports the value of the srun --cores-per-socket option.
                 When --cores-per-socket has not been set, "*" is dis‐
                 played. (Valid for jobs only)

              %j Job or job step name. (Valid for jobs and job steps)

              %J Number of threads per core requested by the job. This
                 reports the value of the srun --threads-per-core option.
                 When --threads-per-core has not been set, "*" is dis‐
                 played. (Valid for jobs only)

              %k Comment associated with the job. (Valid for jobs only)

              %K Job array index. By default, this field size will be lim‐
                 ited to 64 bytes. Use the environment variable SLURM_BIT‐
                 STR_LEN to specify larger field sizes. (Valid for jobs
                 only)

              %l Time limit of the job or job step in days-hours:min‐
                 utes:seconds. The value may be "NOT_SET" if not yet
                 established or "UNLIMITED" for no limit. (Valid for jobs
                 and job steps)

              %L Time left for the job to execute in days-hours:min‐
                 utes:seconds. This value is calculated by subtracting the
                 job's time used from its time limit. The value may be
                 "NOT_SET" if not yet established or "UNLIMITED" for no
                 limit. (Valid for jobs only)

              %m Minimum size of memory (in MB) requested by the job.
                 (Valid for jobs only)

              %M Time used by the job or job step in days-hours:min‐
                 utes:seconds. The days and hours are printed only as
                 needed. For job steps this field shows the elapsed time
                 since execution began and thus will be inaccurate for job
                 steps which have been suspended. Clock skew between nodes
                 in the cluster will cause the time to be inaccurate. If
                 the time is obviously wrong (e.g. negative), it displays
                 as "INVALID". (Valid for jobs and job steps)

              %n List of node names explicitly requested by the job.
                 (Valid for jobs only)

              %N List of nodes allocated to the job or job step. In the
                 case of a COMPLETING job, the list of nodes will comprise
                 only those nodes that have not yet been returned to ser‐
                 vice. (Valid for jobs and job steps)

              %o The command to be executed.

              %O Are contiguous nodes requested by the job. (Valid for
                 jobs only)

              %p Priority of the job (converted to a floating point number
                 between 0.0 and 1.0). Also see %Q. (Valid for jobs only)

              %P Partition of the job or job step. (Valid for jobs and job
                 steps)

              %q Quality of service associated with the job. (Valid for
                 jobs only)

              %Q Priority of the job (generally a very large unsigned inte‐
                 ger). Also see %p. (Valid for jobs only)

              %r The reason a job is in its current state. See the JOB
                 REASON CODES section below for more information. (Valid
                 for jobs only)

              %R For pending jobs: the reason a job is waiting for execu‐
                 tion is printed within parenthesis. For terminated jobs
                 with failure: an explanation as to why the job failed is
                 printed within parenthesis. For all other job states: the
                 list of allocated nodes. See the JOB REASON CODES section
                 below for more information. (Valid for jobs only)

              %s Node selection plugin specific data for a job. Possible
                 data includes: Geometry requirement of resource allocation
                 (X,Y,Z dimensions), Connection type (TORUS, MESH, or NAV
                 == torus else mesh), Permit rotation of geometry (yes or
                 no), Node use (VIRTUAL or COPROCESSOR), etc. (Valid for
                 jobs only)

              %S Actual or expected start time of the job or job step.
                 (Valid for jobs and job steps)

              %t Job state in compact form. See the JOB STATE CODES sec‐
                 tion below for a list of possible states. (Valid for jobs
                 only)

              %T Job state in extended form. See the JOB STATE CODES sec‐
                 tion below for a list of possible states. (Valid for jobs
                 only)

              %u User name for a job or job step. (Valid for jobs and job
                 steps)

              %U User ID for a job or job step. (Valid for jobs and job
                 steps)

              %v Reservation for the job. (Valid for jobs only)

              %V The job's submission time.

              %w Workload Characterization Key (wckey). (Valid for jobs
                 only)

              %W Licenses reserved for the job. (Valid for jobs only)

              %x List of node names explicitly excluded by the job. (Valid
                 for jobs only)

              %X Count of cores reserved on each node for system use (core
                 specialization). (Valid for jobs only)

              %y Nice value (adjustment to a job's scheduling priority).
                 (Valid for jobs only)

              %Y For pending jobs, a list of the nodes expected to be used
                 when the job is started.

              %z Number of requested sockets, cores, and threads (S:C:T)
                 per node for the job. When (S:C:T) has not been set, "*"
                 is displayed. (Valid for jobs only)

              %Z The job's working directory.

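              For example, the following prints the job ID right justified in
              an 8 character field, the job name left justified in a 20
              character field, the compact job state, and the time used (the
              format shown is only illustrative):

                     # squeue -o "%.8i %20j %.2t %.10M"

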
       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed. Also see the -o
              <output_format>, --format=<output_format> option described
              above (which supports greater flexibility in formatting, but
              does not support access to all fields because we ran out of
              letters). Requests a comma separated list of job information
              to be displayed.


              The format of each field is "type[:[.]size]".

              size   is the minimum field size. If no size is specified, 20
                     characters will be allocated to print the information.

              .      indicates the output should be right justified and size
                     must be specified. By default, output is left justi‐
                     fied.


              Note that many of these type specifications are valid only for
              jobs while others are valid only for job steps. Valid type
              specifications include (an example of a custom format follows
              this list):


              account
                     Print the account associated with the job. (Valid for
                     jobs only)

              accruetime
                     Print the accrue time associated with the job. (Valid
                     for jobs only)

              admin_comment
                     Administrator comment associated with the job. (Valid
                     for jobs only)

              allocnodes
                     Print the nodes allocated to the job. (Valid for jobs
                     only)

              allocsid
                     Print the session ID used to submit the job. (Valid for
                     jobs only)

              arrayjobid
                     Prints the job ID of the job array. (Valid for jobs and
                     job steps)

              arraytaskid
                     Prints the task ID of the job array. (Valid for jobs and
                     job steps)

              associd
                     Prints the id of the job association. (Valid for jobs
                     only)

              batchflag
                     Prints whether the batch flag has been set. (Valid for
                     jobs only)

              batchhost
                     Executing (batch) host. For an allocated session, this is
                     the host on which the session is executing (i.e. the node
                     from which the srun or the salloc command was executed).
                     For a batch job, this is the node executing the batch
                     script. In the case of a typical Linux cluster, this would
                     be the compute node zero of the allocation. In the case of
                     a Cray ALPS system, this would be the front-end host whose
                     slurmd daemon executes the job script. (Valid for jobs
                     only)

              boardspernode
                     Prints the number of boards per node allocated to the job.
                     (Valid for jobs only)

              burstbuffer
                     Burst Buffer specification (Valid for jobs only)

              burstbufferstate
                     Burst Buffer state (Valid for jobs only)

              chptdir
                     Prints the directory where the job checkpoint will be
                     written to. (Valid for job steps only)

              chptinter
                     Prints the time interval of the checkpoint. (Valid for
                     job steps only)

              cluster
                     Name of the cluster that is running the job or job step.

              clusterfeature
                     Cluster features required by the job. (Valid for jobs
                     only)

              command
                     The command to be executed. (Valid for jobs only)

              comment
                     Comment associated with the job. (Valid for jobs only)

              contiguous
                     Are contiguous nodes requested by the job. (Valid for
                     jobs only)

              cores  Number of cores per socket requested by the job. This
                     reports the value of the srun --cores-per-socket option.
                     When --cores-per-socket has not been set, "*" is dis‐
                     played. (Valid for jobs only)

              corespec
                     Count of cores reserved on each node for system use (core
                     specialization). (Valid for jobs only)

              cpufreq
                     Prints the frequency of the allocated CPUs. (Valid for
                     job steps only)

              cpus-per-task
                     Prints the number of CPUs per task allocated to the job.
                     (Valid for jobs only)

              cpus-per-tres
                     Print the number of CPUs per trackable resource allocated
                     to the job or job step.

              deadline
                     Prints the deadline assigned to the job. (Valid for jobs
                     only)

              dependency
                     Job dependencies remaining. This job will not begin execu‐
                     tion until these dependent jobs complete. In the case of a
                     job that can not run due to job dependencies never being
                     satisfied, the full original job dependency specification
                     will be reported. A value of NULL implies this job has no
                     dependencies. (Valid for jobs only)

              delayboot
                     Delay boot time. (Valid for jobs only)

              derivedec
                     Derived exit code for the job, which is the highest exit
                     code of any job step. (Valid for jobs only)

              eligibletime
                     Time the job is eligible for running. (Valid for jobs
                     only)

              endtime
                     The time of job termination, actual or expected. (Valid
                     for jobs only)

              exit_code
                     The exit code for the job. (Valid for jobs only)

              feature
                     Features required by the job. (Valid for jobs only)

              groupid
                     Group ID of the job. (Valid for jobs only)

              groupname
                     Group name of the job. (Valid for jobs only)

              jobarrayid
                     Job array's job ID. This is the base job ID. For
                     non-array jobs, this is the job ID. (Valid for jobs only)

              jobid  Job id. This will have a unique value for each element of
                     job arrays and each component of heterogeneous jobs.
                     (Valid for jobs only)

              lastschedeval
                     Prints the last time the job was evaluated for scheduling.
                     (Valid for jobs only)

              licenses
                     Licenses reserved for the job. (Valid for jobs only)

              maxcpus
                     Prints the max number of CPUs allocated to the job.
                     (Valid for jobs only)

              maxnodes
                     Prints the max number of nodes allocated to the job.
                     (Valid for jobs only)

              mcslabel
                     Prints the MCS_label of the job. (Valid for jobs only)

              mem-per-tres
                     Print the memory (in MB) required per trackable resources
                     allocated to the job or job step.

              minmemory
                     Minimum size of memory (in MB) requested by the job.
                     (Valid for jobs only)

              mintime
                     Minimum time limit of the job (Valid for jobs only)

              mintmpdisk
                     Minimum size of temporary disk space (in MB) requested by
                     the job. (Valid for jobs only)

              mincpus
                     Minimum number of CPUs (processors) per node requested by
                     the job. This reports the value of the srun --mincpus
                     option with a default value of zero. (Valid for jobs
                     only)

              name   Job or job step name. (Valid for jobs and job steps)

              network
                     The network that the job is running on. (Valid for jobs
                     and job steps)

              nice   Nice value (adjustment to a job's scheduling priority).
                     (Valid for jobs only)

              nodes  List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will comprise
                     only those nodes that have not yet been returned to ser‐
                     vice. (Valid for job steps only)

              nodelist
                     List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will comprise
                     only those nodes that have not yet been returned to ser‐
                     vice. (Valid for jobs only)

              ntperboard
                     The number of tasks per board allocated to the job.
                     (Valid for jobs only)

              ntpercore
                     The number of tasks per core allocated to the job. (Valid
                     for jobs only)

              ntpernode
                     The number of tasks per node allocated to the job. (Valid
                     for jobs only)

              ntpersocket
                     The number of tasks per socket allocated to the job.
                     (Valid for jobs only)

              numcpus
                     Number of CPUs (processors) requested by the job or allo‐
                     cated to it if already running. As a job is completing,
                     this number will reflect the current number of CPUs allo‐
                     cated. (Valid for jobs and job steps)

              numnodes
                     Number of nodes allocated to the job or the minimum number
                     of nodes required by a pending job. The actual number of
                     nodes allocated to a pending job may exceed this number if
                     the job specified a node range count (e.g. minimum and
                     maximum node counts) or the job specifies a processor
                     count instead of a node count and the cluster contains
                     nodes with varying processor counts. As a job is complet‐
                     ing this number will reflect the current number of nodes
                     allocated. (Valid for jobs only)

              numtasks
                     Number of tasks requested by a job or job step. This
                     reports the value of the --ntasks option. (Valid for jobs
                     and job steps)

              origin
                     Cluster name where federated job originated from. (Valid
                     for federated jobs only)

              originraw
                     Cluster ID where federated job originated from. (Valid
                     for federated jobs only)

              oversubscribe
                     Can the compute resources allocated to the job be over
                     subscribed by other jobs. The resources to be over sub‐
                     scribed can be nodes, sockets, cores, or hyperthreads
                     depending upon configuration. The value will be "YES" if
                     the job was submitted with the oversubscribe option or the
                     partition is configured with OverSubscribe=Force, "NO" if
                     the job requires exclusive node access, "USER" if the
                     allocated compute nodes are dedicated to a single user,
                     "MCS" if the allocated compute nodes are dedicated to a
                     single security class (See MCSPlugin and MCSParameters
                     configuration parameters for more information), "OK" oth‐
                     erwise (typically allocated dedicated CPUs). (Valid for
                     jobs only)

              packjobid
                     Job ID of the heterogeneous job leader.

              packjoboffset
                     Zero origin offset within a collection of heterogeneous
                     jobs.

              packjobidset
                     Expression identifying all job IDs within a heterogeneous
                     job.

              partition
                     Partition of the job or job step. (Valid for jobs and job
                     steps)

              priority
                     Priority of the job (converted to a floating point number
                     between 0.0 and 1.0). Also see prioritylong. (Valid for
                     jobs only)

              prioritylong
                     Priority of the job (generally a very large unsigned inte‐
                     ger). Also see priority. (Valid for jobs only)

              profile
                     Profile of the job. (Valid for jobs only)

              preemptime
                     The preempt time for the job. (Valid for jobs only)

              qos    Quality of service associated with the job. (Valid for
                     jobs only)

              reason
                     The reason a job is in its current state. See the JOB
                     REASON CODES section below for more information. (Valid
                     for jobs only)

              reasonlist
                     For pending jobs: the reason a job is waiting for execu‐
                     tion is printed within parenthesis. For terminated jobs
                     with failure: an explanation as to why the job failed is
                     printed within parenthesis. For all other job states: the
                     list of allocated nodes. See the JOB REASON CODES section
                     below for more information. (Valid for jobs only)

              reboot
                     Indicates if the allocated nodes should be rebooted before
                     starting the job. (Valid for jobs only)

              reqnodes
                     List of node names explicitly requested by the job.
                     (Valid for jobs only)

              reqswitch
                     The maximum number of switches requested by the job.
                     (Valid for jobs only)

              requeue
                     Prints whether the job will be requeued on failure.
                     (Valid for jobs only)

              reservation
                     Reservation for the job. (Valid for jobs only)

              resizetime
                     The time of the job's latest size change. (Valid for
                     jobs only)

              restartcnt
                     The number of checkpoint restarts for the job. (Valid for
                     jobs only)

              resvport
                     Reserved ports of the job. (Valid for job steps only)

              schednodes
                     For pending jobs, a list of the nodes expected to be used
                     when the job is started. (Valid for jobs only)

              sct    Number of requested sockets, cores, and threads (S:C:T)
                     per node for the job. When (S:C:T) has not been set, "*"
                     is displayed. (Valid for jobs only)

              selectjobinfo
                     Node selection plugin specific data for a job. Possible
                     data includes: Geometry requirement of resource allocation
                     (X,Y,Z dimensions), Connection type (TORUS, MESH, or NAV
                     == torus else mesh), Permit rotation of geometry (yes or
                     no), Node use (VIRTUAL or COPROCESSOR), etc. (Valid for
                     jobs only)

              siblingsactive
                     Cluster names where federated sibling jobs exist. (Valid
                     for federated jobs only)

              siblingsactiveraw
                     Cluster IDs where federated sibling jobs exist. (Valid
                     for federated jobs only)

              siblingsviable
                     Cluster names where federated sibling jobs are viable to
                     run. (Valid for federated jobs only)

              siblingsviableraw
                     Cluster IDs where federated sibling jobs are viable to
                     run. (Valid for federated jobs only)

              sockets
                     Number of sockets per node requested by the job. This
                     reports the value of the srun --sockets-per-node option.
                     When --sockets-per-node has not been set, "*" is dis‐
                     played. (Valid for jobs only)

              sperboard
                     Number of sockets per board allocated to the job. (Valid
                     for jobs only)

              starttime
                     Actual or expected start time of the job or job step.
                     (Valid for jobs and job steps)

              state  Job state in extended form. See the JOB STATE CODES sec‐
                     tion below for a list of possible states. (Valid for jobs
                     only)

              statecompact
                     Job state in compact form. See the JOB STATE CODES sec‐
                     tion below for a list of possible states. (Valid for jobs
                     only)

              stderr
                     The path to which standard error is directed. (Valid for
                     jobs only)

              stdin  The path from which standard input is read. (Valid for
                     jobs only)

              stdout
                     The path to which standard output is directed. (Valid
                     for jobs only)

              stepid
                     Job or job step id. In the case of job arrays, the job ID
                     format will be of the form "<base_job_id>_<index>".
                     (Valid for job steps only)

              stepname
                     Job step name. (Valid for job steps only)

              stepstate
                     The state of the job step. (Valid for job steps only)

              submittime
                     The time at which the job was submitted. (Valid for jobs
                     only)

              system_comment
                     System comment associated with the job. (Valid for jobs
                     only)

              threads
                     Number of threads per core requested by the job. This
                     reports the value of the srun --threads-per-core option.
                     When --threads-per-core has not been set, "*" is dis‐
                     played. (Valid for jobs only)

              timeleft
                     Time left for the job to execute in days-hours:min‐
                     utes:seconds. This value is calculated by subtracting the
                     job's time used from its time limit. The value may be
                     "NOT_SET" if not yet established or "UNLIMITED" for no
                     limit. (Valid for jobs only)

              timelimit
                     Timelimit for the job or job step. (Valid for jobs and
                     job steps)

              timeused
                     Time used by the job or job step in days-hours:min‐
                     utes:seconds. The days and hours are printed only as
                     needed. For job steps this field shows the elapsed time
                     since execution began and thus will be inaccurate for job
                     steps which have been suspended. Clock skew between nodes
                     in the cluster will cause the time to be inaccurate. If
                     the time is obviously wrong (e.g. negative), it displays
                     as "INVALID". (Valid for jobs and job steps)

              tres-alloc
                     Print the trackable resources allocated to the job if run‐
                     ning. If not running, then print the trackable resources
                     requested by the job.

              tres-bind
                     Print the trackable resources task binding requested by
                     the job or job step.

              tres-freq
                     Print the trackable resources frequencies requested by the
                     job or job step.

              tres-per-job
                     Print the trackable resources requested by the job.

              tres-per-node
                     Print the trackable resources per node requested by the
                     job or job step.

              tres-per-socket
                     Print the trackable resources per socket requested by the
                     job or job step.

              tres-per-step
                     Print the trackable resources requested by the job step.

              tres-per-task
                     Print the trackable resources per task requested by the
                     job or job step.

              userid
                     User ID for a job or job step. (Valid for jobs and job
                     steps)

              username
                     User name for a job or job step. (Valid for jobs and job
                     steps)

              wait4switch
                     The amount of time to wait for the desired number of
                     switches. (Valid for jobs only)

              wckey  Workload Characterization Key (wckey). (Valid for jobs
                     only)

              workdir
                     The job's working directory. (Valid for jobs only)

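              For example, the following prints the job ID right justified in
              a 10 character field, followed by the partition, user name, and
              compact job state (the field list is only illustrative):

                     # squeue -O "jobid:.10,partition,username,statecompact"

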
       -p <part_list>, --partition=<part_list>
              Specify the partitions of the jobs or steps to view. Accepts a
              comma separated list of partition names.


       -P, --priority
              For pending jobs submitted to multiple partitions, list the job
              once per partition. In addition, if jobs are sorted by priority,
              consider both the partition and job priority. This option can be
              used to produce a list of pending jobs in the same order consid‐
              ered for scheduling by Slurm with appropriate additional options
              (e.g. "--sort=-p,i --states=PD").

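              For example, the following lists pending jobs in the order in
              which Slurm considers them for scheduling:

                     # squeue -P --sort=-p,i --states=PD
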

       -q <qos_list>, --qos=<qos_list>
              Specify the QOSs of the jobs or steps to view. Accepts a comma
              separated list of QOS names.


       -R, --reservation=<reservation_name>
              Specify the reservation of the jobs to view.


       -s, --steps
              Specify the job steps to view. This flag indicates that a comma
              separated list of job steps to view follows without an equal
              sign (see examples). The job step format is
              "job_id[_array_id].step_id". Defaults to all job steps. Since
              this option's argument is optional, for proper parsing the sin‐
              gle letter option must be followed immediately with the value
              and not include a space between them. For example "-s1008.0" and
              not "-s 1008.0".


       --sibling
              Show all sibling jobs on a federated cluster. Implies --federa‐
              tion.


       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be reported.
              This uses the same field specification as the <output_format>.
              The long format option "cluster" can also be used to sort jobs
              or job steps by cluster name (e.g. federated jobs). Multiple
              sorts may be performed by listing multiple sort fields separated
              by commas. The field specifications may be preceded by "+" or
              "-" for ascending (default) and descending order respectively.
              For example, a sort value of "P,U" will sort the records by par‐
              tition name then by user id. The default value of sort for jobs
              is "P,t,-p" (increasing partition name then within a given par‐
              tition by increasing job state and then decreasing priority).
              The default value of sort for job steps is "P,i" (increasing
              partition name then within a given partition by increasing step
              id).

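              For example, the following sorts jobs by ascending user name
              and, within each user, by descending job ID:

                     # squeue -S "u,-i"
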

       --start
              Report the expected start time and resources to be allocated for
              pending jobs in order of increasing start time. This is equiva‐
              lent to the following options: --format="%.18i %.9P %.8j %.8u
              %.2t %.19S %.6D %20Y %R", --sort=S and --states=PENDING. Any
              of these options may be explicitly changed as desired by combin‐
              ing the --start option with other option values (e.g. to use a
              different output format). The expected start time of pending
              jobs is only available if Slurm is configured to use the
              backfill scheduling plugin.


       -t <state_list>, --states=<state_list>
              Specify the states of jobs to view. Accepts a comma separated
              list of state names or "all". If "all" is specified then jobs of
              all states will be reported. If no state is specified then pend‐
              ing, running, and completing jobs are reported. See the JOB
              STATE CODES section below for a list of valid states. Both
              extended and compact forms are valid. Note the <state_list>
              supplied is case insensitive ("pd" and "PD" are equivalent).

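              For example, the following shows only pending and running jobs:

                     # squeue -t pd,r
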

       -u <user_list>, --user=<user_list>
              Request jobs or job steps from a comma separated list of users.
              The list can consist of user names or user id numbers. Perfor‐
              mance of the command can be measurably improved for systems with
              large numbers of jobs when a single user is specified.

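              For example, the following shows the jobs of two users (the
              user names are illustrative):

                     # squeue -u alice,bob
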

       --usage
              Print a brief help message listing the squeue options.


       -v, --verbose
              Report details of squeue's actions.


       -V, --version
              Print version information and exit.


       -w <hostlist>, --nodelist=<hostlist>
              Report only on jobs allocated to the specified node or list of
              nodes. This may either be the NodeName or NodeHostname as
              defined in slurm.conf(5) in the event that they differ. A
              node_name of localhost is mapped to the current host name.

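              For example, the following reports only jobs allocated to two
              particular nodes (the node names are illustrative):

                     # squeue -w dev9,dev10
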

JOB REASON CODES
       These codes identify the reason that a job is waiting for execution. A
       job may be waiting for more than one reason, in which case only one of
       those reasons is displayed.

       AssociationJobLimit  The job's association has reached its maximum job
                            count.

       AssociationResourceLimit
                            The job's association has reached some resource
                            limit.

       AssociationTimeLimit The job's association has reached its time limit.

       BadConstraints       The job's constraints can not be satisfied.

       BeginTime            The job's earliest start time has not yet been
                            reached.

       Cleaning             The job is being requeued and still cleaning up
                            from its previous execution.

       Dependency           This job is waiting for a dependent job to com‐
                            plete.

       FrontEndDown         No front end node is available to execute this
                            job.

       InactiveLimit        The job reached the system InactiveLimit.

       InvalidAccount       The job's account is invalid.

       InvalidQOS           The job's QOS is invalid.

       JobHeldAdmin         The job is held by a system administrator.

       JobHeldUser          The job is held by the user.

       JobLaunchFailure     The job could not be launched. This may be due
                            to a file system problem, invalid program name,
                            etc.

       Licenses             The job is waiting for a license.

       NodeDown             A node required by the job is down.

       NonZeroExitCode      The job terminated with a non-zero exit code.

       PartitionDown        The partition required by this job is in a DOWN
                            state.

       PartitionInactive    The partition required by this job is in an Inac‐
                            tive state and not able to start jobs.

       PartitionNodeLimit   The number of nodes required by this job is out‐
                            side of its partition's current limits. Can also
                            indicate that required nodes are DOWN or DRAINED.

       PartitionTimeLimit   The job's time limit exceeds its partition's
                            current time limit.

       Priority             One or more higher priority jobs exist for this
                            partition or advanced reservation.

       Prolog               Its PrologSlurmctld program is still running.

       QOSJobLimit          The job's QOS has reached its maximum job count.

       QOSResourceLimit     The job's QOS has reached some resource limit.

       QOSTimeLimit         The job's QOS has reached its time limit.

       ReqNodeNotAvail      Some node specifically required by the job is not
                            currently available. The node may currently be
                            in use, reserved for another job, in an advanced
                            reservation, DOWN, DRAINED, or not responding.
                            Nodes which are DOWN, DRAINED, or not responding
                            will be identified as part of the job's "reason"
                            field as "UnavailableNodes". Such nodes will typ‐
                            ically require the intervention of a system
                            administrator to make available.

       Reservation          The job is waiting for its advanced reservation
                            to become available.

       Resources            The job is waiting for resources to become avail‐
                            able.

       SystemFailure        Failure of the Slurm system, a file system, the
                            network, etc.

       TimeLimit            The job exhausted its time limit.

       QOSUsageThreshold    Required QOS threshold has been breached.

       WaitingForScheduling No reason has been set for this job yet. Waiting
                            for the scheduler to determine the appropriate
                            reason.


JOB STATE CODES
       Jobs typically pass through several states in the course of their exe‐
       cution. The typical states are PENDING, RUNNING, SUSPENDED, COMPLET‐
       ING, and COMPLETED. An explanation of each state follows.

       BF BOOT_FAIL      Job terminated due to launch failure, typically due
                         to a hardware failure (e.g. unable to boot the node
                         or block and the job can not be requeued).

       CA CANCELLED      Job was explicitly cancelled by the user or system
                         administrator. The job may or may not have been
                         initiated.

       CD COMPLETED      Job has terminated all processes on all nodes with
                         an exit code of zero.

       CF CONFIGURING    Job has been allocated resources, but is waiting
                         for them to become ready for use (e.g. booting).

       CG COMPLETING     Job is in the process of completing. Some processes
                         on some nodes may still be active.

       DL DEADLINE       Job terminated on deadline.

       F FAILED          Job terminated with non-zero exit code or other
                         failure condition.

       NF NODE_FAIL      Job terminated due to failure of one or more allo‐
                         cated nodes.

       OOM OUT_OF_MEMORY Job experienced an out of memory error.

       PD PENDING        Job is awaiting resource allocation.

       PR PREEMPTED      Job terminated due to preemption.

       R RUNNING         Job currently has an allocation.

       RD RESV_DEL_HOLD  Job is held.

       RF REQUEUE_FED    Job is being requeued by a federation.

       RH REQUEUE_HOLD   Held job is being requeued.

       RQ REQUEUED       Completing job is being requeued.

       RS RESIZING       Job is about to change size.

       RV REVOKED        Sibling was removed from cluster due to other clus‐
                         ter starting the job.

       SI SIGNALING      Job is being signaled.

       SE SPECIAL_EXIT   The job was requeued in a special state. This state
                         can be set by users, typically in EpilogSlurmctld,
                         if the job has terminated with a particular exit
                         value.

       SO STAGE_OUT      Job is staging out files.

       ST STOPPED        Job has an allocation, but execution has been
                         stopped with SIGSTOP signal. CPUs have been
                         retained by this job.

       S SUSPENDED       Job has an allocation, but execution has been sus‐
                         pended and CPUs have been released for other jobs.

       TO TIMEOUT        Job terminated upon reaching its time limit.


ENVIRONMENT VARIABLES
       Some squeue options may be set via environment variables. These envi‐
       ronment variables, along with their corresponding options, are listed
       below. (Note: Command line options will always override these
       settings.)

       SLURM_BITSTR_LEN    Specifies the string length to be used for holding
                           a job array's task ID expression. The default
                           value is 64 bytes. A value of 0 will print the
                           full expression with any length required. Larger
                           values may adversely impact the application perfor‐
                           mance.
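
                           For example, using Bourne shell syntax, the
                           following prints job array task ID expressions at
                           full length for a single invocation:

                                  # SLURM_BITSTR_LEN=0 squeue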

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps. A
                           value of standard, the default value, generates
                           output in the form
                           "year-month-dateThour:minute:second". A value of
                           relative returns only "hour:minute:second" for the
                           current day. For other dates in the current year
                           it prints the "hour:minute" preceded by "Tomorr"
                           (tomorrow), "Ystday" (yesterday), or the name of
                           the day for the coming week (e.g. "Mon", "Tue",
                           etc.), otherwise the date (e.g. "25 Apr"). For
                           other years it returns the date, month, and year
                           without a time (e.g. "6 Jun 2012"). All of the
                           time stamps use a 24 hour format.

                           A valid strftime() format can also be specified.
                           For example, a value of "%a %T" will report the day
                           of the week and a time stamp (e.g. "Mon 12:34:56").
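
                           For example, using Bourne shell syntax, the
                           following reports expected start times with the
                           day of the week:

                                  # SLURM_TIME_FORMAT="%a %T" squeue --start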

       SQUEUE_ACCOUNT      -A <account_list>, --account=<account_list>

       SQUEUE_ALL          -a, --all

       SQUEUE_ARRAY        -r, --array

       SQUEUE_NAMES        --name=<name_list>

       SQUEUE_FEDERATION   --federation

       SQUEUE_FORMAT       -o <output_format>, --format=<output_format>

       SQUEUE_FORMAT2      -O <output_format>, --Format=<output_format>

       SQUEUE_LICENSES     -L <license_list>, --licenses=<license_list>

       SQUEUE_LOCAL        --local

       SQUEUE_PARTITION    -p <part_list>, --partition=<part_list>

       SQUEUE_PRIORITY     -P, --priority

       SQUEUE_QOS          -q <qos_list>, --qos=<qos_list>

       SQUEUE_SIBLING      --sibling

       SQUEUE_SORT         -S <sort_list>, --sort=<sort_list>

       SQUEUE_STATES       -t <state_list>, --states=<state_list>

       SQUEUE_USERS        -u <user_list>, --user=<user_list>


EXAMPLES
       Print the jobs scheduled in the debug partition and in the COMPLETED
       state in the format with six right justified digits for the job id fol‐
       lowed by the priority with an arbitrary field size:
       # squeue -p debug -t COMPLETED -o "%.6i %p"
        JOBID PRIORITY
        65543 99993
        65544 99992
        65545 99991

       Print the job steps in the debug partition sorted by user:
       # squeue -s -p debug -S u
          STEPID     NAME PARTITION     USER    TIME NODELIST
         65552.1    test1     debug    alice    0:23 dev[1-4]
         65562.2  big_run     debug      bob    0:18 dev22
         65550.1   param1     debug  candice 1:43:21 dev[6-12]

       Print information only about jobs 12345, 12346, and 12348:
       # squeue --jobs 12345,12346,12348
        JOBID PARTITION NAME USER ST  TIME  NODES NODELIST(REASON)
        12345     debug job1 dave  R  0:21      4 dev[9-12]
        12346     debug job2 dave PD  0:00      8 (Resources)
        12348     debug job3   ed PD  0:00      4 (Priority)

       Print information only about job step 65552.1:
       # squeue --steps 65552.1
          STEPID     NAME PARTITION     USER    TIME NODELIST
         65552.1    test1     debug    alice   12:49 dev[1-4]


COPYING
       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2016 SchedMD LLC.

       This file is part of Slurm, a resource management program. For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it under
       the terms of the GNU General Public License as published by the Free
       Software Foundation; either version 2 of the License, or (at your
       option) any later version.

       Slurm is distributed in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
       for more details.

SEE ALSO
       scancel(1), scontrol(1), sinfo(1), smap(1), srun(1),
       slurm_load_ctl_conf(3), slurm_load_jobs(3), slurm_load_node(3),
       slurm_load_partitions(3)



April 2019                       Slurm Commands                       squeue(1)