squeue(1)                       Slurm Commands                       squeue(1)

NAME
       squeue - view information about jobs located in the Slurm scheduling
       queue.

SYNOPSIS
       squeue [OPTIONS...]

DESCRIPTION
       squeue is used to view job and job step information for jobs managed by
       Slurm.

OPTIONS
       -A, --account=<account_list>
              Specify the accounts of the jobs to view. Accepts a comma sepa‐
              rated list of account names. This has no effect when listing job
              steps.

       -a, --all
              Display information about jobs and job steps in all partitions.
              This causes information to be displayed about partitions that
              are configured as hidden, partitions that are unavailable to a
              user's group, and federated jobs that are in a "revoked" state.

       -r, --array
              Display one job array element per line. Without this option,
              the display will be optimized for use with job arrays (pending
              job array elements will be combined on one line of output with
              the array index values printed using a regular expression).

       --array-unique
              Display one unique pending job array element per line. Without
              this option, the pending job array elements will be grouped into
              the master array job to optimize the display. This can also be
              set with the environment variable SQUEUE_ARRAY_UNIQUE.

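The two array-display modes can be contrasted with a sketch; the user name and the example job IDs in the comments are hypothetical:

```shell
# Hypothetical invocations contrasting the array-display modes. The user
# name "alice" and the job IDs in the comments are made up for illustration.
user="alice"
compact="squeue -u $user"       # pending elements grouped, e.g. 1234_[5-99]
expanded="squeue -r -u $user"   # one line per element, e.g. 1234_5, 1234_6
echo "$compact"
echo "$expanded"
```
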
       -M, --clusters=<cluster_name>
              Clusters to issue commands to. Multiple cluster names may be
              comma separated. A value of 'all' will query all clusters.
              This option implicitly sets the --local option.

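A sketch of multi-cluster queries under the syntax above; the cluster names are hypothetical:

```shell
# Hypothetical cluster names; 'all' queries every cluster Slurm knows about.
clusters="cluster1,cluster2"
echo "squeue -M $clusters"
echo "squeue -M all"
```
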
       --federation
              Show jobs from the federation if a member of one.

       -o, --format=<output_format>
              Specify the information to be displayed, its size and position
              (right or left justified). Also see the -O, --Format=<out‐
              put_format> option described below (which supports less flexi‐
              bility in formatting, but supports access to all fields). If
              the command is executed in a federated cluster environment and
              information about more than one cluster is to be displayed and
              the -h, --noheader option is used, then the cluster name will be
              displayed before the default output formats shown below.

              The default formats with various options are:

              default        "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

              -l, --long     "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"

              -s, --steps    "%.15i %.8j %.9P %.8u %.9M %N"

       The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified, whatever
                        is needed to print the information will be used.

                 .      Indicates the output should be right justified and
                        size must be specified. By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

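The syntax above can be exercised by composing a format string before handing it to squeue; a minimal sketch (the field choices and widths are illustrative, not a recommended default):

```shell
# Sketch: start from the default -o format and widen the job-name field (%j)
# to 20 right-justified characters. A suffix string could be appended after
# any type character (e.g. "%.10Mmin" would print the time followed by "min").
fmt='%.18i %.9P %.20j %.8u %.2t %.10M %.6D %R'
echo "squeue -o \"$fmt\""
```
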
       Note that many of these type specifications are valid only for jobs
       while others are valid only for job steps. Valid type specifications
       include:

              %all  Print all fields available for this data type with a ver‐
                    tical bar separating each field.

              %a    Account associated with the job.  (Valid for jobs only)

              %A    Number of tasks created by a job step.  This reports the
                    value of the srun --ntasks option.  (Valid for job steps
                    only)

              %A    Job id.  This will have a unique value for each element of
                    job arrays.  (Valid for jobs only)

              %B    Executing (batch) host. For an allocated session, this is
                    the host on which the session is executing (i.e. the node
                    from which the srun or the salloc command was executed).
                    For a batch job, this is the node executing the batch
                    script. In the case of a typical Linux cluster, this would
                    be the compute node zero of the allocation. In the case of
                    a Cray ALPS system, this would be the front-end host whose
                    slurmd daemon executes the job script.

              %c    Minimum number of CPUs (processors) per node requested by
                    the job.  This reports the value of the srun --mincpus op‐
                    tion with a default value of zero.  (Valid for jobs only)

              %C    Number of CPUs (processors) requested by the job or allo‐
                    cated to it if already running.  As a job is completing
                    this number will reflect the current number of CPUs allo‐
                    cated.  (Valid for jobs only)

              %d    Minimum size of temporary disk space (in MB) requested by
                    the job.  (Valid for jobs only)

              %D    Number of nodes allocated to the job or the minimum number
                    of nodes required by a pending job. The actual number of
                    nodes allocated to a pending job may exceed this number if
                    the job specified a node range count (e.g. minimum and
                    maximum node counts) or the job specifies a processor
                    count instead of a node count. As a job is completing this
                    number will reflect the current number of nodes allocated.
                    (Valid for jobs only)

              %e    Time at which the job ended or is expected to end (based
                    upon its time limit).  (Valid for jobs only)

              %E    Job dependencies remaining. This job will not begin execu‐
                    tion until these dependent jobs complete. In the case of a
                    job that can not run due to job dependencies never being
                    satisfied, the full original job dependency specification
                    will be reported. A value of NULL implies this job has no
                    dependencies.  (Valid for jobs only)

              %f    Features required by the job.  (Valid for jobs only)

              %F    Job array's job ID. This is the base job ID.  For non-ar‐
                    ray jobs, this is the job ID.  (Valid for jobs only)

              %g    Group name of the job.  (Valid for jobs only)

              %G    Group ID of the job.  (Valid for jobs only)

              %h    Can the compute resources allocated to the job be over
                    subscribed by other jobs.  The resources to be over sub‐
                    scribed can be nodes, sockets, cores, or hyperthreads de‐
                    pending upon configuration.  The value will be "YES" if
                    the job was submitted with the oversubscribe option or the
                    partition is configured with OverSubscribe=Force, "NO" if
                    the job requires exclusive node access, "USER" if the al‐
                    located compute nodes are dedicated to a single user,
                    "MCS" if the allocated compute nodes are dedicated to a
                    single security class (see the MCSPlugin and MCSParameters
                    configuration parameters for more information), and "OK"
                    otherwise (typically allocated dedicated CPUs).  (Valid
                    for jobs only)

              %H    Number of sockets per node requested by the job.  This re‐
                    ports the value of the srun --sockets-per-node option.
                    When --sockets-per-node has not been set, "*" is dis‐
                    played.  (Valid for jobs only)

              %i    Job or job step id.  In the case of job arrays, the job ID
                    format will be of the form "<base_job_id>_<index>".  By
                    default, the job array index field size will be limited to
                    64 bytes.  Use the environment variable SLURM_BITSTR_LEN
                    to specify larger field sizes.  (Valid for jobs and job
                    steps)  In the case of heterogeneous job allocations, the
                    job ID format will be of the form "#+#" where the first
                    number is the "heterogeneous job leader" and the second
                    number the zero origin offset for each component of the
                    job.

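Since %i truncates the array-index bitstring at 64 bytes by default, large arrays may need a wider field requested through the environment; a sketch (the value 128 is an arbitrary example):

```shell
# Sketch: raise the 64-byte cap on the printed array-index bitstring before
# listing large job arrays; 128 bytes here is an arbitrary example value.
export SLURM_BITSTR_LEN=128
echo "squeue -r"
```
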
              %I    Number of cores per socket requested by the job.  This re‐
                    ports the value of the srun --cores-per-socket option.
                    When --cores-per-socket has not been set, "*" is dis‐
                    played.  (Valid for jobs only)

              %j    Job or job step name.  (Valid for jobs and job steps)

              %J    Number of threads per core requested by the job.  This re‐
                    ports the value of the srun --threads-per-core option.
                    When --threads-per-core has not been set, "*" is dis‐
                    played.  (Valid for jobs only)

              %k    Comment associated with the job.  (Valid for jobs only)

              %K    Job array index.  By default, this field size will be lim‐
                    ited to 64 bytes.  Use the environment variable
                    SLURM_BITSTR_LEN to specify larger field sizes.  (Valid
                    for jobs only)

              %l    Time limit of the job or job step in days-hours:min‐
                    utes:seconds.  The value may be "NOT_SET" if not yet es‐
                    tablished or "UNLIMITED" for no limit.  (Valid for jobs
                    and job steps)

              %L    Time left for the job to execute in days-hours:min‐
                    utes:seconds.  This value is calculated by subtracting the
                    job's time used from its time limit.  The value may be
                    "NOT_SET" if not yet established or "UNLIMITED" for no
                    limit.  (Valid for jobs only)

              %m    Minimum size of memory (in MB) requested by the job.
                    (Valid for jobs only)

              %M    Time used by the job or job step in days-hours:min‐
                    utes:seconds.  The days and hours are printed only as
                    needed.  For job steps this field shows the elapsed time
                    since execution began and thus will be inaccurate for job
                    steps which have been suspended.  Clock skew between nodes
                    in the cluster will cause the time to be inaccurate.  If
                    the time is obviously wrong (e.g. negative), it displays
                    as "INVALID".  (Valid for jobs and job steps)

              %n    List of node names explicitly requested by the job.
                    (Valid for jobs only)

              %N    List of nodes allocated to the job or job step. In the
                    case of a COMPLETING job, the list of nodes will comprise
                    only those nodes that have not yet been returned to ser‐
                    vice.  (Valid for jobs and job steps)

              %o    The command to be executed.

              %O    Are contiguous nodes requested by the job.  (Valid for
                    jobs only)

              %p    Priority of the job (converted to a floating point number
                    between 0.0 and 1.0).  Also see %Q.  (Valid for jobs only)

              %P    Partition of the job or job step.  (Valid for jobs and job
                    steps)

              %q    Quality of service associated with the job.  (Valid for
                    jobs only)

              %Q    Priority of the job (generally a very large unsigned inte‐
                    ger).  Also see %p.  (Valid for jobs only)

              %r    The reason a job is in its current state.  See the JOB
                    REASON CODES section below for more information.  (Valid
                    for jobs only)

              %R    For pending jobs: the reason a job is waiting for execu‐
                    tion is printed within parentheses.  For terminated jobs
                    with failure: an explanation as to why the job failed is
                    printed within parentheses.  For all other job states: the
                    list of allocated nodes.  See the JOB REASON CODES section
                    below for more information.  (Valid for jobs only)

              %s    Node selection plugin specific data for a job. Possible
                    data includes: Geometry requirement of resource allocation
                    (X,Y,Z dimensions), Connection type (TORUS, MESH, or NAV
                    == torus else mesh), Permit rotation of geometry (yes or
                    no), Node use (VIRTUAL or COPROCESSOR), etc.  (Valid for
                    jobs only)

              %S    Actual or expected start time of the job or job step.
                    (Valid for jobs and job steps)

              %t    Job state in compact form.  See the JOB STATE CODES sec‐
                    tion below for a list of possible states.  (Valid for jobs
                    only)

              %T    Job state in extended form.  See the JOB STATE CODES sec‐
                    tion below for a list of possible states.  (Valid for jobs
                    only)

              %u    User name for a job or job step.  (Valid for jobs and job
                    steps)

              %U    User ID for a job or job step.  (Valid for jobs and job
                    steps)

              %v    Reservation for the job.  (Valid for jobs only)

              %V    The job's submission time.

              %w    Workload Characterization Key (wckey).  (Valid for jobs
                    only)

              %W    Licenses reserved for the job.  (Valid for jobs only)

              %x    List of node names explicitly excluded by the job.  (Valid
                    for jobs only)

              %X    Count of cores reserved on each node for system use (core
                    specialization).  (Valid for jobs only)

              %y    Nice value (adjustment to a job's scheduling priority).
                    (Valid for jobs only)

              %Y    For pending jobs, a list of the nodes expected to be used
                    when the job is started.

              %z    Number of requested sockets, cores, and threads (S:C:T)
                    per node for the job.  When (S:C:T) has not been set, "*"
                    is displayed.  (Valid for jobs only)

              %Z    The job's working directory.

       -O, --Format=<output_format>
              Specify the information to be displayed.  Also see the -o,
              --format=<output_format> option described above (which supports
              greater flexibility in formatting, but does not support access
              to all fields because we ran out of letters).  Requests a comma
              separated list of job information to be displayed.

              The format of each field is "type[:[.][size][suffix]]"

                 size   Minimum field size. If no size is specified, 20 char‐
                        acters will be allocated to print the information.

                 .      Indicates the output should be right justified and
                        size must be specified.  By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

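A sketch assembling a --Format string from field names defined below; the field selection and widths are illustrative, and ":.12" right-justifies a 12-character field:

```shell
# Sketch: a long-form field list for -O/--Format. Fields with no size get the
# 20-character default; "JobID:.12" requests a 12-character right-justified
# field. The particular fields chosen here are only an example.
fmt='JobID:.12,Partition:10,Name:24,NumNodes:.6,ReasonList'
echo "squeue -O \"$fmt\""
```
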
       Note that many of these type specifications are valid only for jobs
       while others are valid only for job steps.  Valid type specifications
       include:

              Account
                     Print the account associated with the job.  (Valid for
                     jobs only)

              AccrueTime
                     Print the accrue time associated with the job.  (Valid
                     for jobs only)

              admin_comment
                     Administrator comment associated with the job.  (Valid
                     for jobs only)

              AllocNodes
                     Print the nodes allocated to the job.  (Valid for jobs
                     only)

              AllocSID
                     Print the session ID used to submit the job.  (Valid for
                     jobs only)

              ArrayJobID
                     Prints the job ID of the job array.  (Valid for jobs and
                     job steps)

              ArrayTaskID
                     Prints the task ID of the job array.  (Valid for jobs and
                     job steps)

              AssocID
                     Prints the ID of the job association.  (Valid for jobs
                     only)

              BatchFlag
                     Prints whether the batch flag has been set.  (Valid for
                     jobs only)

              BatchHost
                     Executing (batch) host. For an allocated session, this is
                     the host on which the session is executing (i.e. the node
                     from which the srun or the salloc command was executed).
                     For a batch job, this is the node executing the batch
                     script. In the case of a typical Linux cluster, this
                     would be the compute node zero of the allocation. In the
                     case of a Cray ALPS system, this would be the front-end
                     host whose slurmd daemon executes the job script.  (Valid
                     for jobs only)

              BoardsPerNode
                     Prints the number of boards per node allocated to the
                     job.  (Valid for jobs only)

              BurstBuffer
                     Burst Buffer specification (Valid for jobs only)

              BurstBufferState
                     Burst Buffer state (Valid for jobs only)

              Cluster
                     Name of the cluster that is running the job or job step.

              ClusterFeature
                     Cluster features required by the job.  (Valid for jobs
                     only)

              Command
                     The command to be executed.  (Valid for jobs only)

              Comment
                     Comment associated with the job.  (Valid for jobs only)

              Contiguous
                     Are contiguous nodes requested by the job.  (Valid for
                     jobs only)

              Container
                     OCI container bundle path.

              Cores  Number of cores per socket requested by the job.  This
                     reports the value of the srun --cores-per-socket option.
                     When --cores-per-socket has not been set, "*" is dis‐
                     played.  (Valid for jobs only)

              CoreSpec
                     Count of cores reserved on each node for system use (core
                     specialization).  (Valid for jobs only)

              CPUFreq
                     Prints the frequency of the allocated CPUs.  (Valid for
                     job steps only)

              cpus-per-task
                     Prints the number of CPUs per task allocated to the job.
                     (Valid for jobs only)

              cpus-per-tres
                     Print the number of CPUs required per trackable resource
                     (TRES) allocated to the job or job step.

              Deadline
                     Prints the deadline assigned to the job (Valid for jobs
                     only)

              DelayBoot
                     Delay boot time.  (Valid for jobs only)

              Dependency
                     Job dependencies remaining. This job will not begin exe‐
                     cution until these dependent jobs complete. In the case
                     of a job that can not run due to job dependencies never
                     being satisfied, the full original job dependency speci‐
                     fication will be reported. A value of NULL implies this
                     job has no dependencies.  (Valid for jobs only)

              DerivedEC
                     Derived exit code for the job, which is the highest exit
                     code of any job step.  (Valid for jobs only)

              EligibleTime
                     Time the job is eligible for running.  (Valid for jobs
                     only)

              EndTime
                     The time of job termination, actual or expected.  (Valid
                     for jobs only)

              exit_code
                     The exit code for the job.  (Valid for jobs only)

              Feature
                     Features required by the job.  (Valid for jobs only)

              GroupID
                     Group ID of the job.  (Valid for jobs only)

              GroupName
                     Group name of the job.  (Valid for jobs only)

              HetJobID
                     Job ID of the heterogeneous job leader.

              HetJobIDSet
                     Expression identifying all component job IDs within a
                     heterogeneous job.

              HetJobOffset
                     Zero origin offset within a collection of heterogeneous
                     job components.

              JobArrayID
                     Job array's job ID. This is the base job ID.  For non-ar‐
                     ray jobs, this is the job ID.  (Valid for jobs only)

              JobID  Job ID.  This will have a unique value for each element
                     of job arrays and each component of heterogeneous jobs.
                     (Valid for jobs only)

              LastSchedEval
                     Prints the last time the job was evaluated for schedul‐
                     ing.  (Valid for jobs only)

              Licenses
                     Licenses reserved for the job.  (Valid for jobs only)

              MaxCPUs
                     Prints the max number of CPUs allocated to the job.
                     (Valid for jobs only)

              MaxNodes
                     Prints the max number of nodes allocated to the job.
                     (Valid for jobs only)

              MCSLabel
                     Prints the MCS_label of the job.  (Valid for jobs only)

              mem-per-tres
                     Print the memory (in MB) required per trackable resource
                     allocated to the job or job step.

              MinCpus
                     Minimum number of CPUs (processors) per node requested by
                     the job.  This reports the value of the srun --mincpus
                     option with a default value of zero.  (Valid for jobs
                     only)

              MinMemory
                     Minimum size of memory (in MB) requested by the job.
                     (Valid for jobs only)

              MinTime
                     Minimum time limit of the job (Valid for jobs only)

              MinTmpDisk
                     Minimum size of temporary disk space (in MB) requested by
                     the job.  (Valid for jobs only)

              Name   Job or job step name.  (Valid for jobs and job steps)

              Network
                     The network that the job is running on.  (Valid for jobs
                     and job steps)

              Nice   Nice value (adjustment to a job's scheduling priority).
                     (Valid for jobs only)

              NodeList
                     List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will comprise
                     only those nodes that have not yet been returned to ser‐
                     vice.  (Valid for jobs only)

              Nodes  List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will comprise
                     only those nodes that have not yet been returned to ser‐
                     vice.  (Valid for job steps only)

              NTPerBoard
                     The number of tasks per board allocated to the job.
                     (Valid for jobs only)

              NTPerCore
                     The number of tasks per core allocated to the job.
                     (Valid for jobs only)

              NTPerNode
                     The number of tasks per node allocated to the job.
                     (Valid for jobs only)

              NTPerSocket
                     The number of tasks per socket allocated to the job.
                     (Valid for jobs only)

              NumCPUs
                     Number of CPUs (processors) requested by the job or allo‐
                     cated to it if already running.  As a job is completing,
                     this number will reflect the current number of CPUs allo‐
                     cated.  (Valid for jobs and job steps)

              NumNodes
                     Number of nodes allocated to the job or the minimum num‐
                     ber of nodes required by a pending job. The actual number
                     of nodes allocated to a pending job may exceed this num‐
                     ber if the job specified a node range count (e.g. mini‐
                     mum and maximum node counts) or the job specifies a pro‐
                     cessor count instead of a node count. As a job is com‐
                     pleting this number will reflect the current number of
                     nodes allocated.  (Valid for jobs only)

              NumTasks
                     Number of tasks requested by a job or job step.  This re‐
                     ports the value of the --ntasks option.  (Valid for jobs
                     and job steps)

              Origin Cluster name where federated job originated from.  (Valid
                     for federated jobs only)

              OriginRaw
                     Cluster ID where federated job originated from.  (Valid
                     for federated jobs only)

              OverSubscribe
                     Can the compute resources allocated to the job be over
                     subscribed by other jobs.  The resources to be over sub‐
                     scribed can be nodes, sockets, cores, or hyperthreads de‐
                     pending upon configuration.  The value will be "YES" if
                     the job was submitted with the oversubscribe option or
                     the partition is configured with OverSubscribe=Force,
                     "NO" if the job requires exclusive node access, "USER" if
                     the allocated compute nodes are dedicated to a single
                     user, "MCS" if the allocated compute nodes are dedicated
                     to a single security class (see the MCSPlugin and MCSPa‐
                     rameters configuration parameters for more information),
                     and "OK" otherwise (typically allocated dedicated CPUs).
                     (Valid for jobs only)

              Partition
                     Partition of the job or job step.  (Valid for jobs and
                     job steps)

              PendingTime
                     The time (in seconds) between start time and submit time
                     of the job.  If the job has not started yet, then the
                     time (in seconds) between now and the submit time of the
                     job.  (Valid for jobs only)

              PreemptTime
                     The preempt time for the job.  (Valid for jobs only)

              Prefer The preferred features of a pending job.  (Valid for jobs
                     only)

              Priority
                     Priority of the job (converted to a floating point number
                     between 0.0 and 1.0).  Also see PriorityLong.  (Valid for
                     jobs only)

              PriorityLong
                     Priority of the job (generally a very large unsigned in‐
                     teger).  Also see Priority.  (Valid for jobs only)

              Profile
                     Profile of the job.  (Valid for jobs only)

              QOS    Quality of service associated with the job.  (Valid for
                     jobs only)

              Reason The reason a job is in its current state.  See the JOB
                     REASON CODES section below for more information.  (Valid
                     for jobs only)

              ReasonList
                     For pending jobs: the reason a job is waiting for execu‐
                     tion is printed within parentheses.  For terminated jobs
                     with failure: an explanation as to why the job failed is
                     printed within parentheses.  For all other job states:
                     the list of allocated nodes.  See the JOB REASON CODES
                     section below for more information.  (Valid for jobs
                     only)

              Reboot Indicates if the allocated nodes should be rebooted be‐
                     fore starting the job.  (Valid for jobs only)

              ReqNodes
                     List of node names explicitly requested by the job.
                     (Valid for jobs only)

              ReqSwitch
                     The maximum number of switches requested for the job.
                     (Valid for jobs only)

              Requeue
                     Prints whether the job will be requeued on failure.
                     (Valid for jobs only)

              Reservation
                     Reservation for the job.  (Valid for jobs only)

              ResizeTime
                     The time of the job's most recent size change.  (Valid
                     for jobs only)

              RestartCnt
                     The number of restarts for the job.  (Valid for jobs
                     only)

              ResvPort
                     Reserved ports of the job.  (Valid for job steps only)

              SchedNodes
                     For pending jobs, a list of the nodes expected to be used
                     when the job is started.  (Valid for jobs only)

              SCT    Number of requested sockets, cores, and threads (S:C:T)
                     per node for the job.  When (S:C:T) has not been set, "*"
                     is displayed.  (Valid for jobs only)

              SelectJobInfo
                     Node selection plugin specific data for a job. Possible
                     data includes: Geometry requirement of resource alloca‐
                     tion (X,Y,Z dimensions), Connection type (TORUS, MESH, or
                     NAV == torus else mesh), Permit rotation of geometry (yes
                     or no), Node use (VIRTUAL or COPROCESSOR), etc.  (Valid
                     for jobs only)

              SiblingsActive
                     Cluster names of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsActiveRaw
                     Cluster IDs of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsViable
698                     Cluster names of where federated sibling jobs are  viable
699                     to run.  (Valid for federated jobs only)
700
701              SiblingsViableRaw
702                     Cluster IDs of where federated sibling jobs are viable
703                     to run.  (Valid for federated jobs only)
704
705              Sockets
706                     Number of sockets per node requested by  the  job.   This
707                     reports  the value of the srun --sockets-per-node option.
708                     When --sockets-per-node has not been  set,  "*"  is  dis‐
709                     played.  (Valid for jobs only)
710
711              SPerBoard
712                     Number of sockets per board allocated to the job.  (Valid
713                     for jobs only)
714
715              StartTime
716                     Actual or expected start time of the  job  or  job  step.
717                     (Valid for jobs and job steps)
718
719              State  Job state in extended form.  See the JOB STATE CODES sec‐
720                     tion below for a list of  possible  states.   (Valid  for
721                     jobs only)
722
723              StateCompact
724                     Job  state in compact form.  See the JOB STATE CODES sec‐
725                     tion below for a list of  possible  states.   (Valid  for
726                     jobs only)
727
728              STDERR The directory to which standard  error  is  written.
729                     (Valid for jobs only)
730
731              STDIN  The directory for standard input.  (Valid for jobs only)
732
733              STDOUT The directory to which standard output  is  written.
734                     (Valid for jobs only)
735
736              StepID Job  or  job step ID.  In the case of job arrays, the job
737                     ID format will be of  the  form  "<base_job_id>_<index>".
738                     (Valid for job steps only)
739
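       An array ID of the "<base_job_id>_<index>" form can be split with
       plain POSIX parameter expansion; a minimal sketch (the ID value below
       is invented for illustration):

```shell
# Split an array task ID of the form "<base_job_id>_<index>".
jobid='12345_7'              # example value, not from a real cluster
base=${jobid%%_*}            # strip the longest "_*" suffix: 12345
index=${jobid#*_}            # strip the shortest "*_" prefix: 7
echo "$base $index"          # 12345 7
```
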
740              StepName
741                     Job step name.  (Valid for job steps only)
742
743              StepState
744                     The state of the job step.  (Valid for job steps only)
745
746              SubmitTime
747                     The time at which the job was  submitted.   (Valid  for
748                     jobs only)
749
750              system_comment
751                     System comment associated with the job.  (Valid for  jobs
752                     only)
753
754              Threads
755                     Number  of  threads  per core requested by the job.  This
756                     reports the value of the srun --threads-per-core  option.
757                     When  --threads-per-core  has  not  been set, "*" is dis‐
758                     played.  (Valid for jobs only)
759
760              TimeLeft
761                     Time left for  the  job  to  execute  in  days-hours:min‐
762                     utes:seconds.   This  value  is calculated by subtracting
763                     the job's time used from its time limit.  The  value  may
764                     be "NOT_SET" if not yet established or "UNLIMITED" for no
765                     limit.  (Valid for jobs only)
766
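       For arithmetic in scripts, the days-hours:minutes:seconds values in
       fields such as TimeLeft and TimeUsed can be converted to seconds; a
       minimal awk sketch (sample values only; the special "NOT_SET",
       "UNLIMITED", and "INVALID" strings are not handled):

```shell
# to_seconds: convert a Slurm elapsed-time string to total seconds.
# Accepts "days-hours:minutes:seconds", "hours:minutes:seconds", or
# "minutes:seconds" (days and hours are printed only as needed).
to_seconds() {
    echo "$1" | awk -F'[-:]' '{
        if (NF == 4)      print $1*86400 + $2*3600 + $3*60 + $4
        else if (NF == 3) print $1*3600 + $2*60 + $3
        else              print $1*60 + $2
    }'
}

to_seconds '2-03:04:05'    # 183845
to_seconds '04:05'         # 245
```
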
767              TimeLimit
768                     Time limit for the job or job step.  (Valid  for  jobs  and
769                     job steps)
770
771              TimeUsed
772                     Time  used  by  the  job  or  job step in days-hours:min‐
773                     utes:seconds.  The days and hours  are  printed  only  as
774                     needed.   For job steps this field shows the elapsed time
775                     since execution began and thus will be inaccurate for job
776                     steps  which  have  been  suspended.   Clock skew between
777                     nodes in the cluster will cause the time  to  be  inaccu‐
778                     rate.  If the time is obviously wrong (e.g. negative), it
779                     displays as "INVALID".  (Valid for jobs and job steps)
780
781              tres-alloc
782                     Print the trackable resources allocated  to  the  job  if
783                     running.   If  not  running, then print the trackable re‐
784                     sources requested by the job.
785
786              tres-bind
787                     Print the trackable resources task binding  requested  by
788                     the job or job step.
789
790              tres-freq
791                     Print  the  trackable  resources frequencies requested by
792                     the job or job step.
793
794              tres-per-job
795                     Print the trackable resources requested by the job.
796
797              tres-per-node
798                     Print the trackable resources per node requested  by  the
799                     job or job step.
800
801              tres-per-socket
802                     Print the trackable resources per socket requested by the
803                     job or job step.
804
805              tres-per-step
806                     Print the trackable resources requested by the job step.
807
808              tres-per-task
809                     Print the trackable resources per task requested  by  the
810                     job or job step.
811
812              UserID User  ID  for a job or job step.  (Valid for jobs and job
813                     steps)
814
815              UserName
816                     User name for a job or job step.  (Valid for jobs and job
817                     steps)
818
819              Wait4Switch
820                     The  amount  of  time  to  wait for the desired number of
821                     switches.  (Valid for jobs only)
822
823              WCKey  Workload Characterization Key (wckey).  (Valid  for  jobs
824                     only)
825
826              WorkDir
827                     The job's working directory.  (Valid for jobs only)
828
829       --help Print a help message describing all squeue options.
830
831       --hide Do  not display information about jobs and job steps in all par‐
832              titions. By default, information about partitions that are  con‐
833              figured  as hidden or are not available to the user's group will
834              not be displayed (i.e. this is the default behavior).
835
836       -i, --iterate=<seconds>
837              Repeatedly gather and report the requested  information  at  the
838              interval  specified  (in  seconds).   By  default, prints a time
839              stamp with the header.
840
841       -j, --jobs=<job_id_list>
842              Requests a comma separated list of job IDs to display.  Defaults
843              to  all  jobs.   The  --jobs=<job_id_list> option may be used in
844              conjunction with the --steps option to  print  step  information
845              about  specific  jobs.   Note: If a list of job IDs is provided,
846              the jobs are displayed even if they are  on  hidden  partitions.
847              Since this option's argument is optional, for proper parsing the
848              single letter option must be followed immediately with the value
849              and  not  include a space between them. For example "-j1008" and
850              not "-j 1008".  The job ID format is "job_id[_array_id]".   Per‐
851              formance  of  the command can be measurably improved for systems
852              with large numbers of jobs when a single job  ID  is  specified.
853              By  default,  this  field size will be limited to 64 bytes.  Use
854              the environment  variable  SLURM_BITSTR_LEN  to  specify  larger
855              field sizes.
856
857       --json Dump job information as JSON. All other formatting and filtering
858              arguments will be ignored.
859
860       -L, --licenses=<license_list>
861              Request jobs requesting or using one or more of  the  named  li‐
862              censes.   The license list consists of a comma separated list of
863              license names.
864
865       --local
866              Show only jobs local to this cluster. Ignore other  clusters  in
867              this federation (if any). Overrides --federation.
868
869       -l, --long
870              Report  more  of the available information for the selected jobs
871              or job steps, subject to any constraints specified.
872
873       --me   Equivalent to --user=<my username>.
874
875       -n, --name=<name_list>
876              Request jobs or job steps having one  of  the  specified  names.
877              The list consists of a comma separated list of job names.
878
879       --noconvert
880              Don't  convert  units from their original type (e.g. 2048M won't
881              be converted to 2G).
882
883       -w, --nodelist=<hostlist>
884              Report only on jobs allocated to the specified node or  list  of
885              nodes.   This  may either be the NodeName or NodeHostname as de‐
886              fined in  slurm.conf(5)  in  the  event  that  they  differ.   A
887              node_name of localhost is mapped to the current host name.
888
889       -h, --noheader
890              Do not print a header on the output.
891
892       -p, --partition=<part_list>
893              Specify  the  partitions of the jobs or steps to view. Accepts a
894              comma separated list of partition names.
895
896       -P, --priority
897              For pending jobs submitted to multiple partitions, list the  job
898              once per partition. In addition, if jobs are sorted by priority,
899              consider both the partition and job priority. This option can be
900              used to produce a list of pending jobs in the same order consid‐
901              ered for scheduling by Slurm with appropriate additional options
902              (e.g. "--sort=-p,i --states=PD").
903
904       -q, --qos=<qos_list>
905              Specify the QOSs of the jobs or steps to view. Accepts a comma
906              separated list of QOS names.
907
908       -R, --reservation=<reservation_name>
909              Specify the reservation of the jobs to view.
910
911       --sibling
912              Show all sibling jobs on a federated cluster. Implies  --federa‐
913              tion.
914
915       -S, --sort=<sort_list>
916              Specification  of the order in which records should be reported.
917              This uses the same field specification as  the  <output_format>.
918              The  long  format option "cluster" can also be used to sort jobs
919              or job steps by cluster name (e.g.  federated  jobs).   Multiple
920              sorts may be performed by listing multiple sort fields separated
921              by commas.  The field specifications may be preceded by  "+"  or
922              "-"  for  ascending (default) and descending order respectively.
923              For example, a sort value of "P,U" will sort the records by par‐
924              tition name then by user id.  The default value of sort for jobs
925              is "P,t,-p" (increasing partition name then within a given  par‐
926              tition  by  increasing  job state and then decreasing priority).
927              The default value of sort for job  steps  is  "P,i"  (increasing
928              partition  name then within a given partition by increasing step
929              id).
930
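       The multi-key behavior can be mimicked with sort(1) on plain records;
       a rough sketch of what the default job sort "P,t,-p" does, using
       invented records and alphabetical state order for illustration (squeue
       itself orders states by its internal state codes, not alphabetically):

```shell
# Columns: PARTITION STATE PRIORITY JOBID (invented sample data).
# Sort: partition ascending, state ascending, then priority descending.
printf '%s\n' \
    'debug R 120 17' \
    'batch PD 300 12' \
    'batch PD 900 11' \
    'batch R 500 10' \
| sort -k1,1 -k2,2 -k3,3nr
# batch PD 900 11
# batch PD 300 12
# batch R 500 10
# debug R 120 17
```
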
931       --start
932              Report the expected start time and resources to be allocated for
933              pending jobs in order of increasing start time.  This is equiva‐
934              lent to the following options: --format="%.18i  %.9P  %.8j  %.8u
935              %.2t   %.19S  %.6D %20Y %R", --sort=S and --states=PENDING.  Any
936              of these options may be explicitly changed as desired by combin‐
937              ing  the  --start option with other option values (e.g. to use a
938              different output format).  The expected start  time  of  pending
939              jobs  is only available if Slurm is configured  to  use  the
940              backfill scheduling plugin.
941
942       -t, --states=<state_list>
943              Specify the states of jobs to view.  Accepts a  comma  separated
944              list of state names or "all". If "all" is specified then jobs of
945              all states will be reported. If no state is specified then pend‐
946              ing,  running,  and  completing  jobs  are reported. See the JOB
947              STATE CODES section below for a list of valid states.  Both  ex‐
948              tended  and compact forms are valid.  Note the <state_list> sup‐
949              plied is case insensitive ("pd" and "PD" are equivalent).
950
951       -s, --steps
952              Specify the job steps to view.  This flag indicates that a comma
953              separated  list  of  job  steps to view follows without an equal
954              sign (see  examples).   The  job  step  format  is  "job_id[_ar‐
955              ray_id].step_id". Defaults to all job steps. Since this option's
956              argument is optional, for proper parsing the single  letter  op‐
957              tion must be followed immediately with the value and not include
958              a space  between  them.  For  example  "-s1008.0"  and  not  "-s
959              1008.0".
960
961       --usage
962              Print a brief help message listing the squeue options.
963
964       -u, --user=<user_list>
965              Request  jobs or job steps from a comma separated list of users.
966              The list can consist of user names or user id numbers.   Perfor‐
967              mance of the command can be measurably improved for systems with
968              large numbers of jobs when a single user is specified.
969
970       -v, --verbose
971              Report details of squeue's actions.
972
973       -V, --version
974              Print version information and exit.
975
976       --yaml Dump job information as YAML. All other formatting and filtering
977              arguments will be ignored.
978

JOB REASON CODES

980       These codes identify the reason that a job is waiting for execution.  A
981       job may be waiting for more than one reason, in which case only one  of
982       those reasons is displayed.
983
984       The  Reasons  listed  below  are some of the more common ones you might
985       see.  For a full list of Reason codes see  our  Resource  Limits  page:
986       <https://slurm.schedmd.com/resource_limits.html>
987
988
989       AssocGrp*Limit        The  job's  association  has reached an aggregate
990                             limit on some resource.
991
992       AssociationJobLimit   The job's association has reached its maximum job
993                             count.
994
995       AssocMax*Limit        The  job requests a resource that violates a per-
996                             job limit on the requested association.
997
998       AssociationResourceLimit
999                             The job's association has reached  some  resource
1000                             limit.
1001
1002       AssociationTimeLimit  The job's association has reached its time limit.
1003
1004       BadConstraints        The job's constraints cannot be satisfied.
1005
1006       BeginTime             The  job's  earliest  start time has not yet been
1007                             reached.
1008
1009       Cleaning              The job is being requeued and still  cleaning  up
1010                             from its previous execution.
1011
1012       Dependency            This job has a dependency on another job that has
1013                             not been satisfied.
1014
1015       DependencyNeverSatisfied
1016                             This job has a dependency  on  another  job  that
1017                             will never be satisfied.
1018
1019       FrontEndDown          No  front  end  node is available to execute this
1020                             job.
1021
1022       InactiveLimit         The job reached the system InactiveLimit.
1023
1024       InvalidAccount        The job's account is invalid.
1025
1026       InvalidQOS            The job's QOS is invalid.
1027
1028       JobHeldAdmin          The job is held by a system administrator.
1029
1030       JobHeldUser           The job is held by the user.
1031
1032       JobLaunchFailure      The job could not be launched.  This may  be  due
1033                             to  a  file system problem, invalid program name,
1034                             etc.
1035
1036       Licenses              The job is waiting for a license.
1037
1038       NodeDown              A node required by the job is down.
1039
1040       NonZeroExitCode       The job terminated with a non-zero exit code.
1041
1042       PartitionDown         The partition required by this job is in  a  DOWN
1043                             state.
1044
1045       PartitionInactive     The partition required by this job is in an Inac‐
1046                             tive state and not able to start jobs.
1047
1048       PartitionNodeLimit    The number of nodes required by this job is  out‐
1049                             side of its partition's current limits.  Can also
1050                             indicate that required nodes are DOWN or DRAINED.
1051
1052       PartitionTimeLimit    The job's time limit exceeds its partition's cur‐
1053                             rent time limit.
1054
1055       Priority              One  or  more higher priority jobs exist for this
1056                             partition or advanced reservation.
1057
1058       Prolog                The job's PrologSlurmctld program is still running.
1059
1060       QOSGrp*Limit          The job's QOS has reached an aggregate  limit  on
1061                             some resource.
1062
1063       QOSJobLimit           The job's QOS has reached its maximum job count.
1064
1065       QOSMax*Limit          The  job requests a resource that violates a per-
1066                             job limit on the requested QOS.
1067
1068       QOSResourceLimit      The job's QOS has reached some resource limit.
1069
1070       QOSTimeLimit          The job's QOS has reached its time limit.
1071
1072       QOSUsageThreshold     Required QOS threshold has been breached.
1073
1074       ReqNodeNotAvail       Some node specifically required by the job is not
1075                             currently  available.   The node may currently be
1076                             in use, reserved for another job, in an  advanced
1077                             reservation,  DOWN,  DRAINED,  or not responding.
1078                             Nodes which are DOWN, DRAINED, or not  responding
1079                             will  be identified as part of the job's "reason"
1080                             field as "UnavailableNodes". Such nodes will typ‐
1081                             ically  require  the intervention of a system ad‐
1082                             ministrator to make available.
1083
1084       Reservation           The job is waiting for its advanced  reserva‐
1085                             tion to become available.
1086
1087       Resources             The job is waiting for resources to become avail‐
1088                             able.
1089
1090       SystemFailure         Failure of the Slurm system, a file  system,  the
1091                             network, etc.
1092
1093       TimeLimit             The job exhausted its time limit.
1094
1095       WaitingForScheduling  No reason has been set for this job yet.  Waiting
1096                             for the scheduler to  determine  the  appropriate
1097                             reason.
1098

JOB STATE CODES

1100       Jobs  typically pass through several states in the course of their exe‐
1101       cution.  The typical states are PENDING, RUNNING,  SUSPENDED,  COMPLET‐
1102       ING, and COMPLETED.  An explanation of each state follows.
1103
1104
1105       BF  BOOT_FAIL       Job terminated due to launch failure, typically due
1106                           to a hardware failure (e.g. unable to boot the node
1107                           or block and the job cannot be requeued).
1108
1109       CA  CANCELLED       Job  was explicitly cancelled by the user or system
1110                           administrator.  The job may or may  not  have  been
1111                           initiated.
1112
1113       CD  COMPLETED       Job  has terminated all processes on all nodes with
1114                           an exit code of zero.
1115
1116       CF  CONFIGURING     Job has been allocated resources, but is waiting
1117                           for them to become ready for use (e.g. booting).
1118
1119       CG  COMPLETING      Job is in the process of completing. Some processes
1120                           on some nodes may still be active.
1121
1122       DL  DEADLINE        Job terminated on deadline.
1123
1124       F   FAILED          Job terminated with non-zero  exit  code  or  other
1125                           failure condition.
1126
1127       NF  NODE_FAIL       Job  terminated due to failure of one or more allo‐
1128                           cated nodes.
1129
1130       OOM OUT_OF_MEMORY   Job experienced out of memory error.
1131
1132       PD  PENDING         Job is awaiting resource allocation.
1133
1134       PR  PREEMPTED       Job terminated due to preemption.
1135
1136       R   RUNNING         Job currently has an allocation.
1137
1138       RD  RESV_DEL_HOLD   Job is being held after requested  reservation  was
1139                           deleted.
1140
1141       RF  REQUEUE_FED     Job is being requeued by a federation.
1142
1143       RH  REQUEUE_HOLD    Held job is being requeued.
1144
1145       RQ  REQUEUED        Completing job is being requeued.
1146
1147       RS  RESIZING        Job is about to change size.
1148
1149       RV  REVOKED         Sibling was removed from cluster due to other clus‐
1150                           ter starting the job.
1151
1152       SI  SIGNALING       Job is being signaled.
1153
1154       SE  SPECIAL_EXIT    The job was requeued in a special state. This state
1155                           can  be set by users, typically in EpilogSlurmctld,
1156                           if the job has terminated with  a  particular  exit
1157                           value.
1158
1159       SO  STAGE_OUT       Job is staging out files.
1160
1161       ST  STOPPED         Job  has  an  allocation,  but  execution  has been
1162                           stopped with the SIGSTOP signal.  CPUs have been
1163                           retained by this job.
1164
1165       S   SUSPENDED       Job  has an allocation, but execution has been sus‐
1166                           pended and CPUs have been released for other jobs.
1167
1168       TO  TIMEOUT         Job terminated upon reaching its time limit.
1169

PERFORMANCE

1171       Executing squeue sends a remote procedure call to slurmctld.  If  too
1172       many  such calls from squeue or other Slurm client commands arrive at
1173       the slurmctld daemon at once, the daemon's performance can  degrade,
1174       possibly resulting in a denial of service.
1176
1177       Do not run squeue or other Slurm client commands that send remote  pro‐
1178       cedure  calls  to  slurmctld  from loops in shell scripts or other pro‐
1179       grams. Ensure that programs limit calls to squeue to the minimum neces‐
1180       sary for the information you are trying to gather.
1181
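       One way to honor this in scripts that poll job state is a small cache
       wrapper, so repeated invocations within a short window reuse a single
       squeue call; a sketch, assuming GNU stat(1) (the cache file name and
       window are arbitrary choices):

```shell
# cache_run CMD...: run CMD at most once per CACHE_SECS seconds and
# serve the cached output otherwise, limiting RPC load on slurmctld.
CACHE_FILE="${TMPDIR:-/tmp}/squeue.cache.$$"
CACHE_SECS=60

cache_run() {
    now=$(date +%s)
    if [ -f "$CACHE_FILE" ]; then
        # age of the cache file in seconds (GNU stat; BSD uses stat -f %m)
        age=$(( now - $(stat -c %Y "$CACHE_FILE") ))
    else
        age=$(( CACHE_SECS + 1 ))
    fi
    if [ "$age" -gt "$CACHE_SECS" ]; then
        "$@" > "$CACHE_FILE"
    fi
    cat "$CACHE_FILE"
}

# a script would then call, e.g.:  cache_run squeue --me --noheader
```
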
1182

ENVIRONMENT VARIABLES

1184       Some  squeue  options may be set via environment variables. These envi‐
1185       ronment variables, along with their corresponding options,  are  listed
1186       below.  (Note:  Command  line  options  will always override these set‐
1187       tings.)
1188
1189
1190       SLURM_BITSTR_LEN    Specifies the string length to be used for  holding
1191                           a  job  array's  task  ID  expression.  The default
1192                           value is 64 bytes.  A value of  0  will  print  the
1193                           full  expression  with any length required.  Larger
1194                           values may adversely impact the application perfor‐
1195                           mance.
1196
1197       SLURM_CLUSTERS      Same as --clusters
1198
1199       SLURM_CONF          The location of the Slurm configuration file.
1200
1201       SLURM_DEBUG_FLAGS   Specify  debug  flags  for  squeue  to use. See De‐
1202                           bugFlags in the slurm.conf(5) man page for  a  full
1203                           list  of  flags.  The  environment  variable  takes
1204                           precedence over the setting in the slurm.conf.
1205
1206       SLURM_TIME_FORMAT   Specify the format used to report  time  stamps.  A
1207                           value  of  standard,  the  default value, generates
1208                           output            in            the            form
1209                           "year-month-dateThour:minute:second".   A  value of
1210                           relative returns only "hour:minute:second" for the
1211                           current  day.   For other dates in the current year
1212                           it prints the "hour:minute"  preceded  by  "Tomorr"
1213                           (tomorrow),  "Ystday"  (yesterday), the name of the
1214                           day for the coming week (e.g. "Mon", "Tue",  etc.),
1215                           otherwise  the  date  (e.g.  "25  Apr").  For other
1216                           years it returns a date, month, and year without a
1217                           time  (e.g.   "6 Jun 2012"). All of the time stamps
1218                           use a 24 hour format.
1219
1220                           A valid strftime() format can  also  be  specified.
1221                           For example, a value of "%a %T" will report the day
1222                           of the week and a time stamp (e.g. "Mon 12:34:56").
1223
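       A strftime() pattern can be previewed on a fixed timestamp with GNU
       date(1) before exporting it (the timestamp below is arbitrary; the -d
       option and the C locale for English day names are GNU/locale
       assumptions):

```shell
# Preview the pattern, then export it for subsequent squeue invocations.
LC_ALL=C date -d '2024-01-01 12:34:56' +'%a %T'    # Mon 12:34:56
export SLURM_TIME_FORMAT='%a %T'
```
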
1224       SQUEUE_ACCOUNT      -A <account_list>, --account=<account_list>
1225
1226       SQUEUE_ALL          -a, --all
1227
1228       SQUEUE_ARRAY        -r, --array
1229
1230       SQUEUE_NAMES        --name=<name_list>
1231
1232       SQUEUE_FEDERATION   --federation
1233
1234       SQUEUE_FORMAT       -o <output_format>, --format=<output_format>
1235
1236       SQUEUE_FORMAT2      -O <output_format>, --Format=<output_format>
1237
1238       SQUEUE_LICENSES     -L <license_list>, --licenses=<license_list>
1239
1240       SQUEUE_LOCAL        --local
1241
1242       SQUEUE_PARTITION    -p <part_list>, --partition=<part_list>
1243
1244       SQUEUE_PRIORITY     -P, --priority
1245
1246       SQUEUE_QOS          -q <qos_list>, --qos=<qos_list>
1247
1248       SQUEUE_SIBLING      --sibling
1249
1250       SQUEUE_SORT         -S <sort_list>, --sort=<sort_list>
1251
1252       SQUEUE_STATES       -t <state_list>, --states=<state_list>
1253
1254       SQUEUE_USERS        -u <user_list>, --user=<user_list>
1255

EXAMPLES

1257       Print the jobs scheduled in the debug partition and  in  the  COMPLETED
1258       state in the format with six right justified digits for the job id fol‐
1259       lowed by the priority with an arbitrary field size:
1260
1261              $ squeue -p debug -t COMPLETED -o "%.6i %p"
1262               JOBID PRIORITY
1263               65543 99993
1264               65544 99992
1265               65545 99991
1266
1267
1268       Print the job steps in the debug partition sorted by user:
1269
1270              $ squeue -s -p debug -S u
1271                STEPID        NAME PARTITION     USER      TIME NODELIST
1272               65552.1       test1     debug    alice      0:23 dev[1-4]
1273               65562.2     big_run     debug      bob      0:18 dev22
1274               65550.1      param1     debug  candice   1:43:21 dev[6-12]
1275
1276
1277       Print information only about jobs 12345, 12346 and 12348:
1278
1279              $ squeue --jobs 12345,12346,12348
1280               JOBID PARTITION NAME USER ST  TIME  NODES NODELIST(REASON)
1281               12345     debug job1 dave  R   0:21     4 dev[9-12]
1282               12346     debug job2 dave PD   0:00     8 (Resources)
1283               12348     debug job3 ed   PD   0:00     4 (Priority)
1284
1285
1286       Print information only about job step 65552.1:
1287
1288              $ squeue --steps 65552.1
1289                STEPID     NAME PARTITION    USER    TIME  NODELIST
1290               65552.1    test2     debug   alice   12:49  dev[1-4]
1291
1292

COPYING

1294       Copyright (C) 2002-2007 The Regents of the  University  of  California.
1295       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
1296       Copyright (C) 2008-2010 Lawrence Livermore National Security.
1297       Copyright (C) 2010-2022 SchedMD LLC.
1298
1299       This  file  is  part  of Slurm, a resource management program.  For de‐
1300       tails, see <https://slurm.schedmd.com/>.
1301
1302       Slurm is free software; you can redistribute it and/or modify it  under
1303       the  terms  of  the GNU General Public License as published by the Free
1304       Software Foundation; either version 2 of the License, or (at  your  op‐
1305       tion) any later version.
1306
1307       Slurm  is  distributed  in the hope that it will be useful, but WITHOUT
1308       ANY WARRANTY; without even the implied warranty of  MERCHANTABILITY  or
1309       FITNESS  FOR  A PARTICULAR PURPOSE.  See the GNU General Public License
1310       for more details.
1311

SEE ALSO

1313       scancel(1), scontrol(1), sinfo(1),  srun(1),  slurm_load_ctl_conf  (3),
1314       slurm_load_jobs (3), slurm_load_node (3), slurm_load_partitions (3)
1315
1316
1317
1318October 2022                    Slurm Commands                       squeue(1)