squeue(1)                       Slurm Commands                       squeue(1)



NAME

       squeue - view information about jobs located in the Slurm scheduling
       queue.



SYNOPSIS

       squeue [OPTIONS...]



DESCRIPTION

       squeue is used to view job and job step information for jobs managed
       by Slurm.



OPTIONS

       -A, --account=<account_list>
              Specify the accounts of the jobs to view. Accepts a comma
              separated list of account names. This has no effect when
              listing job steps.

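As a sketch of this option's use (the account names here are hypothetical, and a running Slurm cluster is assumed):

```shell
# List only jobs charged to the "physics" or "chemistry" accounts.
squeue -A physics,chemistry
```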
       -a, --all
              Display information about jobs and job steps in all
              partitions. This causes information to be displayed about
              partitions that are configured as hidden, partitions that are
              unavailable to a user's group, and federated jobs that are in
              a "revoked" state.

       -r, --array
              Display one job array element per line. Without this option,
              the display will be optimized for use with job arrays (pending
              job array elements will be combined on one line of output with
              the array index values printed using a regular expression).

       --array-unique
              Display one unique pending job array element per line. Without
              this option, the pending job array elements will be grouped
              into the master array job to optimize the display. This can
              also be set with the environment variable SQUEUE_ARRAY_UNIQUE.

       -M, --clusters=<cluster_name>
              Clusters to issue commands to. Multiple cluster names may be
              comma separated. A value of 'all' will query all clusters.
              This option implicitly sets the --local option.

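For example (the specific cluster names are hypothetical; a live Slurm database is assumed):

```shell
# Query all clusters known to this Slurm database.
squeue -M all

# Query two specific clusters.
squeue -M cluster1,cluster2
```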
       --federation
              Show jobs from the federation if a member of one.

       -o, --format=<output_format>
              Specify the information to be displayed, its size and position
              (right or left justified). Also see the -O,
              --Format=<output_format> option described below (which
              supports less flexibility in formatting, but supports access
              to all fields). If the command is executed in a federated
              cluster environment and information about more than one
              cluster is to be displayed and the -h, --noheader option is
              used, then the cluster name will be displayed before the
              default output formats shown below.

              The default formats with various options are:

              default        "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

              -l, --long     "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"

              -s, --steps    "%.15i %.8j %.9P %.8u %.9M %N"

       The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,
                        whatever is needed to print the information will be
                        used.

                 .      Indicates the output should be right justified and
                        size must be specified. By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

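The size and "." modifiers behave like printf-style field widths, which can be illustrated locally without squeue itself (the partition name "debug" below is just a placeholder):

```shell
# With ".", as in "%.9P": right justified within a 9-column minimum width.
printf '[%9s]\n' "debug"
# Without ".", as in "%9P": left justified within the same width.
printf '[%-9s]\n' "debug"
```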
       Note that many of these type specifications are valid only for jobs
       while others are valid only for job steps. Valid type specifications
       include:

              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    Account associated with the job. (Valid for jobs only)

              %A    Number of tasks created by a job step. This reports the
                    value of the srun --ntasks option. (Valid for job steps
                    only)

              %A    Job id. This will have a unique value for each element
                    of job arrays. (Valid for jobs only)

              %B    Executing (batch) host. For an allocated session, this
                    is the host on which the session is executing (i.e. the
                    node from which the srun or the salloc command was
                    executed). For a batch job, this is the node executing
                    the batch script. In the case of a typical Linux
                    cluster, this would be the compute node zero of the
                    allocation. In the case of a Cray ALPS system, this
                    would be the front-end host whose slurmd daemon executes
                    the job script.

              %c    Minimum number of CPUs (processors) per node requested
                    by the job. This reports the value of the srun --mincpus
                    option with a default value of zero. (Valid for jobs
                    only)

              %C    Number of CPUs (processors) requested by the job or
                    allocated to it if already running. As a job is
                    completing this number will reflect the current number
                    of CPUs allocated. (Valid for jobs only)

              %d    Minimum size of temporary disk space (in MB) requested
                    by the job. (Valid for jobs only)

              %D    Number of nodes allocated to the job or the minimum
                    number of nodes required by a pending job. The actual
                    number of nodes allocated to a pending job may exceed
                    this number if the job specified a node range count
                    (e.g. minimum and maximum node counts) or the job
                    specifies a processor count instead of a node count. As
                    a job is completing this number will reflect the current
                    number of nodes allocated. (Valid for jobs only)

              %e    Time at which the job ended or is expected to end (based
                    upon its time limit). (Valid for jobs only)

              %E    Job dependencies remaining. This job will not begin
                    execution until these dependent jobs complete. In the
                    case of a job that cannot run due to job dependencies
                    never being satisfied, the full original job dependency
                    specification will be reported. A value of NULL implies
                    this job has no dependencies. (Valid for jobs only)

              %f    Features required by the job. (Valid for jobs only)

              %F    Job array's job ID. This is the base job ID. For
                    non-array jobs, this is the job ID. (Valid for jobs
                    only)

              %g    Group name of the job. (Valid for jobs only)

              %G    Group ID of the job. (Valid for jobs only)

              %h    Can the compute resources allocated to the job be
                    oversubscribed by other jobs. The resources to be
                    oversubscribed can be nodes, sockets, cores, or
                    hyperthreads depending upon configuration. The value
                    will be "YES" if the job was submitted with the
                    oversubscribe option or the partition is configured with
                    OverSubscribe=Force, "NO" if the job requires exclusive
                    node access, "USER" if the allocated compute nodes are
                    dedicated to a single user, "MCS" if the allocated
                    compute nodes are dedicated to a single security class
                    (see the MCSPlugin and MCSParameters configuration
                    parameters for more information), and "OK" otherwise
                    (typically allocated dedicated CPUs). (Valid for jobs
                    only)

              %H    Number of sockets per node requested by the job. This
                    reports the value of the srun --sockets-per-node option.
                    When --sockets-per-node has not been set, "*" is
                    displayed. (Valid for jobs only)

              %i    Job or job step id. In the case of job arrays, the job
                    ID format will be of the form "<base_job_id>_<index>".
                    By default, the job array index field size will be
                    limited to 64 bytes. Use the environment variable
                    SLURM_BITSTR_LEN to specify larger field sizes. (Valid
                    for jobs and job steps) In the case of heterogeneous job
                    allocations, the job ID format will be of the form "#+#"
                    where the first number is the "heterogeneous job leader"
                    and the second number the zero origin offset for each
                    component of the job.

              %I    Number of cores per socket requested by the job. This
                    reports the value of the srun --cores-per-socket option.
                    When --cores-per-socket has not been set, "*" is
                    displayed. (Valid for jobs only)

              %j    Job or job step name. (Valid for jobs and job steps)

              %J    Number of threads per core requested by the job. This
                    reports the value of the srun --threads-per-core option.
                    When --threads-per-core has not been set, "*" is
                    displayed. (Valid for jobs only)

              %k    Comment associated with the job. (Valid for jobs only)

              %K    Job array index. By default, this field size will be
                    limited to 64 bytes. Use the environment variable
                    SLURM_BITSTR_LEN to specify larger field sizes. (Valid
                    for jobs only)

              %l    Time limit of the job or job step in
                    days-hours:minutes:seconds. The value may be "NOT_SET"
                    if not yet established or "UNLIMITED" for no limit.
                    (Valid for jobs and job steps)

              %L    Time left for the job to execute in
                    days-hours:minutes:seconds. This value is calculated by
                    subtracting the job's time used from its time limit.
                    The value may be "NOT_SET" if not yet established or
                    "UNLIMITED" for no limit. (Valid for jobs only)

              %m    Minimum size of memory (in MB) requested by the job.
                    (Valid for jobs only)

              %M    Time used by the job or job step in
                    days-hours:minutes:seconds. The days and hours are
                    printed only as needed. For job steps this field shows
                    the elapsed time since execution began and thus will be
                    inaccurate for job steps which have been suspended.
                    Clock skew between nodes in the cluster will cause the
                    time to be inaccurate. If the time is obviously wrong
                    (e.g. negative), it displays as "INVALID". (Valid for
                    jobs and job steps)

              %n    List of node names explicitly requested by the job.
                    (Valid for jobs only)

              %N    List of nodes allocated to the job or job step. In the
                    case of a COMPLETING job, the list of nodes will
                    comprise only those nodes that have not yet been
                    returned to service. (Valid for jobs and job steps)

              %o    The command to be executed.

              %O    Are contiguous nodes requested by the job. (Valid for
                    jobs only)

              %p    Priority of the job (converted to a floating point
                    number between 0.0 and 1.0). Also see %Q. (Valid for
                    jobs only)

              %P    Partition of the job or job step. (Valid for jobs and
                    job steps)

              %q    Quality of service associated with the job. (Valid for
                    jobs only)

              %Q    Priority of the job (generally a very large unsigned
                    integer). Also see %p. (Valid for jobs only)

              %r    The reason a job is in its current state. See the JOB
                    REASON CODES section below for more information. (Valid
                    for jobs only)

              %R    For pending jobs: the reason a job is waiting for
                    execution is printed within parenthesis. For terminated
                    jobs with failure: an explanation as to why the job
                    failed is printed within parenthesis. For all other job
                    states: the list of allocated nodes. See the JOB REASON
                    CODES section below for more information. (Valid for
                    jobs only)

              %s    Node selection plugin specific data for a job. Possible
                    data includes: Geometry requirement of resource
                    allocation (X,Y,Z dimensions), Connection type (TORUS,
                    MESH, or NAV == torus else mesh), Permit rotation of
                    geometry (yes or no), Node use (VIRTUAL or COPROCESSOR),
                    etc. (Valid for jobs only)

              %S    Actual or expected start time of the job or job step.
                    (Valid for jobs and job steps)

              %t    Job state in compact form. See the JOB STATE CODES
                    section below for a list of possible states. (Valid for
                    jobs only)

              %T    Job state in extended form. See the JOB STATE CODES
                    section below for a list of possible states. (Valid for
                    jobs only)

              %u    User name for a job or job step. (Valid for jobs and
                    job steps)

              %U    User ID for a job or job step. (Valid for jobs and job
                    steps)

              %v    Reservation for the job. (Valid for jobs only)

              %V    The job's submission time.

              %w    Workload Characterization Key (wckey). (Valid for jobs
                    only)

              %W    Licenses reserved for the job. (Valid for jobs only)

              %x    List of node names explicitly excluded by the job.
                    (Valid for jobs only)

              %X    Count of cores reserved on each node for system use
                    (core specialization). (Valid for jobs only)

              %y    Nice value (adjustment to a job's scheduling priority).
                    (Valid for jobs only)

              %Y    For pending jobs, a list of the nodes expected to be
                    used when the job is started.

              %z    Number of requested sockets, cores, and threads (S:C:T)
                    per node for the job. When (S:C:T) has not been set,
                    "*" is displayed. (Valid for jobs only)

              %Z    The job's working directory.

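As a concrete example, the default job listing can be reproduced explicitly and then extended with an extra column (using the %q type documented above; a running Slurm cluster is assumed):

```shell
# Reproduce the default output format.
squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

# The same listing with a quality-of-service column appended.
squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %q %R"
```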
       -O, --Format=<output_format>
              Specify the information to be displayed. Also see the -o,
              --format=<output_format> option described above (which
              supports greater flexibility in formatting, but does not
              support access to all fields because we ran out of letters).
              Requests a comma separated list of job information to be
              displayed.

              The format of each field is "type[:[.][size][suffix]]"

                 size   Minimum field size. If no size is specified, 20
                        characters will be allocated to print the
                        information.

                 .      Indicates the output should be right justified and
                        size must be specified. By default output is left
                        justified.

                 suffix Arbitrary string to append to the end of the field.

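A sketch of this syntax, built from field names documented in the list that follows (the particular widths are arbitrary; ":.12" right-justifies the job ID in 12 columns):

```shell
# Named fields with explicit widths, comma separated.
squeue -O "JobID:.12,Partition:10,Name:24,GroupName:10,Reason:20"
```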
       Note that many of these type specifications are valid only for jobs
       while others are valid only for job steps. Valid type specifications
       include:

              Account
                     Print the account associated with the job. (Valid for
                     jobs only)

              AccrueTime
                     Print the accrue time associated with the job. (Valid
                     for jobs only)

              admin_comment
                     Administrator comment associated with the job. (Valid
                     for jobs only)

              AllocNodes
                     Print the nodes allocated to the job. (Valid for jobs
                     only)

              AllocSID
                     Print the session ID used to submit the job. (Valid
                     for jobs only)

              ArrayJobID
                     Prints the job ID of the job array. (Valid for jobs
                     and job steps)

              ArrayTaskID
                     Prints the task ID of the job array. (Valid for jobs
                     and job steps)

              AssocID
                     Prints the ID of the job association. (Valid for jobs
                     only)

              BatchFlag
                     Prints whether the batch flag has been set. (Valid for
                     jobs only)

              BatchHost
                     Executing (batch) host. For an allocated session, this
                     is the host on which the session is executing (i.e.
                     the node from which the srun or the salloc command was
                     executed). For a batch job, this is the node executing
                     the batch script. In the case of a typical Linux
                     cluster, this would be the compute node zero of the
                     allocation. In the case of a Cray ALPS system, this
                     would be the front-end host whose slurmd daemon
                     executes the job script. (Valid for jobs only)

              BoardsPerNode
                     Prints the number of boards per node allocated to the
                     job. (Valid for jobs only)

              BurstBuffer
                     Burst Buffer specification. (Valid for jobs only)

              BurstBufferState
                     Burst Buffer state. (Valid for jobs only)

              Cluster
                     Name of the cluster that is running the job or job
                     step.

              ClusterFeature
                     Cluster features required by the job. (Valid for jobs
                     only)

              Command
                     The command to be executed. (Valid for jobs only)

              Comment
                     Comment associated with the job. (Valid for jobs only)

              Contiguous
                     Are contiguous nodes requested by the job. (Valid for
                     jobs only)

              Container
                     OCI container bundle path.

              Cores  Number of cores per socket requested by the job. This
                     reports the value of the srun --cores-per-socket
                     option. When --cores-per-socket has not been set, "*"
                     is displayed. (Valid for jobs only)

              CoreSpec
                     Count of cores reserved on each node for system use
                     (core specialization). (Valid for jobs only)

              CPUFreq
                     Prints the frequency of the allocated CPUs. (Valid for
                     job steps only)

              cpus-per-task
                     Prints the number of CPUs per task allocated to the
                     job. (Valid for jobs only)

              cpus-per-tres
                     Print the number of CPUs required per trackable
                     resource allocated to the job or job step.

              Deadline
                     Prints the deadline assigned to the job. (Valid for
                     jobs only)

              DelayBoot
                     Delay boot time. (Valid for jobs only)

              Dependency
                     Job dependencies remaining. This job will not begin
                     execution until these dependent jobs complete. In the
                     case of a job that cannot run due to job dependencies
                     never being satisfied, the full original job
                     dependency specification will be reported. A value of
                     NULL implies this job has no dependencies. (Valid for
                     jobs only)

              DerivedEC
                     Derived exit code for the job, which is the highest
                     exit code of any job step. (Valid for jobs only)

              EligibleTime
                     Time the job is eligible for running. (Valid for jobs
                     only)

              EndTime
                     The time of job termination, actual or expected.
                     (Valid for jobs only)

              exit_code
                     The exit code for the job. (Valid for jobs only)

              Feature
                     Features required by the job. (Valid for jobs only)

              GroupID
                     Group ID of the job. (Valid for jobs only)

              GroupName
                     Group name of the job. (Valid for jobs only)

              HetJobID
                     Job ID of the heterogeneous job leader.

              HetJobIDSet
                     Expression identifying all component job IDs within a
                     heterogeneous job.

              HetJobOffset
                     Zero origin offset within a collection of
                     heterogeneous job components.

              JobArrayID
                     Job array's job ID. This is the base job ID. For
                     non-array jobs, this is the job ID. (Valid for jobs
                     only)

              JobID  Job ID. This will have a unique value for each element
                     of job arrays and each component of heterogeneous
                     jobs. (Valid for jobs only)

              LastSchedEval
                     Prints the last time the job was evaluated for
                     scheduling. (Valid for jobs only)

              Licenses
                     Licenses reserved for the job. (Valid for jobs only)

              MaxCPUs
                     Prints the max number of CPUs allocated to the job.
                     (Valid for jobs only)

              MaxNodes
                     Prints the max number of nodes allocated to the job.
                     (Valid for jobs only)

              MCSLabel
                     Prints the MCS_label of the job. (Valid for jobs only)

              mem-per-tres
                     Print the memory (in MB) required per trackable
                     resource allocated to the job or job step.

              MinCpus
                     Minimum number of CPUs (processors) per node requested
                     by the job. This reports the value of the srun
                     --mincpus option with a default value of zero. (Valid
                     for jobs only)

              MinMemory
                     Minimum size of memory (in MB) requested by the job.
                     (Valid for jobs only)

              MinTime
                     Minimum time limit of the job. (Valid for jobs only)

              MinTmpDisk
                     Minimum size of temporary disk space (in MB) requested
                     by the job. (Valid for jobs only)

              Name   Job or job step name. (Valid for jobs and job steps)

              Network
                     The network that the job is running on. (Valid for
                     jobs and job steps)

              Nice   Nice value (adjustment to a job's scheduling
                     priority). (Valid for jobs only)

              NodeList
                     List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will
                     comprise only those nodes that have not yet been
                     returned to service. (Valid for jobs only)

              Nodes  List of nodes allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will
                     comprise only those nodes that have not yet been
                     returned to service. (Valid for job steps only)

              NTPerBoard
                     The number of tasks per board allocated to the job.
                     (Valid for jobs only)

              NTPerCore
                     The number of tasks per core allocated to the job.
                     (Valid for jobs only)

              NTPerNode
                     The number of tasks per node allocated to the job.
                     (Valid for jobs only)

              NTPerSocket
                     The number of tasks per socket allocated to the job.
                     (Valid for jobs only)

              NumCPUs
                     Number of CPUs (processors) requested by the job or
                     allocated to it if already running. As a job is
                     completing, this number will reflect the current
                     number of CPUs allocated. (Valid for jobs and job
                     steps)

              NumNodes
                     Number of nodes allocated to the job or the minimum
                     number of nodes required by a pending job. The actual
                     number of nodes allocated to a pending job may exceed
                     this number if the job specified a node range count
                     (e.g. minimum and maximum node counts) or the job
                     specifies a processor count instead of a node count.
                     As a job is completing this number will reflect the
                     current number of nodes allocated. (Valid for jobs
                     only)

              NumTasks
                     Number of tasks requested by a job or job step. This
                     reports the value of the --ntasks option. (Valid for
                     jobs and job steps)

              Origin Cluster name where federated job originated from.
                     (Valid for federated jobs only)

              OriginRaw
                     Cluster ID where federated job originated from.
                     (Valid for federated jobs only)

              OverSubscribe
                     Can the compute resources allocated to the job be
                     oversubscribed by other jobs. The resources to be
                     oversubscribed can be nodes, sockets, cores, or
                     hyperthreads depending upon configuration. The value
                     will be "YES" if the job was submitted with the
                     oversubscribe option or the partition is configured
                     with OverSubscribe=Force, "NO" if the job requires
                     exclusive node access, "USER" if the allocated compute
                     nodes are dedicated to a single user, "MCS" if the
                     allocated compute nodes are dedicated to a single
                     security class (see the MCSPlugin and MCSParameters
                     configuration parameters for more information), and
                     "OK" otherwise (typically allocated dedicated CPUs).
                     (Valid for jobs only)

              Partition
                     Partition of the job or job step. (Valid for jobs and
                     job steps)

              PendingTime
                     The time (in seconds) between start time and submit
                     time of the job. If the job has not started yet, then
                     the time (in seconds) between now and the submit time
                     of the job. (Valid for jobs only)

              PreemptTime
                     The preempt time for the job. (Valid for jobs only)

              Priority
                     Priority of the job (converted to a floating point
                     number between 0.0 and 1.0). Also see PriorityLong.
                     (Valid for jobs only)

              PriorityLong
                     Priority of the job (generally a very large unsigned
                     integer). Also see Priority. (Valid for jobs only)

              Profile
                     Profile of the job. (Valid for jobs only)

              QOS    Quality of service associated with the job. (Valid
                     for jobs only)

              Reason The reason a job is in its current state. See the JOB
                     REASON CODES section below for more information.
                     (Valid for jobs only)

              ReasonList
                     For pending jobs: the reason a job is waiting for
                     execution is printed within parenthesis. For
                     terminated jobs with failure: an explanation as to why
                     the job failed is printed within parenthesis. For all
                     other job states: the list of allocated nodes. See the
                     JOB REASON CODES section below for more information.
                     (Valid for jobs only)

              Reboot Indicates if the allocated nodes should be rebooted
                     before starting the job. (Valid for jobs only)

              ReqNodes
                     List of node names explicitly requested by the job.
                     (Valid for jobs only)

              ReqSwitch
                     The maximum number of switches requested by the job.
                     (Valid for jobs only)

              Requeue
                     Prints whether the job will be requeued on failure.
                     (Valid for jobs only)

              Reservation
                     Reservation for the job. (Valid for jobs only)

              ResizeTime
                     The time of the job's latest size change. (Valid for
                     jobs only)

              RestartCnt
                     The number of restarts for the job. (Valid for jobs
                     only)

              ResvPort
                     Reserved ports of the job. (Valid for job steps only)

              SchedNodes
                     For pending jobs, a list of the nodes expected to be
                     used when the job is started. (Valid for jobs only)

              SCT    Number of requested sockets, cores, and threads
                     (S:C:T) per node for the job. When (S:C:T) has not
                     been set, "*" is displayed. (Valid for jobs only)

              SelectJobInfo
                     Node selection plugin specific data for a job.
                     Possible data includes: Geometry requirement of
                     resource allocation (X,Y,Z dimensions), Connection
                     type (TORUS, MESH, or NAV == torus else mesh), Permit
                     rotation of geometry (yes or no), Node use (VIRTUAL or
                     COPROCESSOR), etc. (Valid for jobs only)

              SiblingsActive
                     Cluster names of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsActiveRaw
                     Cluster IDs of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsViable
                     Cluster names of where federated sibling jobs are viable
696                     to run.  (Valid for federated jobs only)
697
              SiblingsViableRaw
                     Cluster IDs of where federated sibling jobs are viable
                     to run.  (Valid for federated jobs only)
701
702              Sockets
703                     Number  of  sockets  per node requested by the job.  This
704                     reports the value of the srun --sockets-per-node  option.
705                     When  --sockets-per-node  has  not  been set, "*" is dis‐
706                     played.  (Valid for jobs only)
707
708              SPerBoard
709                     Number of sockets per board allocated to the job.  (Valid
710                     for jobs only)
711
712              StartTime
713                     Actual  or  expected  start  time of the job or job step.
714                     (Valid for jobs and job steps)
715
716              State  Job state in extended form.  See the JOB STATE CODES sec‐
717                     tion  below  for  a  list of possible states.  (Valid for
718                     jobs only)
719
720              StateCompact
721                     Job state in compact form.  See the JOB STATE CODES  sec‐
722                     tion  below  for  a  list of possible states.  (Valid for
723                     jobs only)
724
              STDERR The path to which the job's standard error is directed.
                     (Valid for jobs only)

              STDIN  The path from which the job's standard input is read.
                     (Valid for jobs only)

              STDOUT The path to which the job's standard output is directed.
                     (Valid for jobs only)
732
              StepID Job or job step ID.  In the case of job arrays, the job
                     ID format will be of the form "<base_job_id>_<index>".
                     (Valid for job steps only)
736
737              StepName
738                     Job step name.  (Valid for job steps only)
739
740              StepState
741                     The state of the job step.  (Valid for job steps only)
742
              SubmitTime
                     The time at which the job was submitted.  (Valid for
                     jobs only)
746
747              system_comment
748                     System  comment associated with the job.  (Valid for jobs
749                     only)
750
751              Threads
752                     Number of threads per core requested by  the  job.   This
753                     reports  the value of the srun --threads-per-core option.
754                     When --threads-per-core has not been  set,  "*"  is  dis‐
755                     played.  (Valid for jobs only)
756
757              TimeLeft
758                     Time  left  for  the  job  to  execute in days-hours:min‐
759                     utes:seconds.  This value is  calculated  by  subtracting
760                     the  job's  time used from its time limit.  The value may
761                     be "NOT_SET" if not yet established or "UNLIMITED" for no
762                     limit.  (Valid for jobs only)
763
764              TimeLimit
765                     Timelimit  for  the job or job step.  (Valid for jobs and
766                     job steps)
767
768              TimeUsed
769                     Time used by the  job  or  job  step  in  days-hours:min‐
770                     utes:seconds.   The  days  and  hours are printed only as
771                     needed.  For job steps this field shows the elapsed  time
772                     since execution began and thus will be inaccurate for job
773                     steps which have  been  suspended.   Clock  skew  between
774                     nodes  in  the  cluster will cause the time to be inaccu‐
775                     rate.  If the time is obviously wrong (e.g. negative), it
776                     displays as "INVALID".  (Valid for jobs and job steps)
777
778              tres-alloc
779                     Print  the  trackable  resources  allocated to the job if
780                     running.  If not running, then print  the  trackable  re‐
781                     sources requested by the job.
782
783              tres-bind
784                     Print  the  trackable resources task binding requested by
785                     the job or job step.
786
787              tres-freq
788                     Print the trackable resources  frequencies  requested  by
789                     the job or job step.
790
791              tres-per-job
792                     Print the trackable resources requested by the job.
793
794              tres-per-node
795                     Print  the  trackable resources per node requested by the
796                     job or job step.
797
798              tres-per-socket
799                     Print the trackable resources per socket requested by the
800                     job or job step.
801
802              tres-per-step
803                     Print the trackable resources requested by the job step.
804
805              tres-per-task
806                     Print  the  trackable resources per task requested by the
807                     job or job step.
808
809              UserID User ID for a job or job step.  (Valid for jobs  and  job
810                     steps)
811
812              UserName
813                     User name for a job or job step.  (Valid for jobs and job
814                     steps)
815
816              Wait4Switch
817                     The amount of time to wait  for  the  desired  number  of
818                     switches.  (Valid for jobs only)
819
820              WCKey  Workload  Characterization  Key (wckey).  (Valid for jobs
821                     only)
822
823              WorkDir
824                     The job's working directory.  (Valid for jobs only)
825
       --help Print a help message describing all squeue options.
827
       --hide Do not display information about jobs and job steps in hidden
              partitions.  This is the default behavior: partitions that are
              configured as hidden or are unavailable to the user's group are
              not displayed.
832
833       -i, --iterate=<seconds>
834              Repeatedly  gather  and  report the requested information at the
835              interval specified (in seconds).   By  default,  prints  a  time
836              stamp with the header.
837
838       -j, --jobs=<job_id_list>
839              Requests a comma separated list of job IDs to display.  Defaults
840              to all jobs.  The --jobs=<job_id_list> option  may  be  used  in
841              conjunction  with  the  --steps option to print step information
842              about specific jobs.  Note: If a list of job  IDs  is  provided,
843              the  jobs  are  displayed even if they are on hidden partitions.
844              Since this option's argument is optional, for proper parsing the
845              single letter option must be followed immediately with the value
846              and not include a space between them. For example  "-j1008"  and
847              not  "-j 1008".  The job ID format is "job_id[_array_id]".  Per‐
848              formance of the command can be measurably improved  for  systems
849              with  large  numbers  of jobs when a single job ID is specified.
850              By default, this field size will be limited to  64  bytes.   Use
851              the  environment  variable  SLURM_BITSTR_LEN  to  specify larger
852              field sizes.
853
854       --json Dump job information as JSON. All other formatting and filtering
855              arguments will be ignored.
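
       The JSON dump is convenient for post-processing with standard tools.
       The sketch below extracts job IDs with python3; a sample payload
       stands in for `squeue --json` output, and the field names shown are
       illustrative only, since the actual schema depends on the Slurm
       version.

```shell
# Sample payload standing in for `squeue --json` output; the real
# schema varies by Slurm version, so treat the field names as
# illustrative assumptions.
sample='{"jobs":[{"job_id":12345},{"job_id":12346}]}'
ids=$(printf '%s' "$sample" |
    python3 -c 'import json, sys
for job in json.load(sys.stdin)["jobs"]:
    print(job["job_id"])')
echo "$ids"
# On a live cluster: squeue --json | python3 -c '...'
```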
856
857       -L, --licenses=<license_list>
              Request jobs that request or use one or more of the named
              licenses.  The license list consists of a comma separated list
              of license names.
861
862       --local
863              Show  only  jobs local to this cluster. Ignore other clusters in
864              this federation (if any). Overrides --federation.
865
866       -l, --long
867              Report more of the available information for the  selected  jobs
868              or job steps, subject to any constraints specified.
869
870       --me   Equivalent to --user=<my username>.
871
872       -n, --name=<name_list>
873              Request  jobs  or  job  steps having one of the specified names.
874              The list consists of a comma separated list of job names.
875
876       --noconvert
877              Don't convert units from their original type (e.g.  2048M  won't
878              be converted to 2G).
879
880       -w, --nodelist=<hostlist>
881              Report  only  on jobs allocated to the specified node or list of
882              nodes.  This may either be the NodeName or NodeHostname  as  de‐
883              fined  in  slurm.conf(5)  in  the  event  that  they  differ.  A
884              node_name of localhost is mapped to the current host name.
885
886       -h, --noheader
887              Do not print a header on the output.
888
889       -p, --partition=<part_list>
890              Specify the partitions of the jobs or steps to view.  Accepts  a
891              comma separated list of partition names.
892
893       -P, --priority
894              For  pending jobs submitted to multiple partitions, list the job
895              once per partition. In addition, if jobs are sorted by priority,
896              consider both the partition and job priority. This option can be
897              used to produce a list of pending jobs in the same order consid‐
898              ered for scheduling by Slurm with appropriate additional options
899              (e.g. "--sort=-p,i --states=PD").
900
901       -q, --qos=<qos_list>
902              Specify the qos(s) of the jobs or steps to view. Accepts a comma
903              separated list of qos's.
904
905       -R, --reservation=<reservation_name>
906              Specify the reservation of the jobs to view.
907
908       --sibling
909              Show  all sibling jobs on a federated cluster. Implies --federa‐
910              tion.
911
912       -S, --sort=<sort_list>
913              Specification of the order in which records should be  reported.
914              This  uses  the same field specification as the <output_format>.
915              The long format option "cluster" can also be used to  sort  jobs
916              or  job  steps  by cluster name (e.g. federated jobs).  Multiple
917              sorts may be performed by listing multiple sort fields separated
918              by  commas.   The field specifications may be preceded by "+" or
919              "-" for ascending (default) and descending  order  respectively.
920              For example, a sort value of "P,U" will sort the records by par‐
921              tition name then by user id.  The default value of sort for jobs
922              is  "P,t,-p" (increasing partition name then within a given par‐
923              tition by increasing job state and  then  decreasing  priority).
924              The  default  value  of  sort for job steps is "P,i" (increasing
925              partition name then within a given partition by increasing  step
926              id).
927
928       --start
929              Report the expected start time and resources to be allocated for
930              pending jobs in order of increasing start time.  This is equiva‐
931              lent  to  the  following options: --format="%.18i %.9P %.8j %.8u
932              %.2t  %.19S %.6D %20Y %R", --sort=S and  --states=PENDING.   Any
933              of these options may be explicitly changed as desired by combin‐
934              ing the --start option with other option values (e.g. to  use  a
935              different  output  format).   The expected start time of pending
              jobs is only available if Slurm is configured to use the
              backfill scheduling plugin.
938
939       -t, --states=<state_list>
940              Specify  the  states of jobs to view.  Accepts a comma separated
941              list of state names or "all". If "all" is specified then jobs of
942              all states will be reported. If no state is specified then pend‐
943              ing, running, and completing jobs  are  reported.  See  the  JOB
944              STATE  CODES  section below for a list of valid states. Both ex‐
945              tended and compact forms are valid.  Note the <state_list>  sup‐
946              plied is case insensitive ("pd" and "PD" are equivalent).
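
       For example, a per-state tally can be built from the State field.  In
       this sketch a printf of sample data stands in for the real
       `squeue -h -t all -o "%T"` call, so the pipeline runs without a
       cluster:

```shell
# Tally jobs by state.  The printf feeds sample data standing in for:
#   squeue -h -t all -o "%T"
counts=$(printf 'RUNNING\nPENDING\nPENDING\nCOMPLETING\n' |
    sort | uniq -c | sort -rn)
echo "$counts"
```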
947
948       -s, --steps
949              Specify the job steps to view.  This flag indicates that a comma
950              separated list of job steps to view  follows  without  an  equal
951              sign  (see  examples).   The  job  step  format  is "job_id[_ar‐
952              ray_id].step_id". Defaults to all job steps. Since this option's
953              argument  is  optional, for proper parsing the single letter op‐
954              tion must be followed immediately with the value and not include
955              a  space  between  them.  For  example  "-s1008.0"  and  not "-s
956              1008.0".
957
958       --usage
959              Print a brief help message listing the squeue options.
960
961       -u, --user=<user_list>
962              Request jobs or job steps from a comma separated list of  users.
963              The  list can consist of user names or user id numbers.  Perfor‐
964              mance of the command can be measurably improved for systems with
965              large numbers of jobs when a single user is specified.
966
       -v, --verbose
              Report details of squeue's actions.
969
       -V, --version
971              Print version information and exit.
972
973       --yaml Dump job information as YAML. All other formatting and filtering
974              arguments will be ignored.
975

JOB REASON CODES

977       These codes identify the reason that a job is waiting for execution.  A
978       job  may be waiting for more than one reason, in which case only one of
979       those reasons is displayed.
980
981
982       AssociationJobLimit   The job's association has reached its maximum job
983                             count.
984
985       AssociationResourceLimit
986                             The  job's  association has reached some resource
987                             limit.
988
989       AssociationTimeLimit  The job's association has reached its time limit.
990
991       BadConstraints        The job's constraints can not be satisfied.
992
993       BeginTime             The job's earliest start time has  not  yet  been
994                             reached.
995
996       Cleaning              The  job  is being requeued and still cleaning up
997                             from its previous execution.
998
999       Dependency            This job is waiting for a dependent job  to  com‐
1000                             plete.
1001
1002       FrontEndDown          No  front  end  node is available to execute this
1003                             job.
1004
1005       InactiveLimit         The job reached the system InactiveLimit.
1006
1007       InvalidAccount        The job's account is invalid.
1008
1009       InvalidQOS            The job's QOS is invalid.
1010
1011       JobHeldAdmin          The job is held by a system administrator.
1012
1013       JobHeldUser           The job is held by the user.
1014
1015       JobLaunchFailure      The job could not be launched.  This may  be  due
1016                             to  a  file system problem, invalid program name,
1017                             etc.
1018
1019       Licenses              The job is waiting for a license.
1020
1021       NodeDown              A node required by the job is down.
1022
1023       NonZeroExitCode       The job terminated with a non-zero exit code.
1024
1025       PartitionDown         The partition required by this job is in  a  DOWN
1026                             state.
1027
1028       PartitionInactive     The partition required by this job is in an Inac‐
1029                             tive state and not able to start jobs.
1030
1031       PartitionNodeLimit    The number of nodes required by this job is  out‐
1032                             side of its partition's current limits.  Can also
1033                             indicate that required nodes are DOWN or DRAINED.
1034
1035       PartitionTimeLimit    The job's time limit exceeds its partition's cur‐
1036                             rent time limit.
1037
1038       Priority              One  or  more higher priority jobs exist for this
1039                             partition or advanced reservation.
1040
1041       Prolog                Its PrologSlurmctld program is still running.
1042
1043       QOSJobLimit           The job's QOS has reached its maximum job count.
1044
1045       QOSResourceLimit      The job's QOS has reached some resource limit.
1046
1047       QOSTimeLimit          The job's QOS has reached its time limit.
1048
1049       ReqNodeNotAvail       Some node specifically required by the job is not
1050                             currently  available.   The node may currently be
1051                             in use, reserved for another job, in an  advanced
1052                             reservation,  DOWN,  DRAINED,  or not responding.
1053                             Nodes which are DOWN, DRAINED, or not  responding
1054                             will  be identified as part of the job's "reason"
1055                             field as "UnavailableNodes". Such nodes will typ‐
1056                             ically  require  the intervention of a system ad‐
1057                             ministrator to make available.
1058
       Reservation           The job is waiting for its advanced reservation
                             to become available.
1061
1062       Resources             The job is waiting for resources to become avail‐
1063                             able.
1064
1065       SystemFailure         Failure of the Slurm system, a file  system,  the
1066                             network, etc.
1067
1068       TimeLimit             The job exhausted its time limit.
1069
1070       QOSUsageThreshold     Required QOS threshold has been breached.
1071
1072       WaitingForScheduling  No reason has been set for this job yet.  Waiting
1073                             for the scheduler to  determine  the  appropriate
1074                             reason.
1075

JOB STATE CODES

1077       Jobs  typically pass through several states in the course of their exe‐
1078       cution.  The typical states are PENDING, RUNNING,  SUSPENDED,  COMPLET‐
1079       ING, and COMPLETED.  An explanation of each state follows.
1080
1081
       BF  BOOT_FAIL       Job terminated due to launch failure, typically
                           due to a hardware failure (e.g. unable to boot the
                           node or block, and the job cannot be requeued).
1085
1086       CA  CANCELLED       Job  was explicitly cancelled by the user or system
1087                           administrator.  The job may or may  not  have  been
1088                           initiated.
1089
1090       CD  COMPLETED       Job  has terminated all processes on all nodes with
1091                           an exit code of zero.
1092
       CF  CONFIGURING     Job has been allocated resources, but is waiting
                           for them to become ready for use (e.g. booting).
1095
1096       CG  COMPLETING      Job is in the process of completing. Some processes
1097                           on some nodes may still be active.
1098
1099       DL  DEADLINE        Job terminated on deadline.
1100
1101       F   FAILED          Job terminated with non-zero  exit  code  or  other
1102                           failure condition.
1103
1104       NF  NODE_FAIL       Job  terminated due to failure of one or more allo‐
1105                           cated nodes.
1106
1107       OOM OUT_OF_MEMORY   Job experienced out of memory error.
1108
1109       PD  PENDING         Job is awaiting resource allocation.
1110
1111       PR  PREEMPTED       Job terminated due to preemption.
1112
1113       R   RUNNING         Job currently has an allocation.
1114
1115       RD  RESV_DEL_HOLD   Job is being held after requested  reservation  was
1116                           deleted.
1117
1118       RF  REQUEUE_FED     Job is being requeued by a federation.
1119
1120       RH  REQUEUE_HOLD    Held job is being requeued.
1121
1122       RQ  REQUEUED        Completing job is being requeued.
1123
1124       RS  RESIZING        Job is about to change size.
1125
1126       RV  REVOKED         Sibling was removed from cluster due to other clus‐
1127                           ter starting the job.
1128
1129       SI  SIGNALING       Job is being signaled.
1130
1131       SE  SPECIAL_EXIT    The job was requeued in a special state. This state
1132                           can  be set by users, typically in EpilogSlurmctld,
1133                           if the job has terminated with  a  particular  exit
1134                           value.
1135
1136       SO  STAGE_OUT       Job is staging out files.
1137
       ST  STOPPED         Job has an allocation, but execution has been
                           stopped with the SIGSTOP signal.  CPUs have been
                           retained by this job.
1141
1142       S   SUSPENDED       Job  has an allocation, but execution has been sus‐
1143                           pended and CPUs have been released for other jobs.
1144
1145       TO  TIMEOUT         Job terminated upon reaching its time limit.
1146

PERFORMANCE

1148       Executing squeue sends a remote procedure call to slurmctld. If  enough
1149       calls  from squeue or other Slurm client commands that send remote pro‐
1150       cedure calls to the slurmctld daemon come in at once, it can result  in
1151       a  degradation of performance of the slurmctld daemon, possibly result‐
1152       ing in a denial of service.
1153
1154       Do not run squeue or other Slurm client commands that send remote  pro‐
1155       cedure  calls  to  slurmctld  from loops in shell scripts or other pro‐
1156       grams. Ensure that programs limit calls to squeue to the minimum neces‐
1157       sary for the information you are trying to gather.
1158
1159

ENVIRONMENT VARIABLES

1161       Some  squeue  options may be set via environment variables. These envi‐
1162       ronment variables, along with their corresponding options,  are  listed
1163       below.  (Note:  Command  line  options  will always override these set‐
1164       tings.)
1165
1166
1167       SLURM_BITSTR_LEN    Specifies the string length to be used for  holding
1168                           a  job  array's  task  ID  expression.  The default
1169                           value is 64 bytes.  A value of  0  will  print  the
1170                           full  expression  with any length required.  Larger
1171                           values may adversely impact the application perfor‐
1172                           mance.
1173
1174       SLURM_CLUSTERS      Same as --clusters
1175
1176       SLURM_CONF          The location of the Slurm configuration file.
1177
1178       SLURM_TIME_FORMAT   Specify  the  format  used to report time stamps. A
1179                           value of standard,  the  default  value,  generates
1180                           output            in            the            form
1181                           "year-month-dateThour:minute:second".  A  value  of
                           relative returns only "hour:minute:second" for the
                           current day.  For other dates in the current year
                           it prints "hour:minute" preceded by "Tomorr"
                           (tomorrow), "Ystday" (yesterday), or the name of
                           the day for the coming week (e.g. "Mon", "Tue",
                           etc.); otherwise it prints the date (e.g. "25
                           Apr").  For other years it returns the date,
                           month, and year without a time (e.g. "6 Jun
                           2012").  All of the time stamps use a 24 hour
                           format.
1191
1192                           A  valid  strftime()  format can also be specified.
1193                           For example, a value of "%a %T" will report the day
1194                           of the week and a time stamp (e.g. "Mon 12:34:56").
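
       Since the same strftime() formats are understood by date(1), a custom
       format can be previewed before exporting it for squeue to use:

```shell
# Preview a custom SLURM_TIME_FORMAT with date(1); LC_ALL=C pins the
# English day abbreviations.  Once exported, squeue uses the variable.
export SLURM_TIME_FORMAT="%a %T"
stamp=$(LC_ALL=C date +"$SLURM_TIME_FORMAT")
echo "$stamp"    # e.g. "Mon 12:34:56"
```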
1195
1196       SQUEUE_ACCOUNT      -A <account_list>, --account=<account_list>
1197
1198       SQUEUE_ALL          -a, --all
1199
1200       SQUEUE_ARRAY        -r, --array
1201
1202       SQUEUE_NAMES        --name=<name_list>
1203
1204       SQUEUE_FEDERATION   --federation
1205
1206       SQUEUE_FORMAT       -o <output_format>, --format=<output_format>
1207
1208       SQUEUE_FORMAT2      -O <output_format>, --Format=<output_format>
1209
       SQUEUE_LICENSES     -L <license_list>, --licenses=<license_list>
1211
1212       SQUEUE_LOCAL        --local
1213
1214       SQUEUE_PARTITION    -p <part_list>, --partition=<part_list>
1215
1216       SQUEUE_PRIORITY     -P, --priority
1217
       SQUEUE_QOS          -q <qos_list>, --qos=<qos_list>
1219
1220       SQUEUE_SIBLING      --sibling
1221
1222       SQUEUE_SORT         -S <sort_list>, --sort=<sort_list>
1223
1224       SQUEUE_STATES       -t <state_list>, --states=<state_list>
1225
       SQUEUE_USERS        -u <user_list>, --user=<user_list>
1227
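       As a sketch, persistent defaults can be placed in a shell profile
       using the variables above; the values below are only examples, and
       command-line options always override them:

```shell
# Example squeue defaults for a shell profile (values are
# illustrative assumptions).  Command-line options always win.
export SQUEUE_FORMAT="%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"
export SQUEUE_STATES="PD,R"
export SQUEUE_SORT="P,t,-p"
```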

EXAMPLES

       Print the jobs scheduled in the debug partition and in the COMPLETED
       state, formatting the job id as six right-justified digits followed
       by the priority with an arbitrary field size:
1232
1233              $ squeue -p debug -t COMPLETED -o "%.6i %p"
1234               JOBID PRIORITY
1235               65543 99993
1236               65544 99992
1237               65545 99991
1238
1239
1240       Print the job steps in the debug partition sorted by user:
1241
1242              $ squeue -s -p debug -S u
1243                STEPID        NAME PARTITION     USER      TIME NODELIST
1244               65552.1       test1     debug    alice      0:23 dev[1-4]
1245               65562.2     big_run     debug      bob      0:18 dev22
1246               65550.1      param1     debug  candice   1:43:21 dev[6-12]
1247
1248
1249       Print information only about jobs 12345, 12346 and 12348:
1250
1251              $ squeue --jobs 12345,12346,12348
1252               JOBID PARTITION NAME USER ST  TIME  NODES NODELIST(REASON)
1253               12345     debug job1 dave  R   0:21     4 dev[9-12]
1254               12346     debug job2 dave PD   0:00     8 (Resources)
1255               12348     debug job3 ed   PD   0:00     4 (Priority)
1256
1257
1258       Print information only about job step 65552.1:
1259
1260              $ squeue --steps 65552.1
1261                STEPID     NAME PARTITION    USER    TIME  NODELIST
1262               65552.1    test2     debug   alice   12:49  dev[1-4]
1263
1264
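       Sum the node counts of the running jobs.  In this sketch a printf of
       sample data stands in for the real `squeue -h -t R -o "%D"` call, so
       the pipeline can be tried without a cluster:

```shell
# Sum the node counts of running jobs.  Sample data stands in for:
#   squeue -h -t R -o "%D"
total=$(printf '4\n8\n2\n' | awk '{n += $1} END {print n}')
echo "running jobs occupy $total nodes"
```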

COPYING

       Copyright (C) 2002-2007 The Regents of the University of California.
       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
1268       Copyright (C) 2008-2010 Lawrence Livermore National Security.
1269       Copyright (C) 2010-2022 SchedMD LLC.
1270
1271       This file is part of Slurm, a resource  management  program.   For  de‐
1272       tails, see <https://slurm.schedmd.com/>.
1273
1274       Slurm  is free software; you can redistribute it and/or modify it under
1275       the terms of the GNU General Public License as published  by  the  Free
1276       Software  Foundation;  either version 2 of the License, or (at your op‐
1277       tion) any later version.
1278
1279       Slurm is distributed in the hope that it will be  useful,  but  WITHOUT
1280       ANY  WARRANTY;  without even the implied warranty of MERCHANTABILITY or
1281       FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General  Public  License
1282       for more details.
1283

SEE ALSO

1285       scancel(1),  scontrol(1),  sinfo(1),  srun(1), slurm_load_ctl_conf (3),
1286       slurm_load_jobs (3), slurm_load_node (3), slurm_load_partitions (3)
1287
1288
1289
1290May 2021                        Slurm Commands                       squeue(1)