squeue(1)                       Slurm Commands                       squeue(1)



NAME

       squeue  -  view  information about jobs located in the Slurm scheduling
       queue.


SYNOPSIS

       squeue [OPTIONS...]


DESCRIPTION

       squeue is used to view job and job step information for jobs managed by
       Slurm.


OPTIONS

       -A <account_list>, --account=<account_list>
              Specify  the accounts of the jobs to view. Accepts a comma sepa‐
              rated list of account names. This has no effect when listing job
              steps.


       -a, --all
              Display  information about jobs and job steps in all partitions.
              This causes information to be displayed  about  partitions  that
              are  configured  as hidden, partitions that are unavailable to a
              user's group, and federated jobs that are in a "revoked" state.


       -r, --array
              Display one job array element per line.   Without  this  option,
              the  display  will be optimized for use with job arrays (pending
              job array elements will be combined on one line of  output  with
              the array index values printed using a regular expression).


       --array-unique
              Display  one  unique pending job array element per line. Without
              this option, the pending job array elements will be grouped into
              the  master array job to optimize the display.  This can also be
              set with the environment variable SQUEUE_ARRAY_UNIQUE.


       --federation
              Show jobs from the federation if a member of one.


       -h, --noheader
              Do not print a header on the output.

       --help Print a help message describing all squeue options.


       --hide Do not display information about jobs and job steps in all  par‐
              titions.  By default, information about partitions that are con‐
              figured as hidden or are not available to the user's group  will
              not be displayed (this is the default behavior).


       -i <seconds>, --iterate=<seconds>
              Repeatedly  gather  and  report the requested information at the
              interval specified (in seconds).   By  default,  prints  a  time
              stamp with the header.
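
              For example, the following (an illustrative sketch; the interval
              is arbitrary) reprints the queue every 60 seconds:

                     $ squeue --iterate=60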


       -j <job_id_list>, --jobs=<job_id_list>
              Requests a comma separated list of job IDs to display.  Defaults
              to all jobs.  The --jobs=<job_id_list> option  may  be  used  in
              conjunction  with  the  --steps option to print step information
              about specific jobs.  Note: If a list of job  IDs  is  provided,
              the  jobs  are  displayed even if they are on hidden partitions.
              Since this option's argument is optional, for proper parsing the
              single letter option must be followed immediately with the value
              and not include a space between them. For example  "-j1008"  and
              not  "-j 1008".  The job ID format is "job_id[_array_id]".  Per‐
              formance of the command can be measurably improved  for  systems
              with  large  numbers  of jobs when a single job ID is specified.
              By default, this field size will be limited to  64  bytes.   Use
              the  environment  variable  SLURM_BITSTR_LEN  to  specify larger
              field sizes.


       --local
              Show only jobs local to this cluster. Ignore other  clusters  in
              this federation (if any). Overrides --federation.


       -l, --long
              Report  more  of the available information for the selected jobs
              or job steps, subject to any constraints specified.


       -L, --licenses=<license_list>
              Request jobs requesting or using one or more of  the  named  li‐
              censes.   The license list consists of a comma separated list of
              license names.


       --me   Equivalent to --user=<my username>.

       -M, --clusters=<cluster_name>
              Clusters to issue commands to.  Multiple cluster  names  may  be
              comma separated.  A value of 'all' will query all clusters. This
              option implicitly sets the --local option.
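
              For example, to query all clusters:

                     $ squeue --clusters=all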


       -n, --name=<name_list>
              Request jobs or job steps having one  of  the  specified  names.
              The list consists of a comma separated list of job names.


       --noconvert
              Don't  convert  units from their original type (e.g. 2048M won't
              be converted to 2G).


       -o <output_format>, --format=<output_format>
              Specify the information to be displayed, its size  and  position
              (right  or  left  justified).   Also see the -O <output_format>,
              --Format=<output_format> option described below (which  supports
              less  flexibility  in  formatting,  but  supports  access to all
              fields).  If the command is executed in a federated cluster  en‐
              vironment  and  information about more than one cluster is to be
              displayed and the -h, --noheader option is used, then the  clus‐
              ter  name  will  be  displayed before the default output formats
              shown below.

              The default formats with various options are:

              default        "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

              -l, --long     "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"

              -s, --steps    "%.15i %.8j %.9P %.8u %.9M %N"


              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,  whatever
                        is needed to print the information will be used.

                 .      Indicates  the  output  should  be right justified and
                        size must be specified.  By  default  output  is  left
                        justified.

                 suffix Arbitrary string to append to the end of the field.
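
              For example, a format such as the following (an illustrative
              sketch; the field width is arbitrary) prints the job ID right
              justified in at least 10 characters, followed by the unpadded
              job name:

                     $ squeue -o "%.10i %j"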


              Note  that  many of these type specifications are valid only for
              jobs while others are valid only  for  job  steps.   Valid  type
              specifications include:


              %all  Print  all fields available for this data type with a ver‐
                    tical bar separating each field.

              %a    Account associated with the job.  (Valid for jobs only)

              %A    Number of tasks created by a job step.  This  reports  the
                    value  of  the srun --ntasks option.  (Valid for job steps
                    only)

              %A    Job id.  This will have a unique value for each element of
                    job arrays.  (Valid for jobs only)

              %B    Executing  (batch) host. For an allocated session, this is
                    the host on which the session is executing (i.e. the  node
                    from  which  the srun or the salloc command was executed).
                    For a batch job, this is  the  node  executing  the  batch
                    script. In the case of a typical Linux cluster, this would
                    be the compute node zero of the allocation. In the case of
                    a Cray ALPS system, this would be the front-end host whose
                    slurmd daemon executes the job script.

              %c    Minimum number of CPUs (processors) per node requested  by
                    the job.  This reports the value of the srun --mincpus op‐
                    tion with a default value of zero.  (Valid for jobs only)

              %C    Number of CPUs (processors) requested by the job or  allo‐
                    cated  to  it  if already running.  As a job is completing
                    this number will reflect the current number of CPUs  allo‐
                    cated.  (Valid for jobs only)

              %d    Minimum  size of temporary disk space (in MB) requested by
                    the job.  (Valid for jobs only)

              %D    Number of nodes allocated to the job or the minimum number
                    of  nodes  required by a pending job. The actual number of
                    nodes allocated to a pending job may exceed this number if
                    the  job  specified  a node range count (e.g.  minimum and
                    maximum node counts) or  the  job  specifies  a  processor
                    count instead of a node count. As a job is completing this
                    number will reflect the current number of nodes allocated.
                    (Valid for jobs only)

              %e    Time  at  which the job ended or is expected to end (based
                    upon its time limit).  (Valid for jobs only)

              %E    Job dependencies remaining. This job will not begin execu‐
                    tion until these dependent jobs complete. In the case of a
                    job that can not run due to job dependencies  never  being
                    satisfied,  the full original job dependency specification
                    will be reported. A value of NULL implies this job has  no
                    dependencies.  (Valid for jobs only)

              %f    Features required by the job.  (Valid for jobs only)

              %F    Job  array's job ID. This is the base job ID.  For non-ar‐
                    ray jobs, this is the job ID.  (Valid for jobs only)

              %g    Group name of the job.  (Valid for jobs only)

              %G    Group ID of the job.  (Valid for jobs only)

              %h    Whether the compute resources allocated to the job can  be
                    oversubscribed  by  other jobs.  The resources to be over‐
                    subscribed can be nodes, sockets, cores, or   hyperthreads
                    depending  upon  configuration.   The value will be "YES"
                    if the job was submitted with the oversubscribe option  or
                    the partition is configured with OverSubscribe=Force, "NO"
                    if the job requires exclusive node access, "USER"  if  the
                    allocated  compute  nodes  are dedicated to a single user,
                    "MCS" if the allocated compute nodes are  dedicated  to  a
                    single  security  class  (See  MCSPlugin and MCSParameters
                    configuration parameters for more information),  and  "OK"
                    otherwise  (typically  allocated  dedicated CPUs).  (Valid
                    for jobs only)

              %H    Number of sockets per node requested by the job.  This re‐
                    ports  the  value  of  the srun --sockets-per-node option.
                    When --sockets-per-node has not  been  set,  "*"  is  dis‐
                    played.  (Valid for jobs only)

              %i    Job or job step id.  In the case of job arrays, the job ID
                    format will be of the  form  "<base_job_id>_<index>".   By
                    default, the job array index field size will be limited to
                    64 bytes.  Use the environment  variable  SLURM_BITSTR_LEN
                    to  specify  larger  field sizes.  (Valid for jobs and job
                    steps) In the case of heterogeneous job  allocations,  the
                    job  ID  format  will be of the form "#+#" where the first
                    number is the "heterogeneous job leader"  and  the  second
                    number  the  zero  origin offset for each component of the
                    job.

              %I    Number of cores per socket requested by the job.  This re‐
                    ports  the  value  of  the srun --cores-per-socket option.
                    When --cores-per-socket has not  been  set,  "*"  is  dis‐
                    played.  (Valid for jobs only)

              %j    Job or job step name.  (Valid for jobs and job steps)

              %J    Number of threads per core requested by the job.  This re‐
                    ports the value of  the  srun  --threads-per-core  option.
                    When  --threads-per-core  has  not  been  set, "*" is dis‐
                    played.  (Valid for jobs only)

              %k    Comment associated with the job.  (Valid for jobs only)

              %K    Job array index.  By default, this field size will be lim‐
                    ited to 64 bytes.  Use the environment variable SLURM_BIT‐
                    STR_LEN to specify larger field sizes.   (Valid  for  jobs
                    only)

              %l    Time  limit  of  the  job  or  job step in days-hours:min‐
                    utes:seconds.  The value may be "NOT_SET" if not  yet  es‐
                    tablished  or  "UNLIMITED"  for no limit.  (Valid for jobs
                    and job steps)

              %L    Time left  for  the  job  to  execute  in  days-hours:min‐
                    utes:seconds.  This value is calculated by subtracting the
                    job's time used from its time limit.   The  value  may  be
                    "NOT_SET"  if  not  yet  established or "UNLIMITED" for no
                    limit.  (Valid for jobs only)

              %m    Minimum size of memory  (in  MB)  requested  by  the  job.
                    (Valid for jobs only)

              %M    Time  used  by  the  job  or  job  step in days-hours:min‐
                    utes:seconds.  The days and  hours  are  printed  only  as
                    needed.   For  job steps this field shows the elapsed time
                    since execution began and thus will be inaccurate for  job
                    steps which have been suspended.  Clock skew between nodes
                    in the cluster will cause the time to be  inaccurate.   If
                    the  time  is obviously wrong (e.g. negative), it displays
                    as "INVALID".  (Valid for jobs and job steps)

              %n    List of  node  names  explicitly  requested  by  the  job.
                    (Valid for jobs only)

              %N    List  of  nodes  allocated  to the job or job step. In the
                    case of a COMPLETING job, the list of nodes will  comprise
                    only  those  nodes that have not yet been returned to ser‐
                    vice.  (Valid for jobs and job steps)

              %o    The command to be executed.

              %O    Whether contiguous nodes are requested by the job.  (Valid
                    for jobs only)

              %p    Priority  of the job (converted to a floating point number
                    between 0.0 and 1.0).  Also see %Q.  (Valid for jobs only)

              %P    Partition of the job or job step.  (Valid for jobs and job
                    steps)

              %q    Quality  of  service  associated with the job.  (Valid for
                    jobs only)

              %Q    Priority of the job (generally a very large unsigned inte‐
                    ger).  Also see %p.  (Valid for jobs only)

              %r    The  reason  a  job  is in its current state.  See the JOB
                    REASON CODES section below for more  information.   (Valid
                    for jobs only)

              %R    For  pending  jobs: the reason a job is waiting for execu‐
                    tion is printed within parenthesis.  For  terminated  jobs
                    with  failure:  an explanation as to why the job failed is
                    printed within parenthesis.  For all other job states: the
                    list of allocated nodes.  See the JOB REASON CODES section
                    below for more information.  (Valid for jobs only)

              %s    Node selection plugin specific data for  a  job.  Possible
                    data includes: Geometry requirement of resource allocation
                    (X,Y,Z dimensions), Connection type (TORUS, MESH,  or  NAV
                    ==  torus  else mesh), Permit rotation of geometry (yes or
                    no), Node use (VIRTUAL or COPROCESSOR), etc.   (Valid  for
                    jobs only)

              %S    Actual  or  expected  start  time  of the job or job step.
                    (Valid for jobs and job steps)

              %t    Job state in compact form.  See the JOB STATE  CODES  sec‐
                    tion below for a list of possible states.  (Valid for jobs
                    only)

              %T    Job state in extended form.  See the JOB STATE CODES  sec‐
                    tion below for a list of possible states.  (Valid for jobs
                    only)

              %u    User name for a job or job step.  (Valid for jobs and  job
                    steps)

              %U    User  ID  for  a job or job step.  (Valid for jobs and job
                    steps)

              %v    Reservation for the job.  (Valid for jobs only)

              %V    The job's submission time.

              %w    Workload Characterization Key (wckey).   (Valid  for  jobs
                    only)

              %W    Licenses reserved for the job.  (Valid for jobs only)

              %x    List of node names explicitly excluded by the job.  (Valid
                    for jobs only)

              %X    Count of cores reserved on each node for system use  (core
                    specialization).  (Valid for jobs only)

              %y    Nice  value  (adjustment  to a job's scheduling priority).
                    (Valid for jobs only)

              %Y    For pending jobs, a list of the nodes expected to be  used
                    when the job is started.

              %z    Number  of  requested  sockets, cores, and threads (S:C:T)
                    per node for the job.  When (S:C:T) has not been set,  "*"
                    is displayed.  (Valid for jobs only)

              %Z    The job's working directory.
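
              For example, the following (an illustrative sketch built from
              the types above) reproduces the default output format and adds
              the submission time:

                     $ squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V"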



       -O <output_format>, --Format=<output_format>
              Specify  the information to be displayed.  Also see the -o <out‐
              put_format>,  --format=<output_format>  option  described  above
              (which  supports greater flexibility in formatting, but does not
              support access to all fields because we  ran  out  of  letters).
              Requests  a  comma  separated list of job information to be dis‐
              played.


              The format of each field is "type[:[.][size][suffix]]"

                 size   Minimum field size. If no size is specified, 20  char‐
                        acters will be allocated to print the information.

                 .      Indicates  the  output  should  be right justified and
                        size must be specified.  By  default  output  is  left
                        justified.

                 suffix Arbitrary string to append to the end of the field.


              Note  that  many of these type specifications are valid only for
              jobs while others are valid only  for  job  steps.   Valid  type
              specifications include:


              Account
                     Print  the  account  associated with the job.  (Valid for
                     jobs only)

              AccrueTime
                     Print the accrue time associated with  the  job.   (Valid
                     for jobs only)

              admin_comment
                     Administrator  comment  associated  with the job.  (Valid
                     for jobs only)

              AllocNodes
                     Print the nodes allocated to the job.   (Valid  for  jobs
                     only)

              AllocSID
                     Print  the session ID used to submit the job.  (Valid for
                     jobs only)

              ArrayJobID
                     Prints the job ID of the job array.  (Valid for jobs  and
                     job steps)

              ArrayTaskID
                     Prints the task ID of the job array.  (Valid for jobs and
                     job steps)

              AssocID
                     Prints the ID of the job association.   (Valid  for  jobs
                     only)

              BatchFlag
                     Prints  whether  the batch flag has been set.  (Valid for
                     jobs only)

              BatchHost
                     Executing (batch) host. For an allocated session, this is
                     the host on which the session is executing (i.e. the node
                     from which the srun or the salloc command was  executed).
                     For  a  batch  job,  this is the node executing the batch
                     script. In the case of  a  typical  Linux  cluster,  this
                     would  be the compute node zero of the allocation. In the
                     case of a Cray ALPS system, this would be  the  front-end
                     host whose slurmd daemon executes the job script.  (Valid
                     for jobs only)

              BoardsPerNode
                     Prints the number of boards per  node  allocated  to  the
                     job.  (Valid for jobs only)

              BurstBuffer
                     Burst Buffer specification (Valid for jobs only)

              BurstBufferState
                     Burst Buffer state (Valid for jobs only)

              Cluster
                     Name of the cluster that is running the job or job step.

              ClusterFeature
                     Cluster  features  required  by the job.  (Valid for jobs
                     only)

              Command
                     The command to be executed.  (Valid for jobs only)

              Comment
                     Comment associated with the job.  (Valid for jobs only)

              Contiguous
                     Whether contiguous nodes are requested by the job. (Valid
                     for jobs only)

              Cores  Number  of  cores  per socket requested by the job.  This
                     reports the value of the srun --cores-per-socket  option.
                     When  --cores-per-socket  has  not  been set, "*" is dis‐
                     played.  (Valid for jobs only)

              CoreSpec
                     Count of cores reserved on each node for system use (core
                     specialization).  (Valid for jobs only)

              CPUFreq
                     Prints  the  frequency of the allocated CPUs.  (Valid for
                     job steps only)

              cpus-per-task
                     Prints the number of CPUs per task allocated to the  job.
                     (Valid for jobs only)

              cpus-per-tres
                     Print the CPUs per trackable resource  allocated  to  the
                     job or job step.

              Deadline
                     Prints the deadline assigned to the job.  (Valid for jobs
                     only)

              DelayBoot
                     Delay boot time.  (Valid for jobs only)

              Dependency
                     Job  dependencies remaining. This job will not begin exe‐
                     cution until these dependent jobs complete. In  the  case
                     of  a  job that can not run due to job dependencies never
                     being satisfied, the full original job dependency  speci‐
                     fication  will  be reported. A value of NULL implies this
                     job has no dependencies.  (Valid for jobs only)

              DerivedEC
                     Derived exit code for the job, which is the highest  exit
                     code of any job step.  (Valid for jobs only)

              EligibleTime
                     Time  the  job  is eligible for running.  (Valid for jobs
                     only)

              EndTime
                     The time of job termination, actual or expected.   (Valid
                     for jobs only)

              exit_code
                     The exit code for the job.  (Valid for jobs only)

              Feature
                     Features required by the job.  (Valid for jobs only)

              GroupID
                     Group ID of the job.  (Valid for jobs only)

              GroupName
                     Group name of the job.  (Valid for jobs only)

              HetJobID
                     Job ID of the heterogeneous job leader.

              HetJobIDSet
                     Expression  identifying  all  component  job  IDs within a
                     heterogeneous job.

              HetJobOffset
                     Zero origin offset within a collection  of  heterogeneous
                     job components.

              JobArrayID
                     Job array's job ID. This is the base job ID.  For non-ar‐
                     ray jobs, this is the job ID.  (Valid for jobs only)

              JobID  Job ID.  This will have a unique value for  each  element
                     of  job  arrays and each component of heterogeneous jobs.
                     (Valid for jobs only)

              LastSchedEval
                     Prints the last time the job was evaluated  for  schedul‐
                     ing.  (Valid for jobs only)

              Licenses
                     Licenses reserved for the job.  (Valid for jobs only)

              MaxCPUs
                     Prints  the  max  number  of  CPUs  allocated to the job.
                     (Valid for jobs only)

              MaxNodes
                     Prints the max number of  nodes  allocated  to  the  job.
                     (Valid for jobs only)

              MCSLabel
                     Prints the MCS_label of the job.  (Valid for jobs only)

              mem-per-tres
                     Print the memory (in MB) required per trackable resources
                     allocated to the job or job step.

              MinCpus
                     Minimum number of CPUs (processors) per node requested by
                     the  job.   This  reports the value of the srun --mincpus
                     option with a default value of  zero.   (Valid  for  jobs
                     only)

              MinMemory
                     Minimum  size  of  memory  (in  MB) requested by the job.
                     (Valid for jobs only)

              MinTime
                     Minimum time limit of the job (Valid for jobs only)

              MinTmpDisk
                     Minimum size of temporary disk space (in MB) requested by
                     the job.  (Valid for jobs only)

              Name   Job or job step name.  (Valid for jobs and job steps)

              Network
                     The  network that the job is running on.  (Valid for jobs
                     and job steps)

              Nice   Nice value (adjustment to a job's  scheduling  priority).
                     (Valid for jobs only)

              NodeList
                     List  of  nodes  allocated to the job or job step. In the
                     case of a COMPLETING job, the list of nodes will comprise
                     only  those nodes that have not yet been returned to ser‐
                     vice.  (Valid for jobs only)

              Nodes  List of nodes allocated to the job or job  step.  In  the
                     case of a COMPLETING job, the list of nodes will comprise
                     only those nodes that have not yet been returned to  ser‐
                     vice.  (Valid for job steps only)

              NTPerBoard
                     The  number  of  tasks  per  board  allocated to the job.
                     (Valid for jobs only)

              NTPerCore
                     The number of  tasks  per  core  allocated  to  the  job.
                     (Valid for jobs only)

              NTPerNode
                     The  number  of  tasks  per  node  allocated  to the job.
                     (Valid for jobs only)

              NTPerSocket
                     The number of tasks per  socket  allocated  to  the  job.
                     (Valid for jobs only)

              NumCPUs
                     Number of CPUs (processors) requested by the job or allo‐
                     cated to it if already running.  As a job is  completing,
                     this number will reflect the current number of CPUs allo‐
                     cated.  (Valid for jobs and job steps)

              NumNodes
                     Number of nodes allocated to the job or the minimum  num‐
                     ber of nodes required by a pending job. The actual number
                     of nodes allocated to a pending job may exceed this  num‐
                     ber  if the job specified a node range count (e.g.  mini‐
                     mum and maximum node counts) or the job specifies a  pro‐
                     cessor  count  instead  of a node count. As a job is com‐
                     pleting this number will reflect the  current  number  of
                     nodes allocated.  (Valid for jobs only)

              NumTasks
                     Number of tasks requested by a job or job step.  This re‐
                     ports the value of the --ntasks option.  (Valid for  jobs
                     and job steps)

              Origin Cluster name where federated job originated from.  (Valid
                     for federated jobs only)

              OriginRaw
                     Cluster ID where federated job originated  from.   (Valid
                     for federated jobs only)

              OverSubscribe
                     Whether the compute resources allocated to the job can be
                     oversubscribed by other jobs.  The resources to be  over‐
                     subscribed can be nodes, sockets, cores, or  hyperthreads
                     depending upon configuration.  The value will be "YES" if
                     the  job  was  submitted with the oversubscribe option or
                     the partition  is  configured  with  OverSubscribe=Force,
                     "NO" if the job requires exclusive node access, "USER" if
                     the allocated compute nodes are  dedicated  to  a  single
                     user,  "MCS" if the allocated compute nodes are dedicated
                     to a single security class (See MCSPlugin and  MCSParame‐
                     ters  configuration parameters for more information), and
                     "OK" otherwise (typically allocated  dedicated    CPUs).
                     (Valid for jobs only)

              Partition
                     Partition  of  the  job or job step.  (Valid for jobs and
                     job steps)

              PreemptTime
                     The preempt time for the job.  (Valid for jobs only)

              PendingTime
                     The time (in seconds) between start time and submit  time
                     of  the  job.   If  the job has not started yet, then the
                     time (in seconds) between now and the submit time of  the
                     job.  (Valid for jobs only)

              Priority
                     Priority of the job (converted to a floating point number
                     between 0.0 and 1.0).  Also see PriorityLong.  (Valid for
                     jobs only)

              PriorityLong
                     Priority  of the job (generally a very large unsigned in‐
                     teger).  Also see Priority.  (Valid for jobs only)

              Profile
                     Profile of the job.  (Valid for jobs only)

              QOS    Quality of service associated with the job.   (Valid  for
                     jobs only)

              Reason The  reason  a  job is in its current state.  See the JOB
                     REASON CODES section below for more information.   (Valid
                     for jobs only)

              ReasonList
                     For  pending jobs: the reason a job is waiting for execu‐
                     tion is printed within parenthesis.  For terminated  jobs
                     with  failure: an explanation as to why the job failed is
                     printed within parenthesis.  For all  other  job  states:
                     the  list  of allocated nodes.  See the JOB REASON CODES
                     section below for  more  information.   (Valid  for  jobs
                     only)

              Reboot Indicates  if  the allocated nodes should be rebooted be‐
                     fore starting the job.  (Valid for jobs only)

              ReqNodes
                     List of node  names  explicitly  requested  by  the  job.
                     (Valid for jobs only)

              ReqSwitch
                     The maximum number of switches requested by the job.
                     (Valid for jobs only)

              Requeue
                     Prints whether the  job  will  be  requeued  on  failure.
                     (Valid for jobs only)

              Reservation
                     Reservation for the job.  (Valid for jobs only)

              ResizeTime
                     The  amount  of  time changed for the job to run.  (Valid
                     for jobs only)

              RestartCnt
                     The number of restarts for  the  job.   (Valid  for  jobs
                     only)

              ResvPort
                     Reserved ports of the job.  (Valid for job steps only)

              SchedNodes
                     For pending jobs, a list of the nodes expected to be used
                     when the job is started.  (Valid for jobs only)

              SCT    Number of requested sockets, cores, and  threads  (S:C:T)
                     per node for the job.  When (S:C:T) has not been set, "*"
                     is displayed.  (Valid for jobs only)

              SelectJobInfo
                     Node selection plugin specific data for a  job.  Possible
                     data  includes:  Geometry requirement of resource alloca‐
                     tion (X,Y,Z dimensions), Connection type (TORUS, MESH, or
                     NAV == torus else mesh), Permit rotation of geometry (yes
                     or no), Node use (VIRTUAL or COPROCESSOR),  etc.   (Valid
                     for jobs only)

              SiblingsActive
                     Cluster  names  of  where  federated  sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsActiveRaw
                     Cluster  IDs  of  where  federated  sibling  jobs  exist.
                     (Valid for federated jobs only)

              SiblingsViable
                     Cluster  names of where federated sibling jobs are viable
                     to run.  (Valid for federated jobs only)

              SiblingsViableRaw
                     Cluster IDs of where federated sibling  jobs  are  viable
                     to run.  (Valid for federated jobs only)

              Sockets
                     Number  of  sockets  per node requested by the job.  This
                     reports the value of the srun --sockets-per-node  option.
                     When  --sockets-per-node  has  not  been set, "*" is dis‐
                     played.  (Valid for jobs only)

              SPerBoard
                     Number of sockets per board allocated to the job.  (Valid
                     for jobs only)

              StartTime
                     Actual  or  expected  start  time of the job or job step.
                     (Valid for jobs and job steps)

              State  Job state in extended form.  See the JOB STATE CODES sec‐
                     tion  below  for  a  list of possible states.  (Valid for
                     jobs only)

              StateCompact
                     Job state in compact form.  See the JOB STATE CODES  sec‐
                     tion  below  for  a  list of possible states.  (Valid for
                     jobs only)

              STDERR The directory to which standard error is written.  (Valid
                     for jobs only)

              STDIN  The directory for standard input.  (Valid for jobs only)

              STDOUT The  directory  to  which standard output is written.
                     (Valid for jobs only)

              StepID Job or job step ID.  In the case of job arrays,  the  job
                     ID  format  will  be of the form "<base_job_id>_<index>".
                     (Valid for job steps only)

              StepName
                     Job step name.  (Valid for job steps only)

              StepState
                     The state of the job step.  (Valid for job steps only)

              SubmitTime
                     The time at which the job was submitted.  (Valid for jobs
                     only)

              system_comment
                     System  comment associated with the job.  (Valid for jobs
                     only)

              Threads
                     Number of threads per core requested by  the  job.   This
                     reports  the value of the srun --threads-per-core option.
                     When --threads-per-core has not been  set,  "*"  is  dis‐
                     played.  (Valid for jobs only)

              TimeLeft
                     Time  left  for  the  job  to  execute in days-hours:min‐
                     utes:seconds.  This value is  calculated  by  subtracting
                     the  job's  time used from its time limit.  The value may
                     be "NOT_SET" if not yet established or "UNLIMITED" for no
                     limit.  (Valid for jobs only)

              TimeLimit
                     Time limit for the job or job step.  (Valid for jobs  and
                     job steps)

              TimeUsed
                     Time used by the  job  or  job  step  in  days-hours:min‐
                     utes:seconds.   The  days  and  hours are printed only as
                     needed.  For job steps this field shows the elapsed  time
                     since execution began and thus will be inaccurate for job
                     steps which have  been  suspended.   Clock  skew  between
                     nodes  in  the  cluster will cause the time to be inaccu‐
                     rate.  If the time is obviously wrong (e.g. negative), it
                     displays as "INVALID".  (Valid for jobs and job steps)

              tres-alloc
                     Print  the  trackable  resources  allocated to the job if
                     running.  If not running, then print  the  trackable  re‐
                     sources requested by the job.

              tres-bind
                     Print  the  trackable resources task binding requested by
                     the job or job step.

              tres-freq
                     Print the trackable resources  frequencies  requested  by
                     the job or job step.

              tres-per-job
                     Print the trackable resources requested by the job.

              tres-per-node
                     Print  the  trackable resources per node requested by the
                     job or job step.

              tres-per-socket
                     Print the trackable resources per socket requested by the
                     job or job step.

              tres-per-step
                     Print the trackable resources requested by the job step.

              tres-per-task
                     Print  the  trackable resources per task requested by the
                     job or job step.

              UserID User ID for a job or job step.  (Valid for jobs  and  job
                     steps)

              UserName
                     User name for a job or job step.  (Valid for jobs and job
                     steps)

              Wait4Switch
                     The amount of time to wait  for  the  desired  number  of
                     switches.  (Valid for jobs only)

              WCKey  Workload  Characterization  Key (wckey).  (Valid for jobs
                     only)

              WorkDir
                     The job's working directory.  (Valid for jobs only)
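
              For example, the following (an illustrative sketch; the  field
              size is arbitrary) prints the job ID right justified in a field
              of 12 characters, the compact job state, and the time used:

                     $ squeue -O "JobID:.12,StateCompact,TimeUsed"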


       -p <part_list>, --partition=<part_list>
              Specify the partitions of the jobs or steps to view.  Accepts  a
              comma separated list of partition names.


       -P, --priority
              For  pending jobs submitted to multiple partitions, list the job
              once per partition. In addition, if jobs are sorted by priority,
              consider both the partition and job priority. This option can be
              used to produce a list of pending jobs in the same order consid‐
              ered for scheduling by Slurm with appropriate additional options
              (e.g. "--sort=-p,i --states=PD").

       -q <qos_list>, --qos=<qos_list>
              Specify the QOS of the jobs or steps to view.  Accepts  a  comma
              separated list of QOS names.


       -R, --reservation=<reservation_name>
              Specify the reservation of the jobs to view.


       -s, --steps
              Specify the job steps to view.  This flag indicates that a comma
              separated list of job steps to view  follows  without  an  equal
              sign  (see  examples).   The  job  step  format  is "job_id[_ar‐
              ray_id].step_id". Defaults to all job steps. Since this option's
              argument  is  optional, for proper parsing the single letter op‐
              tion must be followed immediately with the value and not include
              a  space  between  them.  For  example  "-s1008.0"  and  not "-s
              1008.0".


       --sibling
              Show all sibling jobs on a federated cluster. Implies  --federa‐
              tion.


       -S <sort_list>, --sort=<sort_list>
              Specification  of the order in which records should be reported.
              This uses the same field specification as  the  <output_format>.
              The  long  format option "cluster" can also be used to sort jobs
              or job steps by cluster name (e.g.  federated  jobs).   Multiple
              sorts may be performed by listing multiple sort fields separated
              by commas.  The field specifications may be preceded by  "+"  or
              "-"  for  ascending (default) and descending order respectively.
              For example, a sort value of "P,U" will sort the records by par‐
              tition name then by user id.  The default value of sort for jobs
              is "P,t,-p" (increasing partition name then within a given  par‐
              tition  by  increasing  job state and then decreasing priority).
              The default value of sort for job  steps  is  "P,i"  (increasing
              partition  name then within a given partition by increasing step
              id).
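
              For example, the following (an illustrative sketch) sorts  jobs
              by  increasing  user  name and, within each user, by decreasing
              job ID:

                     $ squeue --sort=u,-i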


       --start
              Report the expected start time and resources to be allocated for
              pending jobs in order of increasing start time.  This is equiva‐
              lent  to  the  following  options:  --format="%.18i  %.9P  %.8j
              %.8u %.2t %.19S %.6D %20Y %R",  --sort=S  and  --states=PENDING.
              Any of these options may be explicitly changed as desired by com‐
              bining the --start option with other option values (e.g. to  use
              a  different output format).  The expected start time of pending
              jobs is only available if Slurm is  configured  to  use  the
              backfill scheduling plugin.


       -t <state_list>, --states=<state_list>
              Specify the states of jobs to view.  Accepts a  comma  separated
              list of state names or "all". If "all" is specified then jobs of
              all states will be reported. If no state is specified then pend‐
              ing,  running,  and  completing  jobs  are reported. See the JOB
              STATE CODES section below for a list of valid states.  Both  ex‐
              tended  and compact forms are valid.  Note the <state_list> sup‐
              plied is case insensitive ("pd" and "PD" are equivalent).
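
              For example, to show only pending and running jobs  (using  the
              compact state codes):

                     $ squeue --states=PD,R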


       -u <user_list>, --user=<user_list>
              Request jobs or job steps from a comma separated list of  users.
              The  list can consist of user names or user id numbers.  Perfor‐
              mance of the command can be measurably improved for systems with
              large numbers of jobs when a single user is specified.
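
              For example (the user names are hypothetical):

                     $ squeue --user=alice,bob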


       --usage
              Print a brief help message listing the squeue options.


       -v, --verbose
              Report details of squeue's actions.


       -V, --version
              Print version information and exit.


       -w <hostlist>, --nodelist=<hostlist>
              Report  only  on jobs allocated to the specified node or list of
              nodes.  This may either be the NodeName or NodeHostname  as  de‐
              fined  in  slurm.conf(5)  in  the  event  that  they  differ.  A
              node_name of localhost is mapped to the current host name.
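
              For example (the node names are hypothetical; a hostlist expres‐
              sion is accepted):

                     $ squeue --nodelist=node[1-4]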



JOB REASON CODES

       These codes identify the reason that a job is waiting for execution.  A
       job  may be waiting for more than one reason, in which case only one of
       those reasons is displayed.

       AssociationJobLimit   The job's association has reached its maximum job
                             count.

       AssociationResourceLimit
                             The  job's  association has reached some resource
                             limit.

       AssociationTimeLimit  The job's association has reached its time limit.

       BadConstraints        The job's constraints can not be satisfied.

       BeginTime             The job's earliest start time has  not  yet  been
                             reached.

       Cleaning              The  job  is being requeued and still cleaning up
                             from its previous execution.

       Dependency            This job is waiting for a dependent job  to  com‐
                             plete.

       FrontEndDown          No  front  end  node is available to execute this
                             job.

       InactiveLimit         The job reached the system InactiveLimit.

       InvalidAccount        The job's account is invalid.

       InvalidQOS            The job's QOS is invalid.

       JobHeldAdmin          The job is held by a system administrator.

       JobHeldUser           The job is held by the user.

       JobLaunchFailure      The job could not be launched.  This may  be  due
                             to  a  file system problem, invalid program name,
                             etc.

       Licenses              The job is waiting for a license.

       NodeDown              A node required by the job is down.

       NonZeroExitCode       The job terminated with a non-zero exit code.

       PartitionDown         The partition required by this job is in  a  DOWN
                             state.

       PartitionInactive     The partition required by this job is in an Inac‐
                             tive state and not able to start jobs.

       PartitionNodeLimit    The number of nodes required by this job is  out‐
                             side of its partition's current limits.  Can also
                             indicate that required nodes are DOWN or DRAINED.

       PartitionTimeLimit    The job's time limit exceeds its partition's cur‐
                             rent time limit.

       Priority              One  or  more higher priority jobs exist for this
                             partition or advanced reservation.

       Prolog                Its PrologSlurmctld program is still running.

       QOSJobLimit           The job's QOS has reached its maximum job count.

       QOSResourceLimit      The job's QOS has reached some resource limit.

       QOSTimeLimit          The job's QOS has reached its time limit.

       ReqNodeNotAvail       Some node specifically required by the job is not
                             currently  available.   The node may currently be
                             in use, reserved for another job, in an  advanced
                             reservation,  DOWN,  DRAINED,  or not responding.
                             Nodes which are DOWN, DRAINED, or not  responding
                             will  be identified as part of the job's "reason"
                             field as "UnavailableNodes". Such nodes will typ‐
                             ically  require  the intervention of a system ad‐
                             ministrator to make available.

       Reservation           The job is waiting for its advanced  reservation
                             to become available.

       Resources             The job is waiting for resources to become avail‐
                             able.

       SystemFailure         Failure of the Slurm system, a file  system,  the
                             network, etc.

       TimeLimit             The job exhausted its time limit.

       QOSUsageThreshold     Required QOS threshold has been breached.

       WaitingForScheduling  No reason has been set for this job yet.  Waiting
                             for the scheduler to  determine  the  appropriate
                             reason.



JOB STATE CODES

       Jobs  typically pass through several states in the course of their exe‐
       cution.  The typical states are PENDING, RUNNING,  SUSPENDED,  COMPLET‐
       ING, and COMPLETED.  An explanation of each state follows.

       BF  BOOT_FAIL       Job terminated due to launch failure, typically due
                           to a hardware failure (e.g. unable to boot the node
                           or block and the job can not be requeued).

       CA  CANCELLED       Job  was explicitly cancelled by the user or system
                           administrator.  The job may or may  not  have  been
                           initiated.

       CD  COMPLETED       Job  has terminated all processes on all nodes with
                           an exit code of zero.

       CF  CONFIGURING     Job has been allocated resources, but  is  waiting
                           for them to become ready for use (e.g. booting).

       CG  COMPLETING      Job is in the process of completing. Some processes
                           on some nodes may still be active.

       DL  DEADLINE        Job terminated on deadline.

       F   FAILED          Job terminated with non-zero  exit  code  or  other
                           failure condition.

       NF  NODE_FAIL       Job  terminated due to failure of one or more allo‐
                           cated nodes.

       OOM OUT_OF_MEMORY   Job experienced out of memory error.

       PD  PENDING         Job is awaiting resource allocation.

       PR  PREEMPTED       Job terminated due to preemption.

       R   RUNNING         Job currently has an allocation.

       RD  RESV_DEL_HOLD   Job is being held after requested  reservation  was
                           deleted.

       RF  REQUEUE_FED     Job is being requeued by a federation.

       RH  REQUEUE_HOLD    Held job is being requeued.

       RQ  REQUEUED        Completing job is being requeued.

       RS  RESIZING        Job is about to change size.

       RV  REVOKED         Sibling was removed from cluster due to other clus‐
                           ter starting the job.

       SI  SIGNALING       Job is being signaled.

       SE  SPECIAL_EXIT    The job was requeued in a special state. This state
                           can  be set by users, typically in EpilogSlurmctld,
                           if the job has terminated with  a  particular  exit
                           value.

       SO  STAGE_OUT       Job is staging out files.

       ST  STOPPED         Job  has  an  allocation,  but  execution  has been
                           stopped with SIGSTOP signal.  CPUs  have  been  re‐
                           tained by this job.

       S   SUSPENDED       Job  has an allocation, but execution has been sus‐
                           pended and CPUs have been released for other jobs.

       TO  TIMEOUT         Job terminated upon reaching its time limit.



PERFORMANCE

       Executing squeue sends a remote procedure call to slurmctld. If  enough
       calls  from squeue or other Slurm client commands that send remote pro‐
       cedure calls to the slurmctld daemon come in at once, it can result  in
       a  degradation of performance of the slurmctld daemon, possibly result‐
       ing in a denial of service.

       Do not run squeue or other Slurm client commands that send remote  pro‐
       cedure  calls  to  slurmctld  from loops in shell scripts or other pro‐
       grams. Ensure that programs limit calls to squeue to the minimum neces‐
       sary for the information you are trying to gather.
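
       If repeated output is required, prefer a modest polling  interval  over
       a  tight  shell  loop.  For  example (an illustrative sketch; the user
       name and interval are arbitrary), instead of

              $ while true; do squeue -u alice; done

       let squeue itself repeat the query once per minute:

              $ squeue -u alice --iterate=60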



ENVIRONMENT VARIABLES

       Some  squeue  options may be set via environment variables. These envi‐
       ronment variables, along with their corresponding options,  are  listed
       below. (Note: Commandline options will always override these settings.)

       SLURM_BITSTR_LEN    Specifies  the string length to be used for holding
                           a job array's  task  ID  expression.   The  default
                           value  is  64  bytes.   A value of 0 will print the
                           full expression with any length  required.   Larger
                           values may adversely impact the application perfor‐
                           mance.

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report  time  stamps.  A
                           value  of  standard,  the  default value, generates
                           output            in            the            form
                           "year-month-dateThour:minute:second".   A  value of
                           relative returns only "hour:minute:second"  for the
                           current  day.   For other dates in the current year
                           it prints the "hour:minute"  preceded  by  "Tomorr"
                           (tomorrow),  "Ystday"  (yesterday), the name of the
                           day for the coming week (e.g. "Mon", "Tue",  etc.),
                           otherwise  the  date  (e.g.  "25  Apr").  For other
                           years it returns a date, month and year  without  a
                           time  (e.g.   "6 Jun 2012"). All of the time stamps
                           use a 24 hour format.

                           A valid strftime() format can  also  be  specified.
                           For example, a value of "%a %T" will report the day
                           of the week and a time stamp (e.g. "Mon 12:34:56").
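
                           For example (an illustrative sketch), to select the
                           relative format for a single invocation:

                                  $ SLURM_TIME_FORMAT=relative squeue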

       SQUEUE_ACCOUNT      -A <account_list>, --account=<account_list>

       SQUEUE_ALL          -a, --all

       SQUEUE_ARRAY        -r, --array

       SQUEUE_NAMES        --name=<name_list>

       SQUEUE_FEDERATION   --federation

       SQUEUE_FORMAT       -o <output_format>, --format=<output_format>

       SQUEUE_FORMAT2      -O <output_format>, --Format=<output_format>

       SQUEUE_LICENSES     -L <license_list>, --licenses=<license_list>

       SQUEUE_LOCAL        --local

       SQUEUE_PARTITION    -p <part_list>, --partition=<part_list>

       SQUEUE_PRIORITY     -P, --priority

       SQUEUE_QOS          -q <qos_list>, --qos=<qos_list>

       SQUEUE_SIBLING      --sibling

       SQUEUE_SORT         -S <sort_list>, --sort=<sort_list>

       SQUEUE_STATES       -t <state_list>, --states=<state_list>

       SQUEUE_USERS        -u <user_list>, --user=<user_list>



EXAMPLES

       Print the jobs scheduled in the debug partition and  in  the  COMPLETED
       state in the format with six right justified digits for the job id fol‐
       lowed by the priority with an arbitrary field size:

              $ squeue -p debug -t COMPLETED -o "%.6i %p"
               JOBID PRIORITY
               65543 99993
               65544 99992
               65545 99991


       Print the job steps in the debug partition sorted by user:

              $ squeue -s -p debug -S u
                STEPID        NAME PARTITION     USER      TIME NODELIST
               65552.1       test1     debug    alice      0:23 dev[1-4]
               65562.2     big_run     debug      bob      0:18 dev22
               65550.1      param1     debug  candice   1:43:21 dev[6-12]


       Print information only about jobs 12345, 12346 and 12348:

              $ squeue --jobs 12345,12346,12348
               JOBID PARTITION NAME USER ST  TIME  NODES NODELIST(REASON)
               12345     debug job1 dave  R   0:21     4 dev[9-12]
               12346     debug job2 dave PD   0:00     8 (Resources)
               12348     debug job3 ed   PD   0:00     4 (Priority)


       Print information only about job step 65552.1:

              $ squeue --steps 65552.1
                STEPID     NAME PARTITION    USER    TIME  NODELIST
               65552.1    test2     debug   alice   12:49  dev[1-4]
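

       Print pending jobs of user alice (a hypothetical user), one array
       element per line:

              $ squeue --states=PENDING --user=alice --array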



COPYING

       Copyright (C) 2002-2007 The Regents of the  University  of  California.
       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2016 SchedMD LLC.

       This  file  is  part  of Slurm, a resource management program.  For de‐
       tails, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it  under
       the  terms  of  the GNU General Public License as published by the Free
       Software Foundation; either version 2 of the License, or (at  your  op‐
       tion) any later version.

       Slurm  is  distributed  in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of  MERCHANTABILITY  or
       FITNESS  FOR  A PARTICULAR PURPOSE.  See the GNU General Public License
       for more details.


SEE ALSO

       scancel(1), scontrol(1), sinfo(1),  srun(1),  slurm_load_ctl_conf  (3),
       slurm_load_jobs (3), slurm_load_node (3), slurm_load_partitions (3)


April 2021                      Slurm Commands                       squeue(1)