squeue(1)                       Slurm Commands                       squeue(1)



NAME

       squeue - view information about jobs located in the Slurm scheduling
       queue.



SYNOPSIS

       squeue [OPTIONS...]



DESCRIPTION

       squeue is used to view job and job step information for jobs managed
       by Slurm.



OPTIONS

       -A <account_list>, --account=<account_list>
              Specify the accounts of the jobs to view. Accepts a comma
              separated list of account names. This has no effect when
              listing job steps.


       -a, --all
              Display information about jobs and job steps in all
              partitions. This causes information to be displayed about
              partitions that are configured as hidden and partitions that
              are unavailable to the user's group.


       -r, --array
              Display one job array element per line.  Without this
              option, the display will be optimized for use with job
              arrays (pending job array elements will be combined on one
              line of output with the array index values printed using a
              regular expression).


       --array-unique
              Display one unique pending job array element per line.
              Without this option, the pending job array elements will be
              grouped into the master array job to optimize the display.
              This can also be set with the environment variable
              SQUEUE_ARRAY_UNIQUE.


       --federation
              Show jobs from the federation if a member of one.


       -h, --noheader
              Do not print a header on the output.


       --help Print a help message describing all squeue options.


       --hide Do not display information about jobs and job steps in all
              partitions.  By default, information about partitions that
              are configured as hidden or are not available to the user's
              group is not displayed.


       -i <seconds>, --iterate=<seconds>
              Repeatedly gather and report the requested information at
              the interval specified (in seconds).  By default, prints a
              time stamp with the header.

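              For example, to redraw the job listing every 10 seconds (the
              interval here is arbitrary):

              # squeue --iterate=10
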
       -j <job_id_list>, --jobs=<job_id_list>
              Requests a comma separated list of job IDs to display.
              Defaults to all jobs.  The --jobs=<job_id_list> option may
              be used in conjunction with the --steps option to print step
              information about specific jobs.  Note: If a list of job IDs
              is provided, the jobs are displayed even if they are on
              hidden partitions.  Since this option's argument is
              optional, for proper parsing the single letter option must
              be followed immediately with the value and not include a
              space between them.  For example "-j1008" and not
              "-j 1008".  The job ID format is "job_id[_array_id]".
              Performance of the command can be measurably improved for
              systems with large numbers of jobs when a single job ID is
              specified.  By default, this field size will be limited to
              64 bytes.  Use the environment variable SLURM_BITSTR_LEN to
              specify larger field sizes.

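              For example, to display two specific jobs and one job array
              element (the job IDs here are illustrative):

              # squeue -j1008,1009,1010_7
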
       --local
              Show only jobs local to this cluster. Ignore other clusters
              in this federation (if any). Overrides --federation.


       -l, --long
              Report more of the available information for the selected
              jobs or job steps, subject to any constraints specified.


       -L, --licenses=<license_list>
              Request jobs requesting or using one or more of the named
              licenses.  The license list consists of a comma separated
              list of license names.


       --me   Equivalent to --user=<my username>.


       -M, --clusters=<cluster_name>
              Clusters to issue commands to.  Multiple cluster names may
              be comma separated.  A value of 'all' will query all
              clusters.  This option implicitly sets the --local option.


       -n, --name=<name_list>
              Request jobs or job steps having one of the specified names.
              The list consists of a comma separated list of job names.


       --noconvert
              Don't convert units from their original type (e.g. 2048M
              won't be converted to 2G).


       -o <output_format>, --format=<output_format>
              Specify the information to be displayed, its size and
              position (right or left justified).  Also see the
              -O <output_format>, --Format=<output_format> option
              described below (which supports less flexibility in
              formatting, but supports access to all fields).  If the
              command is executed in a federated cluster environment and
              information about more than one cluster is to be displayed
              and the -h, --noheader option is used, then the cluster name
              will be displayed before the default output formats shown
              below.

              The default formats with various options are:

              default        "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R"

              -l, --long     "%.18i %.9P %.8j %.8u %.8T %.10M %.9l %.6D %R"

              -s, --steps    "%.15i %.8j %.9P %.8u %.9M %N"


              The format of each field is "%[[.]size]type[suffix]"

                 size   Minimum field size. If no size is specified,
                        whatever is needed to print the information will
                        be used.

                 .      Indicates the output should be right justified
                        and size must be specified.  By default output is
                        left justified.

                 suffix Arbitrary string to append to the end of the
                        field.

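              For example, a custom format built from the type
              specifications listed below (the field widths here are
              arbitrary):

              # squeue -o "%.10i %.9P %.20j %.8u %.8T %.10M %R"
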
              Note that many of these type specifications are valid only
              for jobs while others are valid only for job steps.  Valid
              type specifications include:


              %all  Print all fields available for this data type with a
                    vertical bar separating each field.

              %a    Account associated with the job.  (Valid for jobs
                    only)

              %A    Number of tasks created by a job step.  This reports
                    the value of the srun --ntasks option.  (Valid for job
                    steps only)

              %A    Job id.  This will have a unique value for each
                    element of job arrays.  (Valid for jobs only)

              %B    Executing (batch) host. For an allocated session, this
                    is the host on which the session is executing (i.e.
                    the node from which the srun or the salloc command was
                    executed). For a batch job, this is the node executing
                    the batch script. In the case of a typical Linux
                    cluster, this would be the compute node zero of the
                    allocation. In the case of a Cray ALPS system, this
                    would be the front-end host whose slurmd daemon
                    executes the job script.

              %c    Minimum number of CPUs (processors) per node requested
                    by the job.  This reports the value of the srun
                    --mincpus option with a default value of zero.  (Valid
                    for jobs only)

              %C    Number of CPUs (processors) requested by the job or
                    allocated to it if already running.  As a job is
                    completing this number will reflect the current number
                    of CPUs allocated.  (Valid for jobs only)

              %d    Minimum size of temporary disk space (in MB) requested
                    by the job.  (Valid for jobs only)

              %D    Number of nodes allocated to the job or the minimum
                    number of nodes required by a pending job. The actual
                    number of nodes allocated to a pending job may exceed
                    this number if the job specified a node range count
                    (e.g. minimum and maximum node counts) or the job
                    specifies a processor count instead of a node count.
                    As a job is completing this number will reflect the
                    current number of nodes allocated.  (Valid for jobs
                    only)

              %e    Time at which the job ended or is expected to end
                    (based upon its time limit).  (Valid for jobs only)

              %E    Job dependencies remaining. This job will not begin
                    execution until these dependent jobs complete. In the
                    case of a job that can not run due to job dependencies
                    never being satisfied, the full original job
                    dependency specification will be reported. A value of
                    NULL implies this job has no dependencies.  (Valid for
                    jobs only)

              %f    Features required by the job.  (Valid for jobs only)

              %F    Job array's job ID. This is the base job ID.  For
                    non-array jobs, this is the job ID.  (Valid for jobs
                    only)

              %g    Group name of the job.  (Valid for jobs only)

              %G    Group ID of the job.  (Valid for jobs only)

              %h    Can the compute resources allocated to the job be over
                    subscribed by other jobs.  The resources to be over
                    subscribed can be nodes, sockets, cores, or
                    hyperthreads depending upon configuration.  The value
                    will be "YES" if the job was submitted with the
                    oversubscribe option or the partition is configured
                    with OverSubscribe=Force, "NO" if the job requires
                    exclusive node access, "USER" if the allocated compute
                    nodes are dedicated to a single user, "MCS" if the
                    allocated compute nodes are dedicated to a single
                    security class (See MCSPlugin and MCSParameters
                    configuration parameters for more information), "OK"
                    otherwise (typically allocated dedicated CPUs).
                    (Valid for jobs only)

              %H    Number of sockets per node requested by the job.  This
                    reports the value of the srun --sockets-per-node
                    option.  When --sockets-per-node has not been set, "*"
                    is displayed.  (Valid for jobs only)

              %i    Job or job step id.  In the case of job arrays, the
                    job ID format will be of the form
                    "<base_job_id>_<index>".  By default, the job array
                    index field size will be limited to 64 bytes.  Use the
                    environment variable SLURM_BITSTR_LEN to specify
                    larger field sizes.  (Valid for jobs and job steps)
                    In the case of heterogeneous job allocations, the job
                    ID format will be of the form "#+#" where the first
                    number is the "heterogeneous job leader" and the
                    second number the zero origin offset for each
                    component of the job.

              %I    Number of cores per socket requested by the job.  This
                    reports the value of the srun --cores-per-socket
                    option.  When --cores-per-socket has not been set, "*"
                    is displayed.  (Valid for jobs only)

              %j    Job or job step name.  (Valid for jobs and job steps)

              %J    Number of threads per core requested by the job.  This
                    reports the value of the srun --threads-per-core
                    option.  When --threads-per-core has not been set, "*"
                    is displayed.  (Valid for jobs only)

              %k    Comment associated with the job.  (Valid for jobs
                    only)

              %K    Job array index.  By default, this field size will be
                    limited to 64 bytes.  Use the environment variable
                    SLURM_BITSTR_LEN to specify larger field sizes.
                    (Valid for jobs only)

              %l    Time limit of the job or job step in
                    days-hours:minutes:seconds.  The value may be
                    "NOT_SET" if not yet established or "UNLIMITED" for no
                    limit.  (Valid for jobs and job steps)

              %L    Time left for the job to execute in
                    days-hours:minutes:seconds.  This value is calculated
                    by subtracting the job's time used from its time
                    limit.  The value may be "NOT_SET" if not yet
                    established or "UNLIMITED" for no limit.  (Valid for
                    jobs only)

              %m    Minimum size of memory (in MB) requested by the job.
                    (Valid for jobs only)

              %M    Time used by the job or job step in
                    days-hours:minutes:seconds.  The days and hours are
                    printed only as needed.  For job steps this field
                    shows the elapsed time since execution began and thus
                    will be inaccurate for job steps which have been
                    suspended.  Clock skew between nodes in the cluster
                    will cause the time to be inaccurate.  If the time is
                    obviously wrong (e.g. negative), it displays as
                    "INVALID".  (Valid for jobs and job steps)

              %n    List of node names explicitly requested by the job.
                    (Valid for jobs only)

              %N    List of nodes allocated to the job or job step. In the
                    case of a COMPLETING job, the list of nodes will
                    comprise only those nodes that have not yet been
                    returned to service.  (Valid for jobs and job steps)

              %o    The command to be executed.

              %O    Are contiguous nodes requested by the job.  (Valid for
                    jobs only)

              %p    Priority of the job (converted to a floating point
                    number between 0.0 and 1.0).  Also see %Q.  (Valid for
                    jobs only)

              %P    Partition of the job or job step.  (Valid for jobs and
                    job steps)

              %q    Quality of service associated with the job.  (Valid
                    for jobs only)

              %Q    Priority of the job (generally a very large unsigned
                    integer).  Also see %p.  (Valid for jobs only)

              %r    The reason a job is in its current state.  See the JOB
                    REASON CODES section below for more information.
                    (Valid for jobs only)

              %R    For pending jobs: the reason a job is waiting for
                    execution is printed within parenthesis.  For
                    terminated jobs with failure: an explanation as to why
                    the job failed is printed within parenthesis.  For all
                    other job states: the list of allocated nodes.  See
                    the JOB REASON CODES section below for more
                    information.  (Valid for jobs only)

              %s    Node selection plugin specific data for a job.
                    Possible data includes: Geometry requirement of
                    resource allocation (X,Y,Z dimensions), Connection
                    type (TORUS, MESH, or NAV == torus else mesh), Permit
                    rotation of geometry (yes or no), Node use (VIRTUAL or
                    COPROCESSOR), etc.  (Valid for jobs only)

              %S    Actual or expected start time of the job or job step.
                    (Valid for jobs and job steps)

              %t    Job state in compact form.  See the JOB STATE CODES
                    section below for a list of possible states.  (Valid
                    for jobs only)

              %T    Job state in extended form.  See the JOB STATE CODES
                    section below for a list of possible states.  (Valid
                    for jobs only)

              %u    User name for a job or job step.  (Valid for jobs and
                    job steps)

              %U    User ID for a job or job step.  (Valid for jobs and
                    job steps)

              %v    Reservation for the job.  (Valid for jobs only)

              %V    The job's submission time.

              %w    Workload Characterization Key (wckey).  (Valid for
                    jobs only)

              %W    Licenses reserved for the job.  (Valid for jobs only)

              %x    List of node names explicitly excluded by the job.
                    (Valid for jobs only)

              %X    Count of cores reserved on each node for system use
                    (core specialization).  (Valid for jobs only)

              %y    Nice value (adjustment to a job's scheduling
                    priority).  (Valid for jobs only)

              %Y    For pending jobs, a list of the nodes expected to be
                    used when the job is started.

              %z    Number of requested sockets, cores, and threads
                    (S:C:T) per node for the job.  When (S:C:T) has not
                    been set, "*" is displayed.  (Valid for jobs only)

              %Z    The job's working directory.


       -O <output_format>, --Format=<output_format>
              Specify the information to be displayed.  Also see the
              -o <output_format>, --format=<output_format> option
              described above (which supports greater flexibility in
              formatting, but does not support access to all fields
              because we ran out of letters).  Requests a comma separated
              list of job information to be displayed.


              The format of each field is "type[:[.][size][suffix]]"

                 size   Minimum field size. If no size is specified, 20
                        characters will be allocated to print the
                        information.

                 .      Indicates the output should be right justified
                        and size must be specified.  By default output is
                        left justified.

                 suffix Arbitrary string to append to the end of the
                        field.

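              For example, a rough equivalent of part of the default
              output built from named fields (the field widths here are
              arbitrary):

              # squeue -O "JobID:.10,Partition,UserName,StateCompact"
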
              Note that many of these type specifications are valid only
              for jobs while others are valid only for job steps.  Valid
              type specifications include:


              Account
                     Print the account associated with the job.  (Valid
                     for jobs only)

              AccrueTime
                     Print the accrue time associated with the job.
                     (Valid for jobs only)

              admin_comment
                     Administrator comment associated with the job.
                     (Valid for jobs only)

              AllocNodes
                     Print the nodes allocated to the job.  (Valid for
                     jobs only)

              AllocSID
                     Print the session ID used to submit the job.  (Valid
                     for jobs only)

              ArrayJobID
                     Prints the job ID of the job array.  (Valid for jobs
                     and job steps)

              ArrayTaskID
                     Prints the task ID of the job array.  (Valid for jobs
                     and job steps)

              AssocID
                     Prints the ID of the job association.  (Valid for
                     jobs only)

              BatchFlag
                     Prints whether the batch flag has been set.  (Valid
                     for jobs only)

              BatchHost
                     Executing (batch) host. For an allocated session,
                     this is the host on which the session is executing
                     (i.e. the node from which the srun or the salloc
                     command was executed). For a batch job, this is the
                     node executing the batch script. In the case of a
                     typical Linux cluster, this would be the compute node
                     zero of the allocation. In the case of a Cray ALPS
                     system, this would be the front-end host whose slurmd
                     daemon executes the job script.  (Valid for jobs
                     only)

              BoardsPerNode
                     Prints the number of boards per node allocated to the
                     job.  (Valid for jobs only)

              BurstBuffer
                     Burst Buffer specification (Valid for jobs only)

              BurstBufferState
                     Burst Buffer state (Valid for jobs only)

              Cluster
                     Name of the cluster that is running the job or job
                     step.

              ClusterFeature
                     Cluster features required by the job.  (Valid for
                     jobs only)

              Command
                     The command to be executed.  (Valid for jobs only)

              Comment
                     Comment associated with the job.  (Valid for jobs
                     only)

              Contiguous
                     Are contiguous nodes requested by the job.  (Valid
                     for jobs only)

              Cores
                     Number of cores per socket requested by the job.
                     This reports the value of the srun --cores-per-socket
                     option.  When --cores-per-socket has not been set,
                     "*" is displayed.  (Valid for jobs only)

              CoreSpec
                     Count of cores reserved on each node for system use
                     (core specialization).  (Valid for jobs only)

              CPUFreq
                     Prints the frequency of the allocated CPUs.  (Valid
                     for job steps only)

              cpus-per-task
                     Prints the number of CPUs per task allocated to the
                     job.  (Valid for jobs only)

              cpus-per-tres
                     Print the CPUs required per trackable resource
                     allocated to the job or job step.

              Deadline
                     Prints the deadline assigned to the job.  (Valid for
                     jobs only)

              DelayBoot
                     Delay boot time.  (Valid for jobs only)

              Dependency
                     Job dependencies remaining. This job will not begin
                     execution until these dependent jobs complete. In the
                     case of a job that can not run due to job
                     dependencies never being satisfied, the full original
                     job dependency specification will be reported. A
                     value of NULL implies this job has no dependencies.
                     (Valid for jobs only)

              DerivedEC
                     Derived exit code for the job, which is the highest
                     exit code of any job step.  (Valid for jobs only)

              EligibleTime
                     Time the job is eligible for running.  (Valid for
                     jobs only)

              EndTime
                     The time of job termination, actual or expected.
                     (Valid for jobs only)

              exit_code
                     The exit code for the job.  (Valid for jobs only)

              Feature
                     Features required by the job.  (Valid for jobs only)

              GroupID
                     Group ID of the job.  (Valid for jobs only)

              GroupName
                     Group name of the job.  (Valid for jobs only)

              HetJobID
                     Job ID of the heterogeneous job leader.

              HetJobIDSet
                     Expression identifying all component job IDs within a
                     heterogeneous job.

              HetJobOffset
                     Zero origin offset within a collection of
                     heterogeneous job components.

              JobArrayID
                     Job array's job ID. This is the base job ID.  For
                     non-array jobs, this is the job ID.  (Valid for jobs
                     only)

              JobID
                     Job ID.  This will have a unique value for each
                     element of job arrays and each component of
                     heterogeneous jobs.  (Valid for jobs only)

              LastSchedEval
                     Prints the last time the job was evaluated for
                     scheduling.  (Valid for jobs only)

              Licenses
                     Licenses reserved for the job.  (Valid for jobs only)

              MaxCPUs
                     Prints the max number of CPUs allocated to the job.
                     (Valid for jobs only)

              MaxNodes
                     Prints the max number of nodes allocated to the job.
                     (Valid for jobs only)

              MCSLabel
                     Prints the MCS_label of the job.  (Valid for jobs
                     only)

              mem-per-tres
                     Print the memory (in MB) required per trackable
                     resource allocated to the job or job step.

              MinCpus
                     Minimum number of CPUs (processors) per node
                     requested by the job.  This reports the value of the
                     srun --mincpus option with a default value of zero.
                     (Valid for jobs only)

              MinMemory
                     Minimum size of memory (in MB) requested by the job.
                     (Valid for jobs only)

              MinTime
                     Minimum time limit of the job (Valid for jobs only)

              MinTmpDisk
                     Minimum size of temporary disk space (in MB)
                     requested by the job.  (Valid for jobs only)

              Name
                     Job or job step name.  (Valid for jobs and job steps)

              Network
                     The network that the job is running on.  (Valid for
                     jobs and job steps)

              Nice
                     Nice value (adjustment to a job's scheduling
                     priority).  (Valid for jobs only)

              NodeList
                     List of nodes allocated to the job or job step. In
                     the case of a COMPLETING job, the list of nodes will
                     comprise only those nodes that have not yet been
                     returned to service.  (Valid for jobs only)

              Nodes
                     List of nodes allocated to the job or job step. In
                     the case of a COMPLETING job, the list of nodes will
                     comprise only those nodes that have not yet been
                     returned to service.  (Valid for job steps only)

              NTPerBoard
                     The number of tasks per board allocated to the job.
                     (Valid for jobs only)

              NTPerCore
                     The number of tasks per core allocated to the job.
                     (Valid for jobs only)

              NTPerNode
                     The number of tasks per node allocated to the job.
                     (Valid for jobs only)

              NTPerSocket
                     The number of tasks per socket allocated to the job.
                     (Valid for jobs only)

              NumCPUs
                     Number of CPUs (processors) requested by the job or
                     allocated to it if already running.  As a job is
                     completing, this number will reflect the current
                     number of CPUs allocated.  (Valid for jobs and job
                     steps)

              NumNodes
                     Number of nodes allocated to the job or the minimum
                     number of nodes required by a pending job. The actual
                     number of nodes allocated to a pending job may exceed
                     this number if the job specified a node range count
                     (e.g. minimum and maximum node counts) or the job
                     specifies a processor count instead of a node count.
                     As a job is completing this number will reflect the
                     current number of nodes allocated.  (Valid for jobs
                     only)

              NumTasks
                     Number of tasks requested by a job or job step.  This
                     reports the value of the --ntasks option.  (Valid for
                     jobs and job steps)

              Origin
                     Cluster name where federated job originated from.
                     (Valid for federated jobs only)

              OriginRaw
                     Cluster ID where federated job originated from.
                     (Valid for federated jobs only)

              OverSubscribe
                     Can the compute resources allocated to the job be
                     over subscribed by other jobs.  The resources to be
                     over subscribed can be nodes, sockets, cores, or
                     hyperthreads depending upon configuration.  The value
                     will be "YES" if the job was submitted with the
                     oversubscribe option or the partition is configured
                     with OverSubscribe=Force, "NO" if the job requires
                     exclusive node access, "USER" if the allocated
                     compute nodes are dedicated to a single user, "MCS"
                     if the allocated compute nodes are dedicated to a
                     single security class (See MCSPlugin and
                     MCSParameters configuration parameters for more
                     information), "OK" otherwise (typically allocated
                     dedicated CPUs).  (Valid for jobs only)

              Partition
                     Partition of the job or job step.  (Valid for jobs
                     and job steps)

              PreemptTime
                     The preempt time for the job.  (Valid for jobs only)

              PendingTime
                     The time (in seconds) between start time and submit
                     time of the job.  If the job has not started yet,
                     then the time (in seconds) between now and the submit
                     time of the job.  (Valid for jobs only)

              Priority
                     Priority of the job (converted to a floating point
                     number between 0.0 and 1.0).  Also see prioritylong.
                     (Valid for jobs only)

              PriorityLong
                     Priority of the job (generally a very large unsigned
                     integer).  Also see priority.  (Valid for jobs only)

              Profile
                     Profile of the job.  (Valid for jobs only)

              QOS
                     Quality of service associated with the job.  (Valid
                     for jobs only)

              Reason
                     The reason a job is in its current state.  See the
                     JOB REASON CODES section below for more information.
                     (Valid for jobs only)

              ReasonList
                     For pending jobs: the reason a job is waiting for
                     execution is printed within parenthesis.  For
                     terminated jobs with failure: an explanation as to
                     why the job failed is printed within parenthesis.
                     For all other job states: the list of allocated
                     nodes.  See the JOB REASON CODES section below for
                     more information.  (Valid for jobs only)

              Reboot
                     Indicates if the allocated nodes should be rebooted
                     before starting the job.  (Valid for jobs only)

              ReqNodes
                     List of node names explicitly requested by the job.
                     (Valid for jobs only)

              ReqSwitch
                     The maximum number of switches requested by the job.
                     (Valid for jobs only)

              Requeue
                     Prints whether the job will be requeued on failure.
                     (Valid for jobs only)

              Reservation
                     Reservation for the job.  (Valid for jobs only)

              ResizeTime
                     The time of the job's most recent size change.
                     (Valid for jobs only)

              RestartCnt
                     The number of restarts for the job.  (Valid for jobs
                     only)

              ResvPort
                     Reserved ports of the job.  (Valid for job steps
                     only)

              SchedNodes
                     For pending jobs, a list of the nodes expected to be
                     used when the job is started.  (Valid for jobs only)

              SCT
                     Number of requested sockets, cores, and threads
                     (S:C:T) per node for the job.  When (S:C:T) has not
                     been set, "*" is displayed.  (Valid for jobs only)

              SelectJobInfo
                     Node selection plugin specific data for a job.
                     Possible data includes: Geometry requirement of
                     resource allocation (X,Y,Z dimensions), Connection
                     type (TORUS, MESH, or NAV == torus else mesh), Permit
                     rotation of geometry (yes or no), Node use (VIRTUAL
                     or COPROCESSOR), etc.  (Valid for jobs only)

              SiblingsActive
                     Cluster names of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsActiveRaw
                     Cluster IDs of where federated sibling jobs exist.
                     (Valid for federated jobs only)

              SiblingsViable
                     Cluster names of where federated sibling jobs are
                     viable to run.  (Valid for federated jobs only)

              SiblingsViableRaw
                     Cluster IDs of where federated sibling jobs are
                     viable to run.  (Valid for federated jobs only)

              Sockets
                     Number of sockets per node requested by the job.
                     This reports the value of the srun --sockets-per-node
                     option.  When --sockets-per-node has not been set,
                     "*" is displayed.  (Valid for jobs only)

              SPerBoard
                     Number of sockets per board allocated to the job.
                     (Valid for jobs only)

              StartTime
                     Actual or expected start time of the job or job step.
                     (Valid for jobs and job steps)

              State
                     Job state in extended form.  See the JOB STATE CODES
                     section below for a list of possible states.  (Valid
                     for jobs only)

              StateCompact
                     Job state in compact form.  See the JOB STATE CODES
                     section below for a list of possible states.  (Valid
                     for jobs only)

              STDERR
                     The directory for standard error output.  (Valid for
                     jobs only)

              STDIN
                     The directory for standard input.  (Valid for jobs
                     only)

              STDOUT
                     The directory for standard output.  (Valid for jobs
                     only)

              StepID
                     Job or job step ID.  In the case of job arrays, the
                     job ID format will be of the form
                     "<base_job_id>_<index>".  (Valid for job steps only)

              StepName
                     Job step name.  (Valid for job steps only)

              StepState
                     The state of the job step.  (Valid for job steps
                     only)

              SubmitTime
                     The time at which the job was submitted.  (Valid for
                     jobs only)

              system_comment
                     System comment associated with the job.  (Valid for
                     jobs only)

              Threads
                     Number of threads per core requested by the job.
                     This reports the value of the srun
                     --threads-per-core option.  When --threads-per-core
                     has not been set, "*" is displayed.  (Valid for jobs
                     only)

              TimeLeft
                     Time left for the job to execute in
                     days-hours:minutes:seconds.  This value is calculated
                     by subtracting the job's time used from its time
                     limit.  The value may be "NOT_SET" if not yet
                     established or "UNLIMITED" for no limit.  (Valid for
                     jobs only)

              TimeLimit
                     Time limit for the job or job step.  (Valid for jobs
                     and job steps)

              TimeUsed
                     Time used by the job or job step in
                     days-hours:minutes:seconds.  The days and hours are
                     printed only as needed.  For job steps this field
                     shows the elapsed time since execution began and thus
                     will be inaccurate for job steps which have been
                     suspended.  Clock skew between nodes in the cluster
                     will cause the time to be inaccurate.  If the time is
                     obviously wrong (e.g. negative), it displays as
                     "INVALID".  (Valid for jobs and job steps)

              tres-alloc
                     Print the trackable resources allocated to the job if
                     running.  If not running, then print the trackable
                     resources requested by the job.

              tres-bind
                     Print the trackable resources task binding requested
                     by the job or job step.

              tres-freq
                     Print the trackable resources frequencies requested
                     by the job or job step.

              tres-per-job
                     Print the trackable resources requested by the job.

              tres-per-node
                     Print the trackable resources per node requested by
                     the job or job step.

              tres-per-socket
                     Print the trackable resources per socket requested by
                     the job or job step.

              tres-per-step
                     Print the trackable resources requested by the job
                     step.

              tres-per-task
                     Print the trackable resources per task requested by
                     the job or job step.

              UserID
                     User ID for a job or job step.  (Valid for jobs and
                     job steps)

              UserName
                     User name for a job or job step.  (Valid for jobs and
                     job steps)

              Wait4Switch
                     The amount of time to wait for the desired number of
                     switches.  (Valid for jobs only)

              WCKey
                     Workload Characterization Key (wckey).  (Valid for
                     jobs only)

              WorkDir
                     The job's working directory.  (Valid for jobs only)


       -p <part_list>, --partition=<part_list>
              Specify the partitions of the jobs or steps to view.
              Accepts a comma separated list of partition names.


       -P, --priority
              For pending jobs submitted to multiple partitions, list the
              job once per partition. In addition, if jobs are sorted by
              priority, consider both the partition and job priority. This
              option can be used to produce a list of pending jobs in the
              same order considered for scheduling by Slurm with
              appropriate additional options (e.g. "--sort=-p,i
              --states=PD").

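              For example, to list pending jobs in the order considered
              for scheduling:

              # squeue -P --sort=-p,i --states=PD
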
       -q <qos_list>, --qos=<qos_list>
              Specify the qos(s) of the jobs or steps to view. Accepts a
              comma separated list of qos's.


       -R, --reservation=<reservation_name>
              Specify the reservation of the jobs to view.


       -s, --steps
              Specify the job steps to view.  This flag indicates that a
              comma separated list of job steps to view follows without an
              equal sign (see examples).  The job step format is
              "job_id[_array_id].step_id". Defaults to all job steps.
              Since this option's argument is optional, for proper parsing
              the single letter option must be followed immediately with
              the value and not include a space between them. For example
              "-s1008.0" and not "-s 1008.0".

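              For example, to display two steps of the same job (the step
              IDs here are illustrative):

              # squeue -s1008.0,1008.1
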
       --sibling
              Show all sibling jobs on a federated cluster. Implies
              --federation.


       -S <sort_list>, --sort=<sort_list>
              Specification of the order in which records should be
              reported.  This uses the same field specification as the
              <output_format>.  The long format option "cluster" can also
              be used to sort jobs or job steps by cluster name (e.g.
              federated jobs).  Multiple sorts may be performed by listing
              multiple sort fields separated by commas.  The field
              specifications may be preceded by "+" or "-" for ascending
              (default) and descending order respectively.  For example, a
              sort value of "P,U" will sort the records by partition name
              then by user id.  The default value of sort for jobs is
              "P,t,-p" (increasing partition name then within a given
              partition by increasing job state and then decreasing
              priority).  The default value of sort for job steps is "P,i"
              (increasing partition name then within a given partition by
              increasing step id).

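              For example, to sort jobs by user name and, within each
              user, by decreasing time used (the sort fields here are
              chosen arbitrarily):

              # squeue -S "u,-M"
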
       --start
              Report the expected start time and resources to be allocated
              for pending jobs in order of increasing start time.  This is
              equivalent to the following options: --format="%.18i %.9P
              %.8j %.8u %.2t %.19S %.6D %20Y %R", --sort=S and
              --states=PENDING.  Any of these options may be explicitly
              changed as desired by combining the --start option with
              other option values (e.g. to use a different output format).
              The expected start time of pending jobs is only available if
              Slurm is configured to use the backfill scheduling plugin.


       -t <state_list>, --states=<state_list>
              Specify the states of jobs to view.  Accepts a comma
              separated list of state names or "all". If "all" is
              specified then jobs of all states will be reported. If no
              state is specified then pending, running, and completing
              jobs are reported. See the JOB STATE CODES section below for
              a list of valid states.  Both extended and compact forms are
              valid.  Note the <state_list> supplied is case insensitive
              ("pd" and "PD" are equivalent).

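              For example, to report only pending and running jobs using
              the compact state codes:

              # squeue -t PD,R
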
       -u <user_list>, --user=<user_list>
              Request jobs or job steps from a comma separated list of
              users.  The list can consist of user names or user id
              numbers.  Performance of the command can be measurably
              improved for systems with large numbers of jobs when a
              single user is specified.

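              For example, to report the jobs of two users (the user
              names here are illustrative):

              # squeue -u alice,bob
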
       --usage
              Print a brief help message listing the squeue options.


       -v, --verbose
              Report details of squeue's actions.


       -V, --version
              Print version information and exit.


       -w <hostlist>, --nodelist=<hostlist>
              Report only on jobs allocated to the specified node or list
              of nodes.  This may either be the NodeName or NodeHostname
              as defined in slurm.conf(5) in the event that they differ.
              A node_name of localhost is mapped to the current host name.

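              For example, to report only jobs allocated to nodes dev9
              through dev12 (the node names here are illustrative):

              # squeue -w dev[9-12]
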

JOB REASON CODES

       These codes identify the reason that a job is waiting for
       execution.  A job may be waiting for more than one reason, in which
       case only one of those reasons is displayed.

       AssociationJobLimit   The job's association has reached its maximum
                             job count.

       AssociationResourceLimit
                             The job's association has reached some
                             resource limit.

       AssociationTimeLimit  The job's association has reached its time
                             limit.

       BadConstraints        The job's constraints can not be satisfied.

       BeginTime             The job's earliest start time has not yet
                             been reached.

       Cleaning              The job is being requeued and still cleaning
                             up from its previous execution.

       Dependency            This job is waiting for a dependent job to
                             complete.

       FrontEndDown          No front end node is available to execute
                             this job.

       InactiveLimit         The job reached the system InactiveLimit.

       InvalidAccount        The job's account is invalid.

       InvalidQOS            The job's QOS is invalid.

       JobHeldAdmin          The job is held by a system administrator.

       JobHeldUser           The job is held by the user.

       JobLaunchFailure      The job could not be launched.  This may be
                             due to a file system problem, invalid program
                             name, etc.

       Licenses              The job is waiting for a license.

       NodeDown              A node required by the job is down.

       NonZeroExitCode       The job terminated with a non-zero exit code.

       PartitionDown         The partition required by this job is in a
                             DOWN state.

       PartitionInactive     The partition required by this job is in an
                             Inactive state and not able to start jobs.

       PartitionNodeLimit    The number of nodes required by this job is
                             outside of its partition's current limits.
                             Can also indicate that required nodes are
                             DOWN or DRAINED.

       PartitionTimeLimit    The job's time limit exceeds its partition's
                             current time limit.

       Priority              One or more higher priority jobs exist for
                             this partition or advanced reservation.

       Prolog                Its PrologSlurmctld program is still running.

       QOSJobLimit           The job's QOS has reached its maximum job
                             count.

       QOSResourceLimit      The job's QOS has reached some resource
                             limit.

       QOSTimeLimit          The job's QOS has reached its time limit.

       ReqNodeNotAvail       Some node specifically required by the job is
                             not currently available.  The node may
                             currently be in use, reserved for another
                             job, in an advanced reservation, DOWN,
                             DRAINED, or not responding.  Nodes which are
                             DOWN, DRAINED, or not responding will be
                             identified as part of the job's "reason"
                             field as "UnavailableNodes". Such nodes will
                             typically require the intervention of a
                             system administrator to make available.

       Reservation           The job is waiting for its advanced
                             reservation to become available.

       Resources             The job is waiting for resources to become
                             available.

       SystemFailure         Failure of the Slurm system, a file system,
                             the network, etc.

       TimeLimit             The job exhausted its time limit.

       QOSUsageThreshold     Required QOS threshold has been breached.

       WaitingForScheduling  No reason has been set for this job yet.
                             Waiting for the scheduler to determine the
                             appropriate reason.



JOB STATE CODES

       Jobs typically pass through several states in the course of their
       execution.  The typical states are PENDING, RUNNING, SUSPENDED,
       COMPLETING, and COMPLETED.  An explanation of each state follows.

       BF  BOOT_FAIL       Job terminated due to launch failure, typically
                           due to a hardware failure (e.g. unable to boot
                           the node or block and the job can not be
                           requeued).

       CA  CANCELLED       Job was explicitly cancelled by the user or
                           system administrator.  The job may or may not
                           have been initiated.

       CD  COMPLETED       Job has terminated all processes on all nodes
                           with an exit code of zero.

       CF  CONFIGURING     Job has been allocated resources, but is
                           waiting for them to become ready for use (e.g.
                           booting).

       CG  COMPLETING      Job is in the process of completing. Some
                           processes on some nodes may still be active.

       DL  DEADLINE        Job terminated on deadline.

       F   FAILED          Job terminated with non-zero exit code or other
                           failure condition.

       NF  NODE_FAIL       Job terminated due to failure of one or more
                           allocated nodes.

       OOM OUT_OF_MEMORY   Job experienced out of memory error.

       PD  PENDING         Job is awaiting resource allocation.

       PR  PREEMPTED       Job terminated due to preemption.

       R   RUNNING         Job currently has an allocation.

       RD  RESV_DEL_HOLD   Job is held.

       RF  REQUEUE_FED     Job is being requeued by a federation.

       RH  REQUEUE_HOLD    Held job is being requeued.

       RQ  REQUEUED        Completing job is being requeued.

       RS  RESIZING        Job is about to change size.

       RV  REVOKED         Sibling was removed from cluster due to other
                           cluster starting the job.

       SI  SIGNALING       Job is being signaled.

       SE  SPECIAL_EXIT    The job was requeued in a special state. This
                           state can be set by users, typically in
                           EpilogSlurmctld, if the job has terminated with
                           a particular exit value.

       SO  STAGE_OUT       Job is staging out files.

       ST  STOPPED         Job has an allocation, but execution has been
                           stopped with SIGSTOP signal.  CPUs have been
                           retained by this job.

       S   SUSPENDED       Job has an allocation, but execution has been
                           suspended and CPUs have been released for other
                           jobs.

       TO  TIMEOUT         Job terminated upon reaching its time limit.



PERFORMANCE

       Executing squeue sends a remote procedure call to slurmctld. If
       enough calls from squeue or other Slurm client commands that send
       remote procedure calls to the slurmctld daemon come in at once, it
       can result in a degradation of performance of the slurmctld
       daemon, possibly resulting in a denial of service.

       Do not run squeue or other Slurm client commands that send remote
       procedure calls to slurmctld from loops in shell scripts or other
       programs. Ensure that programs limit calls to squeue to the
       minimum necessary for the information you are trying to gather.

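       For example, rather than polling from a shell loop such as

       # while true; do squeue --user=alice; sleep 5; done

       prefer the built-in --iterate option, which reports at the same
       interval from a single squeue invocation (the user name here is
       illustrative):

       # squeue --iterate=5 --user=alice
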

ENVIRONMENT VARIABLES

       Some squeue options may be set via environment variables.  These
       environment variables, along with their corresponding options, are
       listed below. (Note: Commandline options will always override these
       settings.)

       SLURM_BITSTR_LEN    Specifies the string length to be used for
                           holding a job array's task ID expression.  The
                           default value is 64 bytes.  A value of 0 will
                           print the full expression with any length
                           required.  Larger values may adversely impact
                           the application performance.

       SLURM_CLUSTERS      Same as --clusters

       SLURM_CONF          The location of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.
                           A value of standard, the default value,
                           generates output in the form
                           "year-month-dateThour:minute:second".  A value
                           of relative returns only "hour:minute:second"
                           for times on the current day.  For other dates
                           in the current year it prints the "hour:minute"
                           preceded by "Tomorr" (tomorrow), "Ystday"
                           (yesterday), the name of the day for the coming
                           week (e.g. "Mon", "Tue", etc.), otherwise the
                           date (e.g. "25 Apr").  For other years it
                           returns a date month and year without a time
                           (e.g. "6 Jun 2012"). All of the time stamps use
                           a 24 hour format.

                           A valid strftime() format can also be
                           specified.  For example, a value of "%a %T"
                           will report the day of the week and a time
                           stamp (e.g. "Mon 12:34:56").

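                           For example, to apply that strftime() format
                           to a single invocation:

                           # SLURM_TIME_FORMAT="%a %T" squeue
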
       SQUEUE_ACCOUNT      -A <account_list>, --account=<account_list>

       SQUEUE_ALL          -a, --all

       SQUEUE_ARRAY        -r, --array

       SQUEUE_NAMES        --name=<name_list>

       SQUEUE_FEDERATION   --federation

       SQUEUE_FORMAT       -o <output_format>, --format=<output_format>

       SQUEUE_FORMAT2      -O <output_format>, --Format=<output_format>

       SQUEUE_LICENSES     -L <license_list>, --licenses=<license_list>

       SQUEUE_LOCAL        --local

       SQUEUE_PARTITION    -p <part_list>, --partition=<part_list>

       SQUEUE_PRIORITY     -P, --priority

       SQUEUE_QOS          -q <qos_list>, --qos=<qos_list>

       SQUEUE_SIBLING      --sibling

       SQUEUE_SORT         -S <sort_list>, --sort=<sort_list>

       SQUEUE_STATES       -t <state_list>, --states=<state_list>

       SQUEUE_USERS        -u <user_list>, --user=<user_list>



EXAMPLES

       Print the jobs scheduled in the debug partition and in the
       COMPLETED state in the format with six right justified digits for
       the job id followed by the priority with an arbitrary field size:
       # squeue -p debug -t COMPLETED -o "%.6i %p"
        JOBID PRIORITY
        65543 99993
        65544 99992
        65545 99991

       Print the job steps in the debug partition sorted by user:
       # squeue -s -p debug -S u
         STEPID        NAME PARTITION     USER      TIME NODELIST
        65552.1       test1     debug    alice      0:23 dev[1-4]
        65562.2     big_run     debug      bob      0:18 dev22
        65550.1      param1     debug  candice   1:43:21 dev[6-12]

       Print information only about jobs 12345, 12346, and 12348:
       # squeue --jobs 12345,12346,12348
        JOBID PARTITION NAME USER ST  TIME  NODES NODELIST(REASON)
        12345     debug job1 dave  R   0:21     4 dev[9-12]
        12346     debug job2 dave PD   0:00     8 (Resources)
        12348     debug job3 ed   PD   0:00     4 (Priority)

       Print information only about job step 65552.1:
       # squeue --steps 65552.1
         STEPID     NAME PARTITION    USER    TIME  NODELIST
        65552.1    test2     debug   alice   12:49  dev[1-4]



COPYING

       Copyright (C) 2002-2007 The Regents of the University of
       California.  Produced at Lawrence Livermore National Laboratory
       (cf. DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2016 SchedMD LLC.

       This file is part of Slurm, a resource management program.  For
       details, see <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or
       (at your option) any later version.

       Slurm is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.


SEE ALSO

       scancel(1), scontrol(1), sinfo(1), srun(1), slurm_load_ctl_conf(3),
       slurm_load_jobs(3), slurm_load_node(3), slurm_load_partitions(3)


October 2020                    Slurm Commands                       squeue(1)