STRESS-NG(1)                General Commands Manual               STRESS-NG(1)
2
3
4

NAME

6       stress-ng - a tool to load and stress a computer system
7
8

SYNOPSIS

10       stress-ng [OPTION [ARG]] ...
11
12

DESCRIPTION

14       stress-ng  will  stress  test  a  computer system in various selectable
15       ways. It was designed to exercise various physical subsystems of a com‐
16       puter  as  well  as  the  various  operating  system kernel interfaces.
17       stress-ng also has a wide range of CPU specific stress tests that exer‐
18       cise floating point, integer, bit manipulation and control flow.
19
20       stress-ng  was originally intended to make a machine work hard and trip
21       hardware issues such as thermal overruns as well  as  operating  system
22       bugs  that  only  occur  when  a  system  is  being  thrashed hard. Use
23       stress-ng with caution as some of the tests can make a system  run  hot
24       on poorly designed hardware and also can cause excessive system thrash‐
25       ing which may be difficult to stop.
26
27       stress-ng can also measure test throughput rates; this can be useful to
28       observe  performance changes across different operating system releases
29       or types of hardware. However, it has never been intended to be used as
30       a precise benchmark test suite, so do NOT use it in this manner.
31
32       Running  stress-ng  with root privileges will adjust out of memory set‐
33       tings on Linux systems to make the stressors unkillable in  low  memory
34       situations,  so  use this judiciously.  With the appropriate privilege,
35       stress-ng can allow the ionice class and ionice levels to be  adjusted;
36       again, this should be used with care.
37
38       One  can  specify  the number of processes to invoke per type of stress
39       test; specifying a zero value will  select  the  number  of  processors
40       available as defined by sysconf(_SC_NPROCESSORS_CONF); if that can't be
41       determined then the number of online CPUs is used.   If  the  value  is
42       less than zero then the number of online CPUs is used.
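
       For example, the following invocation (the duration is illustrative)
       starts one cpu stressor per configured processor and stops after 60
       seconds:

              stress-ng --cpu 0 --timeout 60s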
43

OPTIONS

45       General stress-ng control options:
46
47       --abort
48              this  option  will  force all running stressors to abort (termi‐
49              nate) if any other stressor terminates prematurely because of  a
50              failure.
51
52       --aggressive
53              enables more file, cache and memory aggressive options. This may
54              slow tests down, increase latencies and  reduce  the  number  of
55              bogo ops, as well as change the balance of user time vs system
56              time used, depending on the type of stressor being used.
57
58       -a N, --all N, --parallel N
59              start N instances of all stressors in parallel.  If  N  is  less
60              than zero, then the number of CPUs online is used for the number
61              of instances.  If N is zero, then the number of configured  CPUs
62              in the system is used.
63
64       -b N, --backoff N
65              wait  N  microseconds  between  the  start of each stress worker
66              process. This allows one to ramp up the stress tests over time.
67
68       --class name
69              specify the class of stressors to run. Stressors are  classified
70              into  one  or more of the following classes: cpu, cpu-cache, de‐
71              vice, io, interrupt,  filesystem,  memory,  network,  os,  pipe,
72              scheduler  and vm.  Some stressors fall into just one class. For
73              example the 'get' stressor is just  in  the  'os'  class.  Other
74              stressors  fall  into  more  than  one  class,  for example, the
75              'lsearch' stressor falls into the 'cpu', 'cpu-cache'  and  'mem‐
76              ory'  classes as it exercises all three.  Selecting a spe‐
77              cific class will run all the stressors that fall into that class
78              only when run with the --sequential option.
79
80              Specifying  a  name  followed  by  a  question mark (for example
81              --class vm?) will print out all the stressors in  that  specific
82              class.
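
              For example, to run only the stressors in the cpu class, one
              after another (the duration is illustrative):

              stress-ng --class cpu --sequential 0 --timeout 60s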
83
84       -n, --dry-run
85              parse options, but do not run stress tests. A no-op.
86
87       --ftrace
88              enable kernel function call tracing (Linux only).  This will use
89              the kernel debugfs ftrace mechanism to  record  all  the  kernel
90              functions  used  on the system while stress-ng is running.  This
91              is only as accurate as the kernel ftrace output, so there may be
92              some variability in the data reported.
93
94       -h, --help
95              show help.
96
97       --ignite-cpu
98              alter kernel controls to try and maximize the CPU. This requires
99              root privilege to alter various /sys interface  controls.   Cur‐
100              rently  this only works for Intel P-State enabled x86 systems on
101              Linux.
102
103       --ionice-class class
104              specify ionice class (only on Linux).  Can  be  idle  (default),
105              besteffort, be, realtime, rt.
106
107       --ionice-level level
108              specify  ionice  level  (only on Linux). For idle, 0 is the only
109              possible option. For besteffort or realtime, values 0 (highest
110              priority) to 7 (lowest priority) may be used. See ionice(1) for more de‐
111              tails.
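
              As an illustrative example, the following runs two aiol
              stressors at the lowest best-effort I/O priority:

              stress-ng --ionice-class besteffort --ionice-level 7 --aiol 2 -t 60s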
112
113       --job jobfile
114              run stressors using a jobfile.  The  jobfile  is  essentially  a
115              file  containing stress-ng options (without the leading --) with
116              one option per line. Lines may have comments with  comment  text
117              preceded by the # character. A simple example is as follows:
118
119              run sequential   # run stressors sequentially
120              verbose          # verbose output
121              metrics-brief    # show metrics at end of run
122              timeout 60s      # stop each stressor after 60 seconds
123              #
124              # vm stressor options:
125              #
126              vm 2             # 2 vm stressors
127              vm-bytes 128M    # 128MB available memory
128              vm-keep          # keep vm mapping
129              vm-populate      # populate memory
130              #
131              # memcpy stressor options:
132              #
133              memcpy 5         # 5 memcpy stressors
134
135              The  job  file  introduces the run command that specifies how to
136              run the stressors:
137
138              run sequential - run stressors sequentially
139              run parallel - run stressors together in parallel
140
141              Note that 'run parallel' is the default.
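
              Assuming the options above have been saved to a job file (the
              file name example.job used here is hypothetical), the job can
              be run with:

              stress-ng --job example.job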
142
143       -k, --keep-name
144              by default, stress-ng will attempt to change  the  name  of  the
145              stress  processes  according to their functionality; this option
146              disables this and keeps the process name the same as that of the
147              parent process, that is, stress-ng.
148
149       --log-brief
150              by  default  stress-ng  will report the name of the program, the
151              message type and the process id as a prefix to all  output.  The
152              --log-brief  option will output messages without these fields to
153              produce a less verbose output.
154
155       --log-file filename
156              write messages to the specified log file.
157
158       --maximize
159              overrides the default stressor settings and instead  sets  these
160              to  the  maximum settings allowed.  These defaults can always be
161              overridden by the per stressor settings options if required.
162
163       --max-fd N
164              set the maximum limit on file descriptors (value or a % of  sys‐
165              tem  allowed  maximum).   By  default, stress-ng can use all the
166              available file descriptors; this option sets the  limit  in  the
167              range from 10 up to the maximum limit of RLIMIT_NOFILE.  One can
168              use a % setting too, e.g. 50% is half the maximum  allowed  file
169              descriptors.  Note that stress-ng will use about 5 of the avail‐
170              able file descriptors so take this into consideration when using
171              this setting.
172
173       --metrics
174              output  number  of  bogo  operations  in  total performed by the
175              stress processes.  Note that these are not a reliable metric  of
176              performance  or throughput and have not been designed to be used
177              for benchmarking whatsoever. The metrics are just a  useful  way
178              to  observe  how  a  system  behaves when under various kinds of
179              load.
180
181              The following columns of information are output:
182
183              Column Heading             Explanation
184              bogo ops                   number of iterations of the  stressor
185                                         during the run. This is a metric of how
186                                         much overall "work" has been achieved
187                                         in bogo operations.
188              real time (secs)           average  wall clock duration (in sec‐
189                                         onds) of the stressor.  This  is  the
190                                         total  wall clock time of all the in‐
191                                         stances of that  particular  stressor
192                                         divided by the number of these stres‐
193                                         sors being run.
194              usr time (secs)            total user time (in seconds) consumed
195                                         running  all  the  instances  of  the
196                                         stressor.
199              sys time (secs)            total system time (in  seconds)  con‐
200                                         sumed  running  all  the instances of
201                                         the stressor.
202              bogo ops/s (real time)     total  bogo  operations  per   second
203                                         based  on  wall  clock  run time. The
204                                         wall clock time reflects the apparent
205                                         run time. The more processors one has
206                                         on a system the more  the  work  load
207                                         can  be  distributed  onto  these and
208                                         hence the wall clock time will reduce
209                                         and  the bogo ops rate will increase.
210                                         This is  essentially  the  "apparent"
211                                         bogo ops rate of the system.
212              bogo ops/s (usr+sys time)  total   bogo  operations  per  second
213                                         based on cumulative user  and  system
214                                         time.  This is the real bogo ops rate
215                                         of the system taking into  considera‐
216                                         tion the actual execution time
217                                         of the stressor across all  the  pro‐
218                                         cessors.   Generally  this  will  de‐
219                                         crease as one  adds  more  concurrent
220                                         stressors due to contention on cache,
221                                         memory, execution  units,  buses  and
222                                         I/O devices.
223              CPU used per instance (%)  total  percentage of CPU used divided
224                                         by number of stressor instances. 100%
225                                         is  1  full  CPU.  Some stressors run
226                                         multiple threads so it is possible to
227                                         have a figure greater than 100%.
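
              For example, to run four cpu stressors for 60 seconds and print
              the above metrics at the end of the run:

              stress-ng --cpu 4 --timeout 60s --metrics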
228
229       --metrics-brief
230              show  shorter  list  of  stressor  metrics  (no CPU used per in‐
231              stance).
232
233       --minimize
234              overrides the default stressor settings and instead  sets  these
235              to  the  minimum settings allowed.  These defaults can always be
236              overridden by the per stressor settings options if required.
237
238       --no-madvise
239              from version 0.02.26 stress-ng  automatically  calls  madvise(2)
240              with random advise options before each mmap and munmap to stress
241              the vm subsystem a little harder. The --no-madvise option  turns
242              this default off.
243
244       --no-rand-seed
245              Do  not seed the stress-ng pseudo-random number generator with a
246              quasi random start seed, but instead seed it with constant  val‐
247              ues.  This  forces  tests  to run each time using the same start
248              conditions which can be useful when  one  requires  reproducible
249              stress tests.
250
251       --oomable
252              Do not respawn a stressor if it gets killed by the Out-of-Memory
253              (OOM) killer.  The default behaviour is to  restart  a  new  in‐
254              stance  of  a  stressor  if the kernel OOM killer terminates the
255              process. This option disables this default behaviour.
256
257       --page-in
258              touch allocated pages that are not in core, forcing them  to  be
259              paged  back  in.  This is a useful option to force all the allo‐
260              cated pages to be paged in when using the bigheap, mmap  and  vm
261              stressors.  It will severely degrade performance when the memory
262              in the system is less than the  allocated  buffer  sizes.   This
263              uses  mincore(2) to determine the pages that are not in core and
264              hence need touching to page them back in.
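
              For example (the memory size is illustrative), to run two vm
              stressors and force their allocated pages to be paged in:

              stress-ng --vm 2 --vm-bytes 1g --page-in --timeout 60s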
265
266       --pathological
267              enable stressors that are known to hang systems.  Some stressors
268              can  quickly  consume  resources  in  such  a  way that they can
269              rapidly hang a system before the kernel can OOM kill them. These
270              stressors  are not enabled by default, this option enables them,
271              but you probably don't want to do this. You have been warned.
272
273       --perf measure processor and system activity using perf  events.  Linux
274              only and caveat emptor, according to perf_event_open(2): "Always
275              double-check your results! Various generalized events  have  had
276              wrong  values.".   Note  that  with  Linux 4.7 one needs to have
277              CAP_SYS_ADMIN capabilities for this option to  work,  or  adjust
278              /proc/sys/kernel/perf_event_paranoid  to  below  2  to  use this
279              without CAP_SYS_ADMIN.
280
281       -q, --quiet
282              do not show any output.
283
284       -r N, --random N
285              start N random stress workers. If N is 0,  then  the  number  of
286              configured processors is used for N.
287
288       --sched scheduler
289              select  the  named scheduler (only on Linux). To see the list of
290              available schedulers use: stress-ng --sched which
291
292       --sched-prio prio
293              select the scheduler priority level  (only  on  Linux).  If  the
294              scheduler  does not support this then the default priority level
295              of 0 is chosen.
296
297       --sched-period period
298              select the period parameter  for  deadline  scheduler  (only  on
299              Linux). Default value is 0 (in nanoseconds).
300
301       --sched-runtime runtime
302              select  the  runtime  parameter  for deadline scheduler (only on
303              Linux). Default value is 99999 (in nanoseconds).
304
305       --sched-deadline deadline
306              select the deadline parameter for deadline  scheduler  (only  on
307              Linux). Default value is 100000 (in nanoseconds).
308
309       --sched-reclaim
310              use  cpu  bandwidth reclaim feature for deadline scheduler (only
311              on Linux).
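
              As an illustrative example (the values are arbitrary but must
              satisfy runtime <= deadline <= period, and root privilege is
              normally required), a cpu stressor can be run under the
              deadline scheduler with:

              stress-ng --cpu 1 --sched deadline --sched-runtime 90000 \
                     --sched-deadline 100000 --sched-period 100000 -t 60s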
312
313       --seed N
314              set the random number generator seed with a 64 bit value.  Allows
315              stressors  to  use the same random number generator sequences on
316              each invocation.
317
318       --sequential N
319              sequentially run all the stressors one by one for a  default  of
320              60  seconds.  The  number of instances of each of the individual
321              stressors to be started is N.  If N is less than zero, then  the
322              number of CPUs online is used for the number of instances.  If N
323              is zero, then the number of CPUs in the system is used.  Use the
324              --timeout option to specify the duration to run each stressor.
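
              For example, to run every stressor in turn for 30 seconds each
              with one instance per configured CPU:

              stress-ng --sequential 0 --timeout 30s --metrics-brief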
325
326       --skip-silent
327              silence  messages  that  report that a stressor has been skipped
328              because it requires features not supported by the  system,  such
329              as  unimplemented  system  calls, missing resources or processor
330              specific features.
331
332       --smart
333              scan the block devices for changes in S.M.A.R.T. statistics
334              (Linux only). This requires root privileges to read the Self-
335              Monitoring, Analysis and Reporting Technology data from all
336              block devices and will report any changes in the statistics. One
337              caveat is that device manufacturers provide different sets of
338              data; the exact meaning of the data can be vague and the data
339              may be inaccurate.
340
341       --stressors
342              output the names of the available stressors.
343
344       --syslog
345              log output (except for verbose -v messages) to the syslog.
346
347       --taskset list
348              set CPU affinity based on the list of CPUs  provided;  stress-ng
349              is  bound  to  just  use these CPUs (Linux only). The CPUs to be
350              used are specified by a comma separated list of CPUs (0 to  N-1).
351              One  can  specify  a  range  of  CPUs  using  '-',  for example:
352              --taskset 0,2-3,6,7-11
353
354       --temp-path path
355              specify a path for stress-ng temporary directories and temporary
356              files;  the default path is the current working directory.  This
357              path must have read and write access for  the  stress-ng  stress
358              processes.
359
360       --thermalstat S
361              every  S  seconds show CPU and thermal load statistics. This op‐
362              tion shows average CPU frequency  in  GHz  (average  of  online-
363              CPUs),  load  averages  (1  minute, 5 minute and 15 minute) and
364              available thermal zone temperatures in degrees Centigrade.
365
366       --thrash
367              This can only be used when running on Linux and with root privi‐
368              lege.  This  option  starts  a  background thrasher process that
369              works through all the processes on a system and tries to page as
370              many  pages  in the processes as possible.  This will cause a con‐
371              siderable amount of thrashing of swap on an over-committed  sys‐
372              tem.
373
374       -t T, --timeout T
375              stop stress test after T seconds. One can also specify the units
376              of time in seconds, minutes, hours, days or years with the  suf‐
377              fix  s,  m,  h, d or y.  Note: A timeout of 0 will run stress-ng
378              without any timeouts (run forever).
379
380       --timestamp
381              add a timestamp in hours, minutes, seconds and hundredths  of  a
382              second to the log output.
383
384       --timer-slack N
385              adjust  the  per  process  timer  slack  to N nanoseconds (Linux
386              only). Increasing the timer slack allows the kernel to  coalesce
387              timer  events by adding some fuzziness to timer expiration times
388              and hence reduce  wakeups.   Conversely,  decreasing  the  timer
389              slack  will  increase wakeups.  A value of 0 for the timer-slack
390              will set the system default of 50,000 nanoseconds.
391
392       --times
393              show the cumulative user and system times of all the child  pro‐
394              cesses at the end of the stress run.  The percentage of utilisa‐
395              tion of available CPU time is also calculated from the number of
396              on-line CPUs in the system.
397
398       --tz   collect temperatures from the available thermal zones on the ma‐
399              chine (Linux only).  Some devices may have one or  more  thermal
400              zones, whereas others may have none.
401
402       -v, --verbose
403              show all debug, warnings and normal information output.
404
405       --verify
406              verify  results when a test is run. This is not available on all
407              tests. This will sanity check the computations  or  memory  con‐
408              tents  from a test run and report to stderr any unexpected fail‐
409              ures.
410
411       -V, --version
412              show version of stress-ng, version of toolchain  used  to  build
413              stress-ng and system information.
414
415       --vmstat S
416              every S seconds show statistics about processes, memory, paging,
417              block I/O, interrupts, context switches, disks and cpu activity.
418              The  output  is  similar  to  that  of  the vmstat(8)
419              utility. Currently a Linux only option.
420
421       -x, --exclude list
422              specify a list of one or more stressors to exclude (that is,  do
423              not  run  them).   This  is useful to exclude specific stressors
424              when one selects many stressors to run using the --class,
425              --sequential,  --all  and --random options. For example, run the cpu
426              class stressors concurrently and exclude  the  numa  and  search
427              stressors:
428
429              stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch
430
431       -Y, --yaml filename
432              output gathered statistics to a YAML formatted file named 'file‐
433              name'.
434
435
436
437       Stressor specific options:
438
439       --access N
440              start N workers that work through various settings of file  mode
441              bits (read, write, execute) for the file owner and check that
442              the user permissions of the file, as reported by access(2) and
443              faccessat(2), are sane.
444
445       --access-ops N
446              stop access workers after N bogo access sanity checks.
447
448       --affinity N
449              start  N  workers  that run 16 processes that rapidly change CPU
450              affinity (only on Linux). Rapidly  switching  CPU  affinity  can
451              contribute to poor cache behaviour and high context switch rate.
452
453       --affinity-ops N
454              stop  affinity  workers  after  N bogo affinity operations. Note
455              that the counters across the 16 processes are not locked to  im‐
456              prove  affinity  test rates so the final number of bogo-ops will
457              be equal to or more than the specified ops stop threshold because
458              of racy unlocked bogo-op counting.
459
460       --affinity-delay N
461              delay  N  nanoseconds  before changing affinity to the next CPU.
462              The delay will spin on CPU scheduling  yield  operations  for  N
463              nanoseconds  before the process is moved to another CPU. The de‐
464              fault is 0 nanoseconds.
465
466       --affinity-pin
467              pin all the 16 per stressor processes to a CPU. All 16 processes
468              follow the CPU chosen by the main parent stressor, forcing heavy
469              per CPU loading.
470
471       --affinity-rand
472              switch CPU affinity randomly rather than the default of  sequen‐
473              tially.
474
475       --af-alg N
476              start  N workers that exercise the AF_ALG socket domain by hash‐
477              ing and encrypting various sized random messages. This exercises
478              the  available  hashes,  ciphers, rng and aead crypto engines in
479              the Linux kernel.
480
481       --af-alg-ops N
482              stop af-alg workers after N AF_ALG messages are hashed.
483
484       --af-alg-dump
485              dump the internal  list  representing  cryptographic  algorithms
486              parsed from the /proc/crypto file to standard output (stdout).
487
488       --aio N
489              start  N  workers  that  issue  multiple  small asynchronous I/O
490              writes and reads on a relatively small temporary file using  the
491              POSIX  aio  interface.  This will just hit the file system cache
492              and soak up a lot of user and kernel time in  issuing  and  han‐
493              dling I/O requests.  By default, each worker process will handle
494              16 concurrent I/O requests.
495
496       --aio-ops N
497              stop POSIX asynchronous I/O workers after  N  bogo  asynchronous
498              I/O requests.
499
500       --aio-requests N
501              specify  the  number  of  POSIX  asynchronous  I/O requests each
502              worker should issue, the default is 16; 1 to 4096 are allowed.
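
              For example (the request count is illustrative), to run two
              POSIX asynchronous I/O stressors each issuing 64 concurrent
              requests:

              stress-ng --aio 2 --aio-requests 64 --timeout 60s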
503
504       --aiol N
505              start N workers that issue multiple 4K random  asynchronous  I/O
506              writes  using  the  Linux  aio system calls io_setup(2), io_sub‐
507              mit(2), io_getevents(2) and  io_destroy(2).   By  default,  each
508              worker process will handle 16 concurrent I/O requests.
509
510       --aiol-ops N
511              stop  Linux  asynchronous  I/O workers after N bogo asynchronous
512              I/O requests.
513
514       --aiol-requests N
515              specify the number  of  Linux  asynchronous  I/O  requests  each
516              worker should issue, the default is 16; 1 to 4096 are allowed.
517
518       --apparmor N
519              start  N workers that exercise various parts of the AppArmor in‐
520              terface. Currently one needs root permission to run this partic‐
521              ular test. Only available on Linux systems with AppArmor support
522              and requires the CAP_MAC_ADMIN capability.
523
524       --apparmor-ops N
525              stop the AppArmor workers after N bogo operations.
526
527       --atomic N
528              start N workers that exercise various GCC __atomic_*() built  in
529              operations  on  8,  16,  32  and 64 bit integers that are shared
530              among the N workers. This stressor is only available for  builds
531              using  GCC  4.7.4  or higher. The stressor forces many front end
532              cache stalls and cache references.
533
534       --atomic-ops N
535              stop the atomic workers after N bogo atomic operations.
536
537       --bad-altstack N
538              start N workers that create broken alternative signal stacks for
539              SIGSEGV  and  SIGBUS  handling  that  in  turn  create secondary
540              SIGSEGV/SIGBUS errors.  A variety of randomly selected nefarious
541              methods are used to create the stacks:
542
543              • Unmapping  the alternative signal stack, before triggering the
544                signal handling.
545              • Changing the alternative signal stack to just being read only,
546                write only, execute only.
547              • Using a NULL alternative signal stack.
548              • Using  the  signal  handler  object  as the alternative signal
549                stack.
550              • Unmapping the alternative signal stack during execution of the
551                signal handler.
552              • Using  a  read-only  text  segment  for the alternative signal
553                stack.
554              • Using an undersized alternative signal stack.
555              • Using the VDSO as an alternative signal stack.
556              • Using an alternative stack mapped onto /dev/zero.
557              • Using an alternative stack mapped to a  zero  sized  temporary
558                file to generate a SIGBUS error.
559
560       --bad-altstack-ops N
561              stop  the  bad  alternative stack stressors after N SIGSEGV bogo
562              operations.
563
564
565       --bad-ioctl N
566              start N workers that perform a range of illegal bad read  ioctls
567              (using  _IOR)  across  the  device  drivers. This exercises page
568              size, 64 bit, 32 bit, 16 bit and 8 bit reads as well as NULL ad‐
569              dresses,  non-readable  pages  and  PROT_NONE mapped pages. Cur‐
570              rently only for Linux and requires the --pathological option.
571
572       --bad-ioctl-ops N
573              stop the bad ioctl stressors after N bogo ioctl operations.
574
575       -B N, --bigheap N
576              start N workers that grow their heaps by reallocating memory. If
577              the  out of memory killer (OOM) on Linux kills the worker or the
578              allocation fails then the allocating  process  starts  all  over
579              again.   Note  that  the OOM adjustment for the worker is set so
580              that the OOM killer will treat these workers as the first candi‐
581              date processes to kill.
582
583       --bigheap-ops N
584              stop the big heap workers after N bogo allocation operations are
585              completed.
586
587       --bigheap-growth N
588              specify amount of memory to grow heap by per iteration. Size can
589              be from 4K to 64MB. Default is 64K.
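
              For example (the growth size is illustrative), to run four
              bigheap stressors growing in 1MB steps with the allocations
              paged in:

              stress-ng --bigheap 4 --bigheap-growth 1m --page-in -t 60s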
590
591       --binderfs N
592              start  N  workers that mount, exercise and unmount binderfs. The
593              binder  control  device  is  exercised   with   256   sequential
594              BINDER_CTL_ADD ioctl calls per loop.
595
596       --binderfs-ops N
597              stop after N binderfs cycles.
598
599       --bind-mount N
600              start  N workers that repeatedly bind mount / to / inside a user
601              namespace. This can consume resources rapidly,  forcing  out  of
602              memory  situations.  Do not use this stressor unless you want to
603              risk hanging your machine.
604
605       --bind-mount-ops N
606              stop after N bind mount bogo operations.
607
608       --branch N
609              start N workers that randomly jump to 256 randomly selected  lo‐
610              cations and hence exercise the CPU branch prediction logic.
611
612       --branch-ops N
613              stop the branch stressors after N jumps.
614
615       --brk N
616              start N workers that grow the data segment by one page at a time
617              using multiple brk(2) calls.  Each  successfully  allocated  new
618              page  is  touched to ensure it is resident in memory.  If an out
619              of memory condition occurs then the test  will  reset  the  data
620              segment  to the point before it started and repeat the data seg‐
621              ment resizing over again.  The process adjusts the out of memory
622              setting  so  that  it  may  be killed by the out of memory (OOM)
623              killer before other processes.  If  it  is  killed  by  the  OOM
624              killer  then it will be automatically re-started by a monitoring
625              parent process.
626
627       --brk-ops N
628              stop the brk workers after N bogo brk operations.
629
630       --brk-mlock
631              attempt to mlock future brk pages into memory causing more  mem‐
632              ory pressure. If mlock(MCL_FUTURE) is implemented then this will
633              stop new brk pages from being swapped out.
634
635       --brk-notouch
636              do not touch each newly allocated data segment page.  This  dis‐
637              ables  the  default  of  touching  each newly allocated page and
638              hence stops the kernel from necessarily backing the  page  with
639              real physical memory.
640
641       --bsearch N
642              start  N workers that binary search a sorted array of 32 bit in‐
643              tegers using bsearch(3). By default, there are 65536 elements in
644              the array.  This is a useful method to exercise random access of
645              memory and processor cache.
646
647       --bsearch-ops N
648              stop the bsearch worker after N bogo bsearch operations are com‐
649              pleted.
650
651       --bsearch-size N
652              specify  the  size  (number  of 32 bit integers) in the array to
653              bsearch. Size can be from 1K to 4M.
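
              For example (the array size is illustrative), to run four
              bsearch stressors on arrays of 1M 32 bit integers:

              stress-ng --bsearch 4 --bsearch-size 1M --timeout 60s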
654
655       -C N, --cache N
656              start N workers that perform random wide spread memory read  and
657              writes to thrash the CPU cache.  The code does not intelligently
658              determine the CPU cache configuration and so it may be sub-opti‐
659              mal  in  producing hit-miss read/write activity for some proces‐
660              sors.
661
662       --cache-fence
663              force write serialization on each store  operation  (x86  only).
664              This is a no-op for non-x86 architectures.
665
666       --cache-flush
667              force  flush cache on each store operation (x86 only). This is a
668              no-op for non-x86 architectures.
669
670       --cache-level N
671              specify level of cache to  exercise  (1=L1  cache,  2=L2  cache,
672              3=L3/LLC cache (the default)).  If the cache hierarchy cannot be
673              determined, built-in defaults will apply.
674
675       --cache-no-affinity
676              do not change processor affinity when --cache is in effect.
677
678       --cache-sfence
679              force write serialization on  each  store  operation  using  the
680              sfence  instruction  (x86 only). This is a no-op for non-x86 ar‐
681              chitectures.
682
683       --cache-ops N
684              stop cache thrash workers after N bogo cache thrash operations.
685
686       --cache-prefetch
687              force read prefetch on next read address on  architectures  that
688              support prefetching.
689
690       --cache-ways N
691              specify the number of cache ways to exercise. This allows a sub‐
692              set of the overall cache size to be exercised.
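
              As an illustrative example (the cache level and ways are
              arbitrary choices), to thrash a subset of the L2 cache with two
              workers:

              stress-ng --cache 2 --cache-level 2 --cache-ways 4 -t 60s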
693
694       --cap N
695              start N workers that read per process capabilities via calls  to
696              capget(2) (Linux only).
697
698       --cap-ops N
699              stop after N cap bogo operations.
700
701       --chattr N
702              start N workers that attempt to exercise file attributes via the
703              EXT2_IOC_SETFLAGS ioctl. This is intended  to  be  intentionally
704              racy  and  exercise a range of chattr attributes by enabling and
705              disabling them on a file shared amongst the  N  chattr  stressor
706              processes. (Linux only).
707
708       --chattr-ops N
709              stop after N chattr bogo operations.
710
711       --chdir N
712              start  N workers that change directory between directories using
713              chdir(2).
714
715       --chdir-ops N
716              stop after N chdir bogo operations.
717
718       --chdir-dirs N
719              exercise chdir on N directories. The default  is  8192  directo‐
720              ries; this allows 64 to 65536 directories to be used instead.
721
722       --chmod N
723              start  N workers that change the file mode bits via chmod(2) and
724              fchmod(2) on the same file. The greater the value for N, the
725              more  contention  on  the  single  file.  The stressor will work
726              through all the combinations of mode bits.
727
728       --chmod-ops N
729              stop after N chmod bogo operations.
730
731       --chown N
732              start N workers that exercise chown(2) on  the  same  file.  The
733              greater  the  value for N, the more contention on  the  single
734              file.
735
736       --chown-ops N
737              stop the chown workers after N bogo chown(2) operations.
738
739       --chroot N
740              start N workers that exercise chroot(2) on various valid and in‐
741              valid chroot paths. Only available on Linux systems and requires
742              the CAP_SYS_ADMIN capability.
743
744       --chroot-ops N
745              stop the chroot workers after N bogo chroot(2) operations.
746
747       --clock N
748              start N workers exercising clocks  and  POSIX  timers.  For  all
749              known clock types this will exercise clock_getres(2), clock_get‐
750              time(2) and clock_nanosleep(2).  For all known  timers  it  will
751              create  a  50000ns  timer  and  busy poll this until it expires.
752              This stressor will cause frequent context switching.
753
754       --clock-ops N
755              stop clock stress workers after N bogo operations.
756
757       --clone N
758              start N  workers  that  create  clones  (via  the  clone(2)  and
759              clone3()  system  calls).  This will rapidly try to create a de‐
760              fault of 8192 clones that immediately die and wait in  a  zombie
761              state  until they are reaped.  Once the maximum number of clones
762              is reached (or clone fails because one has reached  the  maximum
763              allowed)  the  oldest  clone thread is reaped and a new clone is
764              then created in a first-in first-out manner, and then  repeated.
765              A  random  clone flag is selected for each clone to try to exer‐
766              cise different clone operations.  The clone stressor is a  Linux
767              only option.
768
769       --clone-ops N
770              stop clone stress workers after N bogo clone operations.
771
772       --clone-max N
773              try  to  create  as  many  as  N  clone threads. This may not be
774              reached if the system limit is less than N.
775
776       --close N
777              start N workers that try to force  race  conditions  on  closing
778              opened  file  descriptors.   These  file  descriptors  have been
779              opened in various ways to  try  and  exercise  different  kernel
780              close handlers.
781
782       --close-ops N
783              stop close workers after N bogo close operations.
784
785       --context N
786              start  N  workers that run three threads that use swapcontext(3)
787              to implement the thread-to-thread context switching. This  exer‐
788              cises  rapid  process  context saving and restoring and is band‐
789              width limited by register and memory save and restore rates.
790
791       --context-ops N
792              stop context workers after N bogo  context  switches.   In  this
793              stressor, 1 bogo op is equivalent to 1000 swapcontext calls.
794
795       --copy-file N
796              start   N   stressors   that   copy   a  file  using  the  Linux
797              copy_file_range(2) system call. 2MB chunks of  data  are  copied
798              from  random  locations  in  one file to random locations in a
799              destination file.  By default, the files are  256  MB  in  size.
800              Data  is  sync'd to the filesystem after each copy_file_range(2)
801              call.
802
803       --copy-file-ops N
804              stop after N copy_file_range() calls.
805
806       --copy-file-bytes N
807              copy file size, the default is 256 MB. One can specify the  size
808              as  %  of  free  space  on the file system or in units of Bytes,
809              KBytes, MBytes and GBytes using the suffix b, k, m or g.
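
              For example (the file size is illustrative), to copy chunks
              within a 1GB file for two minutes:

              stress-ng --copy-file 1 --copy-file-bytes 1g --timeout 2m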
810
811       -c N, --cpu N
812              start N workers  exercising  the  CPU  by  sequentially  working
813              through  all  the different CPU stress methods. Instead of exer‐
814              cising all the CPU stress methods, one can  specify  a  specific
815              CPU stress method with the --cpu-method option.
816
817       --cpu-ops N
818              stop cpu stress workers after N bogo operations.
819
820       -l P, --cpu-load P
821              load CPU with P percent loading for the CPU stress workers. 0 is
822              effectively a sleep (no load) and  100  is  full  loading.   The
823              loading  loop is broken into compute time (load%) and sleep time
824              (100% - load%). Accuracy depends on the overall load of the pro‐
825              cessor  and  the  responsiveness of the scheduler, so the actual
826              load may be different from the desired load.  Note that the num‐
827              ber  of  bogo CPU operations may not be linearly scaled with the
828              load as some systems employ CPU frequency scaling and so heavier
829              loads  produce  an  increased CPU frequency and greater CPU bogo
830              operations.
831
832              Note: This option only applies to the --cpu stressor option  and
833              not to all of the cpu class of stressors.
834
835       --cpu-load-slice S
836              note  -  this option is only useful when --cpu-load is less than
837              100%. The CPU load is broken into multiple busy and idle cycles.
838              Use this option to specify the duration of a busy time slice.  A
839              negative value for S specifies the number of iterations  to  run
840              before  idling  the CPU (e.g. -30 invokes 30 iterations of a CPU
841              stress loop).  A zero value selects a random busy time between 0
842              and 0.5 seconds.  A positive value for S specifies the number of
843              milliseconds to run before idling the CPU (e.g.  100  keeps  the
844              CPU  busy for 0.1 seconds).  Specifying small values for S leads
845              to  small  time  slices  and   smoother   scheduling.    Setting
846              --cpu-load  as a relatively low value and --cpu-load-slice to be
847              large will cycle the CPU between long idle and busy  cycles  and
848              exercise  different  CPU  frequencies.  The thermal range of the
849              CPU is also cycled, so this is a good mechanism to exercise  the
850              scheduler,  frequency scaling and passive/active thermal cooling
851              mechanisms.
852
853              Note: This option only applies to the --cpu stressor option  and
854              not to all of the cpu class of stressors.
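
              As an illustrative example, the following cycles two CPUs
              between long busy and idle periods, loading each CPU to roughly
              20% using 0.5 second busy slices:

              stress-ng --cpu 2 --cpu-load 20 --cpu-load-slice 500 -t 10m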
855
856       --cpu-method method
857              specify  a cpu stress method. By default, all the stress methods
858              are exercised sequentially, however one  can  specify  just  one
859              method to be used if required.  Available cpu stress methods are
860              described as follows:
861
862              Method           Description
863              all              iterate over all the below cpu stress methods
864              ackermann        Ackermann function: compute A(3, 7), where:
865                                A(m, n) = n + 1 if m = 0;
866                                A(m - 1, 1) if m > 0 and n = 0;
867                                A(m - 1, A(m, n - 1)) if m > 0 and n > 0
868              apery            calculate Apery's constant  ζ(3);  the  sum  of
869                               1/(n ↑ 3) to a precision of 1.0x10↑14
870              bitops           various  bit  operations  from bithack, namely:
871                               reverse bits, parity check, bit count, round to
872                               nearest power of 2
873              callfunc         recursively  call  8  argument  C function to a
874                               depth of 1024 calls and unwind
875              cfloat           1000 iterations of a mix of floating point com‐
876                               plex operations
879              cdouble          1000  iterations  of  a  mix of double floating
880                               point complex operations
881              clongdouble      1000 iterations of a mix of long double  float‐
882                               ing point complex operations
883              collatz          compute  the 1348 steps in the collatz sequence
884                               from starting number 989345275647.  Where  f(n)
885                               = n / 2 (for even n) and f(n) = 3n + 1 (for odd
886                               n).
887              correlate        perform an 8192 × 512 correlation of random dou‐
888                               bles
889              cpuid            fetch  cpu specific information using the cpuid
890                               instruction (x86 only)
891              crc16            compute 1024 rounds of CCITT  CRC16  on  random
892                               data
893              decimal32        1000  iterations  of  a  mix  of 32 bit decimal
894                               floating point operations (GCC only)
895              decimal64        1000 iterations of a  mix  of  64  bit  decimal
896                               floating point operations (GCC only)
897              decimal128       1000  iterations  of  a  mix of 128 bit decimal
898                               floating point operations (GCC only)
899              dither           Floyd–Steinberg dithering of a 1024 × 768  ran‐
900                               dom image from 8 bits down to 1 bit of depth
901              div32            50,000 32 bit unsigned integer divisions
902              div64            50,000 64 bit unsigned integer divisions
903              djb2a            128  rounds  of  hash DJB2a (Dan Bernstein hash
904                               using the xor variant) on 128  to  1  bytes  of
905                               random strings
906              double           1000  iterations  of  a mix of double precision
907                               floating point operations
908              euler            compute e using e = (1 + (1 ÷ n)) ↑ n
909              explog           iterate on n = exp(log(n) ÷ 1.00002)
910              factorial        find factorials from  1..150  using  Stirling's
911                               and Ramanujan's approximations
912              fibonacci        compute  Fibonacci  sequence  of 0, 1, 1, 2, 3,
913                               5, 8...
914              fft              4096 sample Fast Fourier Transform
915              fletcher16       1024 rounds of a naive implementation of  a  16
916                               bit Fletcher's checksum
917              float            1000  iterations of a mix of floating point op‐
918                               erations
919              float16          1000 iterations of a mix  of  16  bit  floating
920                               point operations
921              float32          1000  iterations  of  a  mix of 32 bit floating
922                               point operations
923              float64          1000 iterations of a mix  of  64  bit  floating
924                               point operations
925              float80          1000  iterations  of  a  mix of 80 bit floating
926                               point operations
927              float128         1000 iterations of a mix of  128  bit  floating
928                               point operations
929              floatconversion  perform 65536 iterations of floating point con‐
930                               versions between float, double and long  double
931                               floating point variables.
932              fnv1a            128  rounds of hash FNV-1a (Fowler–Noll–Vo hash
933                               using the xor then multiply variant) on 128  to
934                               1 bytes of random strings
935              gamma            calculate the Euler-Mascheroni constant γ using
936                               the limiting difference  between  the  harmonic
937                               series  (1  +  1/2 + 1/3 + 1/4 + 1/5 ... + 1/n)
938                               and the natural logarithm ln(n), for n = 80000.
939              gcd              compute GCD of integers
940              gray             calculate binary to gray  code  and  gray  code
941                               back to binary for integers from 0 to 65535
949              hamming          compute  Hamming H(8,4) codes on 262144 lots of
950                               4 bit data. This turns 4 bit data  into  8  bit
951                               Hamming code containing 4 parity bits. For data
952                               bits d1..d4, parity bits are computed as:
953                                 p1 = d2 + d3 + d4
954                                 p2 = d1 + d3 + d4
955                                 p3 = d1 + d2 + d4
956                                 p4 = d1 + d2 + d3
957              hanoi            solve a 21 disc Towers of Hanoi stack using the
958                               recursive solution
959              hyperbolic       compute sinh(θ) × cosh(θ) + sinh(2θ) + cosh(3θ)
960                               for float, double and  long  double  hyperbolic
961                               sine  and cosine functions where θ = 0 to 2π in
962                               1500 steps
963              idct             8 × 8 IDCT (Inverse Discrete Cosine Transform).
964              int8             1000 iterations of a mix of 8 bit integer oper‐
965                               ations.
966              int16            1000  iterations of a mix of 16 bit integer op‐
967                               erations.
968              int32            1000 iterations of a mix of 32 bit integer  op‐
969                               erations.
970              int64            1000  iterations of a mix of 64 bit integer op‐
971                               erations.
972              int128           1000 iterations of a mix of 128 bit integer op‐
973                               erations (GCC only).
974              int32float       1000  iterations of a mix of 32 bit integer and
975                               floating point operations.
976              int32double      1000 iterations of a mix of 32 bit integer  and
977                               double precision floating point operations.
978              int32longdouble  1000  iterations of a mix of 32 bit integer and
979                               long double  precision  floating  point  opera‐
980                               tions.
981              int64float       1000  iterations of a mix of 64 bit integer and
982                               floating point operations.
983              int64double      1000 iterations of a mix of 64 bit integer  and
984                               double precision floating point operations.
985              int64longdouble  1000  iterations of a mix of 64 bit integer and
986                               long double  precision  floating  point  opera‐
987                               tions.
988              int128float      1000 iterations of a mix of 128 bit integer and
989                               floating point operations (GCC only).
990              int128double     1000 iterations of a mix of 128 bit integer and
991                               double precision floating point operations (GCC
992                               only).
993              int128longdouble 1000 iterations of a mix of 128 bit integer and
994                               long double precision floating point operations
995                               (GCC only).
996              int128decimal32  1000 iterations of a mix of 128 bit integer and
997                               32  bit  decimal floating point operations (GCC
998                               only).
999              int128decimal64  1000 iterations of a mix of 128 bit integer and
1000                               64  bit  decimal floating point operations (GCC
1001                               only).
1002              int128decimal128 1000 iterations of a mix of 128 bit integer and
1003                               128  bit decimal floating point operations (GCC
1004                               only).
1005              intconversion    perform 65536 iterations of integer conversions
1006                               between int16, int32 and int64 variables.
1007              ipv4checksum     compute 1024 rounds of the 16 bit ones' comple‐
1008                               ment IPv4 checksum.
1009              jenkin           Jenkins' integer hash on 128 rounds  of  128..1
1010                               bytes of random data.
1011              jmp              Simple  unoptimised  compare  >,  <, == and jmp
1012                               branching.
1013              lfsr32           16384 iterations of  a  32  bit  Galois  linear
1014                               feedback  shift  register  using the polynomial
1015                               x↑32 + x↑31 + x↑29 + x + 1.  This  generates  a
1016                               ring of 2↑32 - 1 unique values (all 32 bit val‐
1017                               ues except for 0).
1019              ln2              compute ln(2) based on series:
1020                                1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
1021              longdouble       1000 iterations of a mix of long double  preci‐
1022                               sion floating point operations.
1023              loop             simple empty loop.
1024              matrixprod       matrix  product  of  two  128 × 128 matrices of
1025                               double floats. Testing on 64 bit  x86  hardware
1026                               shows  that this provides a good mix of mem‐
1027                               ory, cache and floating point operations and is
1028                               probably  the  best CPU method to use to make a
1029                               CPU run hot.
1030              murmur3_32       murmur3_32 hash (Austin Appleby's Murmur3 hash,
1031                               32  bit  variant)  on  128  rounds of 128..1
1032                               bytes of random data.
1033              nhash            exim's nhash on 128 rounds of 128..1  bytes  of
1034                               random data.
1035              nsqrt            compute  sqrt()  of  long doubles using Newton-
1036                               Raphson.
1037              omega            compute the omega constant defined by Ωe↑Ω =  1
1038                               using  efficient iteration of Ωn+1 = (1 + Ωn) /
1039                               (1 + e↑Ωn).
1040              parity           compute parity using various methods  from  the
1041                               Stanford  Bit  Twiddling  Hacks.   Methods em‐
1042                               ployed are: the naïve way, the naïve  way  with
1043                               the  Brian  Kernighan bit counting optimisation,
1044                               the multiply way, the  parallel  way,  and  the
1045                               lookup table ways (2 variations).
1046              phi              compute the Golden Ratio ϕ using series.
1047              pi               compute  π  using  the Srinivasa Ramanujan fast
1048                               convergence algorithm.
1049              pjw              128 rounds of hash pjw function  on  128  to  1
1050                               bytes of random strings.
1051              prime            find  the  first  10000  prime  numbers using a
1052                               slightly optimised brute force naïve trial  di‐
1053                               vision search.
1054              psi              compute  ψ  (the reciprocal Fibonacci constant)
1055                               using the sum of the  reciprocals  of  the  Fi‐
1056                               bonacci numbers.
1057              queens           compute  all  the  solutions  of  the classic 8
1058                               queens problem for board sizes 1..11.
1059              rand             16384 iterations of rand(), where rand  is  the
1060                               MWC  pseudo  random  number generator.  The MWC
1061                               random function concatenates two 16 bit  multi‐
1062                               ply-with-carry generators:
1063                                x(n) = 36969 × x(n - 1) + carry,
1064                                y(n) = 18000 × y(n - 1) + carry mod 2 ↑ 16
1065
1066                               and has a period of around 2 ↑ 60  (a  minimal
                                   C sketch of this generator follows these method
                                   descriptions).
1067              rand48           16384 iterations of drand48(3) and lrand48(3).
1068              rgb              convert RGB to YUV and back to RGB (CCIR 601).
1069              sdbm             128  rounds  of  hash sdbm (as used in the SDBM
1070                               database and GNU awk) on 128 to 1 bytes of ran‐
1071                               dom strings.
1072              sieve            find  the  first  10000 prime numbers using the
1073                               sieve of Eratosthenes.
1074              stats            calculate minimum,  maximum,  arithmetic  mean,
1075                               geometric mean, harmonic mean and standard  de‐
1076                               viation on 250 randomly generated positive dou‐
1077                               ble precision values.
1078              sqrt             compute  sqrt(rand()),  where  rand  is the MWC
1079                               pseudo random number generator.
1080              trig             compute sin(θ) × cos(θ) + sin(2θ) + cos(3θ) for
1081                               float,  double  and long double sine and cosine
1082                               functions where θ = 0 to 2π in 1500 steps.
1083              union            perform integer arithmetic  on  a  mix  of  bit
1084                               fields  in  a C union.  This exercises how well
1085                               the compiler and CPU can  perform  integer  bit
1086                               field loads and stores.
1089              zeta             compute  the Riemann Zeta function ζ(s) for s =
1090                               2.0..10.0
1091
1092              Note that some of these methods try to  exercise  the  CPU  with
1093              computations  found  in  some real world use cases. However, the
1094              code has not been optimised on a per-architecture basis, so  may
1095              be  sub-optimal  compared to hand-optimised code used in  some
1096              applications.  They do try to represent the typical  instruction
1097              mixes found in these use cases.
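
                  As an illustration of the arithmetic these methods perform,
                  the following is a minimal C sketch of the  two  16  bit
                  multiply-with-carry generators described for the rand and
                  sqrt methods above (the seed values are arbitrary and this
                  is not the stress-ng implementation itself):

                      #include <stdint.h>

                      /* two 16 bit multiply-with-carry generators combined
                       * into one 32 bit value, as per the rand description:
                       * x(n) = 36969 x(n-1) + carry,
                       * y(n) = 18000 y(n-1) + carry (mod 2^16) */
                      static uint32_t mwc_x = 521288629, mwc_y = 362436069;

                      static uint32_t mwc32(void)
                      {
                          mwc_x = 36969 * (mwc_x & 0xffff) + (mwc_x >> 16);
                          mwc_y = 18000 * (mwc_y & 0xffff) + (mwc_y >> 16);
                          return (mwc_x << 16) + mwc_y;
                      }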
1098
1099       --cpu-online N
1100              start  N workers that put randomly selected CPUs offline and on‐
1101              line. This Linux only stressor requires root privilege  to  per‐
1102              form  this action. By default the first CPU (CPU 0) is never of‐
1103              flined as this has been found to be problematic on some  systems
1104              and can result in a shutdown.
1105
1106       --cpu-online-all
1107              The default is to never offline the first CPU.  This option will
1108              offline and online all the CPUs including CPU 0.  This may cause
1109              some systems to shut down.
1110
1111       --cpu-online-ops N
1112              stop after N offline/online operations.
1113
1114       --crypt N
1115              start  N workers that encrypt a 16 character random password us‐
1116              ing crypt(3).  The password is encrypted using MD5, SHA-256  and
1117              SHA-512 encryption methods.
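
                  A comparable hashing round can be reproduced directly with
                  crypt(3); this minimal sketch (the password and salt are
                  made-up examples, link with -lcrypt on glibc) selects the
                  SHA-512 method via the "$6$" salt prefix:

                      #define _GNU_SOURCE
                      #include <crypt.h>
                      #include <stdio.h>

                      int main(void)
                      {
                          /* "$1$" selects MD5, "$5$" SHA-256, "$6$" SHA-512 */
                          const char *hash = crypt("16charRandomPwd!", "$6$examplesalt$");

                          if (hash)
                              printf("%s\n", hash);
                          return 0;
                      }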
1118
1119       --crypt-ops N
1120              stop after N bogo encryption operations.
1121
1122       --cyclic N
1123              start  N workers that exercise the real time FIFO or Round Robin
1124              schedulers with cyclic nanosecond  sleeps.  Normally  one  would
1125              just  use  1  worker instance with this stressor to get reliable
1126              statistics.  This stressor measures the first 10 thousand laten‐
1127              cies  and  calculates the mean, mode, minimum, maximum latencies
1128              along with various latency percentiles  for  just  the  first
1129              cyclic  stressor  instance.  One  has  to run this stressor with
1130              CAP_SYS_NICE capability to enable the real time scheduling poli‐
1131              cies. The FIFO scheduling policy is the default.
1132
1133       --cyclic-ops N
1134              stop after N sleeps.
1135
1136       --cyclic-dist N
1137              calculate  and print a latency distribution with the interval of
1138              N nanoseconds.  This is helpful to see where the  latencies  are
1139              clustering.
1140
1141       --cyclic-method  [  clock_ns  |  itimer  |  poll | posix_ns | pselect |
1142       usleep ]
1143              specify the cyclic method to be used, the default  is  clock_ns.
1144              The available cyclic methods are as follows:
1145
1146              Method           Description
1147              clock_ns         sleep   for   the   specified  time  using  the
1148                               clock_nanosleep(2)  high  resolution  nanosleep
1149                               and the CLOCK_REALTIME real time clock.
1150              itimer           wakeup  a  paused process with a CLOCK_REALTIME
1151                               itimer signal.
1152              poll             delay for the specified time using a poll delay
1153                               loop   that   checks  for  time  changes  using
1154                               clock_gettime(2) on the CLOCK_REALTIME clock.
1155              posix_ns         sleep for the specified time  using  the  POSIX
1156                               nanosleep(2) high resolution nanosleep.
1159              pselect          sleep  for  the specified time using pselect(2)
1160                               with null file descriptors.
1161              usleep           sleep  to   the   nearest   microsecond   using
1162                               usleep(2).
1163
1164       --cyclic-policy [ fifo | rr ]
1165              specify the desired real time scheduling policy, fifo (first-in,
1166              first-out) or rr (round robin).
1167
1168       --cyclic-prio P
1169              specify the scheduling priority P. Range from 1 (lowest) to  100
1170              (highest).
1171
1172       --cyclic-sleep N
1173              sleep  for N nanoseconds per test cycle using clock_nanosleep(2)
1174              with the  CLOCK_REALTIME  timer.  Range  from  1  to  1000000000
1175              nanoseconds.
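
                  The sleep-and-measure step that the cyclic stressor times
                  can be approximated with a few lines of C; this is only a
                  sketch of the clock_ns style of cycle, not the stressor's
                  own code:

                      #include <stdio.h>
                      #include <time.h>

                      int main(void)
                      {
                          struct timespec req = { 0, 100000 }, t1, t2;  /* 100000 ns sleep */
                          long delta_ns;

                          clock_gettime(CLOCK_REALTIME, &t1);
                          clock_nanosleep(CLOCK_REALTIME, 0, &req, NULL);
                          clock_gettime(CLOCK_REALTIME, &t2);

                          /* latency = how far past the requested interval we woke up */
                          delta_ns = (t2.tv_sec - t1.tv_sec) * 1000000000L +
                                     (t2.tv_nsec - t1.tv_nsec) - req.tv_nsec;
                          printf("wakeup latency: %ld ns\n", delta_ns);
                          return 0;
                      }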
1176
1177       --daemon N
1178              start  N workers that each create a daemon that dies immediately
1179              after creating another daemon and so on. This effectively  works
1180              through the process table with short lived processes that do not
1181              have a parent and are waited for by init.  This puts pressure on
1182              init  to  do  rapid child reaping.  The daemon processes perform
1183              the usual mix of calls to turn into  typical  UNIX  daemons,  so
1184              this artificially mimics very heavy daemon system stress.
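
                  The "usual mix of calls" is essentially the classic double
                  fork daemonisation sequence; a minimal sketch (error
                  handling omitted) is:

                      #include <fcntl.h>
                      #include <sys/stat.h>
                      #include <unistd.h>

                      static void daemonize(void)
                      {
                          if (fork() > 0)     /* parent exits, child is re-parented to init */
                              _exit(0);
                          setsid();           /* become session leader, drop controlling tty */
                          if (fork() > 0)     /* second fork: can never re-acquire a tty */
                              _exit(0);
                          umask(0);
                          chdir("/");

                          /* redirect stdio to /dev/null */
                          int fd = open("/dev/null", O_RDWR);
                          dup2(fd, STDIN_FILENO);
                          dup2(fd, STDOUT_FILENO);
                          dup2(fd, STDERR_FILENO);
                          if (fd > STDERR_FILENO)
                              close(fd);
                      }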
1185
1186       --daemon-ops N
1187              stop daemon workers after N daemons have been created.
1188
1189       --dccp N
1190              start  N  workers  that send and receive data using the Datagram
1191              Congestion Control Protocol (DCCP) (RFC4340).  This  involves  a
1192              pair  of  client/server processes performing rapid connect, send
1193              and receives and disconnects on the local host.
1194
1195       --dccp-domain D
1196              specify the domain to use, the default is ipv4.  Currently  ipv4
1197              and ipv6 are supported.
1198
1199       --dccp-port P
1200              start DCCP at port P. For N dccp worker processes, ports P to  P
1201              + N - 1 are used.
1202
1203       --dccp-ops N
1204              stop dccp stress workers after N bogo operations.
1205
1206       --dccp-opts [ send | sendmsg | sendmmsg ]
1207              by default, messages are sent using send(2). This option  allows
1208              one  to  specify the sending method using send(2), sendmsg(2) or
1209              sendmmsg(2).  Note that sendmmsg is  only  available  for  Linux
1210              systems that support this system call.
1211
1212       -D N, --dentry N
1213              start  N workers that create and remove directory entries.  This
1214              should create file system meta data activity. The directory  en‐
1215              try  names  are suffixed by a gray-code encoded number to try to
1216              mix up the hashing of the namespace.
1217
1218       --dentry-ops N
1219              stop dentry thrash workers after N bogo dentry operations.
1220
1221       --dentry-order [ forward | reverse | stride | random ]
1222              specify unlink order of dentries, can be  one  of  forward,  re‐
1223              verse,  stride  or random.  By default, dentries are unlinked in
1224              random order.  The forward order will unlink them from first  to
1225              last,  reverse order will unlink them from last to first, stride
1226              order will unlink them by stepping around order in a  quasi-ran‐
1227              dom  pattern  and  random order will randomly select one of for‐
1228              ward, reverse or stride orders.
1229
1230       --dentries N
1231              create N dentries per dentry thrashing loop, default is 2048.
1232
1233       --dev N
1234              start N workers that exercise the /dev devices. Each worker runs
1235              5  concurrent  threads that perform open(2), fstat(2), lseek(2),
1236              poll(2), fcntl(2), mmap(2), munmap(2), fsync(2) and close(2)  on
1237              each device.  Note that watchdog devices are not exercised.
1238
1239       --dev-ops N
1240              stop dev workers after N bogo device exercising operations.
1241
1242       --dev-file filename
1243              specify  the device file to exercise, for example, /dev/null. By
1244              default the stressor will work through all the device  files  it
1245              can find, however, this option allows a single device file to be
1246              exercised.
1247
1248       --dev-shm N
1249              start N workers that fallocate large files in /dev/shm and  then
1250              mmap  these  into memory and touch all the pages. This exercises
1251              pages being moved to/from the buffer cache. Linux only.
1252
1253       --dev-shm-ops N
1254              stop after N bogo allocation and mmap /dev/shm operations.
1255
1256       --dir N
1257              start N workers that create and remove directories  using  mkdir
1258              and rmdir.
1259
1260       --dir-ops N
1261              stop directory thrash workers after N bogo directory operations.
1262
1263       --dir-dirs N
1264              exercise dir on N directories. The default is 8192  directories;
1265              this option allows 64 to 65536 directories to be used instead.
1266
1267       --dirdeep N
1268              start N workers that create a depth-first tree of directories to
1269              a  maximum  depth  as limited by PATH_MAX or ENAMETOOLONG (which
1270              a maximum depth as limited by PATH_MAX or  ENAMETOOLONG  (which‐
1271              ever occurs first).  By default, each level of the tree contains
1272              trees using the --dirdeep-dir option.  To stress inode creation,
1273              a symlink and a hardlink to a file at the root of the  tree  are
1274              created at each level.
1275
1276       --dirdeep-ops N
1277              stop directory depth workers after N bogo directory operations.
1278
1279       --dirdeep-dirs N
1280              create N directories at each tree level. The default is  just  1
1281              but can be increased to a maximum of 10 per level.
1282
1283       --dirdeep-inodes N
1284              consume  up  to N inodes per dirdeep stressor while creating di‐
1285              rectories and links. The value N can be the number of inodes  or
1286              a  percentage of the total available free inodes on the filesys‐
1287              tem being used.
1288
1289       --dirmany N
1290              start N stressors that create as many empty files in a directory
1291              as  possible and then remove them. The file creation phase stops
1292              when an error occurs (for  example,  out  of  inodes,  too  many
1293              files, quota reached, etc.) and then the files are removed. This
1294              cycles until the run time is reached  or  the  file  creation
1295              count  bogo-ops  metric  is  reached.  This is a much faster and
1296              light weight directory exercising stressor compared to the  den‐
1297              try stressor.
1298
1299       --dirmany-ops N
1300              stop dirmany stressors after N empty files have been created.
1301
1302       --dnotify N
1303              start  N  workers performing file system activities such as mak‐
1304              ing/deleting files/directories, renaming files, etc.  to  stress
1305              exercise the various dnotify events (Linux only).
1306
1307       --dnotify-ops N
1308              stop dnotify stress workers after N dnotify bogo operations.
1309
1310       --dup N
1311              start N workers that perform dup(2) and then close(2) operations
1312              on /dev/zero.  The maximum opens at one time is system  defined,
1313              so  the test will run up to this maximum, or 65536 open file de‐
1314              scriptors, whichever comes first.
1315
1316       --dup-ops N
1317              stop the dup stress workers after N bogo open operations.
1318
1319       --dynlib N
1320              start N workers that dynamically load and unload various  shared
1321              libraries.  This exercises memory mapping and dynamic code load‐
1322              ing and symbol lookups. See dlopen(3) for more details  of  this
1323              mechanism.
1324
1325       --dynlib-ops N
1326              stop workers after N bogo load/unload cycles.
1327
1328       --efivar N
1329              start N workers that exercise the Linux /sys/firmware/efi/vars in‐
1330              terface by reading the EFI  variables.  This  is  a  Linux  only
1331              stress  test  for  platforms that support the EFI vars interface
1332              and requires the CAP_SYS_ADMIN capability.
1333
1334       --efivar-ops N
1335              stop the efivar stressors after N EFI variable read operations.
1336
1337       --enosys N
1338              start N workers that exercise non-functional  system  call  num‐
1339              bers.  This  calls a wide range of system call numbers to see if
1340              it can break a system where these are not  wired  up  correctly.
1341              It  also keeps track of system calls that exist (ones that don't
1342              return ENOSYS) so that it can focus on purely finding and  exer‐
1343              cising non-functional system calls. This stressor exercises sys‐
1344              tem calls from 0 to __NR_syscalls + 1024,  random  system  calls
1345              constrained within the ranges of 0 to 2^8, 2^16,  2^24,  2^32,
1346              2^40, 2^48, 2^56 and 2^64, high  system  call  numbers  and
1347              various  other bit patterns to try to get good wide coverage. To
1348              keep the environment clean, each system call being  tested  runs
1349              in a child process with reduced capabilities.
1350
1351       --enosys-ops N
1352              stop after N bogo enosys system call attempts.
1353
1354       --env N
1355              start  N  workers  that create numerous large environment vari‐
1356              ables  to  try  to  trigger  out  of  memory  conditions   using
1357              setenv(3).  If ENOMEM occurs then the environment is emptied and
1358              another memory filling retry occurs.  The process  is  restarted
1359              if it is killed by the Out Of Memory (OOM) killer.
1360
1361       --env-ops N
1362              stop after N bogo setenv/unsetenv attempts.
1363
1364       --epoll N
1365              start  N  workers that perform various related socket stress ac‐
1366              tivity using epoll_wait(2) to monitor  and  handle  new  connec‐
1367              tions.  This  involves  client/server processes performing rapid
1368              connect, send/receives and disconnects on the local host.  Using
1369              epoll  allows  a  large  number of connections to be efficiently
1370              handled, however, this can lead to the connection table  filling
1371              up  and  blocking further socket connections, hence impacting on
1372              the epoll bogo op stats.  For ipv4 and  ipv6  domains,  multiple
1373              servers are spawned on multiple ports. The epoll stressor is for
1374              Linux only.
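
                  The server side of this stressor follows the usual epoll
                  accept/receive pattern; a stripped-down sketch (listen_fd
                  is assumed to be an already bound, listening socket and
                  error handling is omitted) is:

                      #include <sys/epoll.h>
                      #include <sys/socket.h>
                      #include <unistd.h>

                      void serve(int listen_fd)
                      {
                          struct epoll_event ev, events[64];
                          int efd = epoll_create1(0);

                          ev.events = EPOLLIN;
                          ev.data.fd = listen_fd;
                          epoll_ctl(efd, EPOLL_CTL_ADD, listen_fd, &ev);

                          for (;;) {
                              int i, n = epoll_wait(efd, events, 64, -1);

                              for (i = 0; i < n; i++) {
                                  int fd = events[i].data.fd;

                                  if (fd == listen_fd) {
                                      /* new connection: watch it too */
                                      ev.events = EPOLLIN;
                                      ev.data.fd = accept(listen_fd, NULL, NULL);
                                      epoll_ctl(efd, EPOLL_CTL_ADD, ev.data.fd, &ev);
                                  } else {
                                      char buf[4096];

                                      if (recv(fd, buf, sizeof(buf), 0) <= 0)
                                          close(fd);
                                  }
                              }
                          }
                      }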
1375
1376       --epoll-domain D
1377              specify the domain to use, the default is unix (aka local). Cur‐
1378              rently ipv4, ipv6 and unix are supported.
1379
1380       --epoll-port P
1381              start at socket port P. For N epoll worker processes, ports P to
1382              P + (N * 4) - 1 are used for the ipv4 and ipv6 domains and  ports
1383              P to P + N - 1 are used for the unix domain.
1384
1385       --epoll-ops N
1386              stop epoll workers after N bogo operations.
1387
1388       --eventfd N
1389              start  N parent and child worker processes that read and write 8
1390              byte event messages  between  them  via  the  eventfd  mechanism
1391              (Linux only).
1392
1393       --eventfd-ops N
1394              stop eventfd workers after N bogo operations.
1395
1396       --eventfd-nonblock N
1397              enable  EFD_NONBLOCK to allow non-blocking on the event file de‐
1398              scriptor. This will cause reads and writes to return with EAGAIN
1399              rather than blocking and hence cause  a  high  rate  of  polling
1400              I/O.
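
                  The 8 byte messages are the 64 bit counter values that the
                  eventfd mechanism transfers; a minimal sketch of one such
                  exchange (without the parent/child split used by the
                  stressor) is:

                      #include <stdint.h>
                      #include <stdio.h>
                      #include <sys/eventfd.h>
                      #include <unistd.h>

                      int main(void)
                      {
                          uint64_t val = 1, out;
                          int efd = eventfd(0, 0);   /* or EFD_NONBLOCK for polled I/O */

                          write(efd, &val, sizeof(val));  /* add 1 to the counter */
                          read(efd, &out, sizeof(out));   /* read and reset the counter */
                          printf("read %llu\n", (unsigned long long)out);
                          close(efd);
                          return 0;
                      }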
1401
1402       --exec N
1403              start N workers continually forking children that exec stress-ng
1404              and  then  exit almost immediately. If a system has pthread sup‐
1405              port then 1 in 4 of the exec's will be from inside a pthread  to
1406              exercise exec'ing from inside a pthread context.
1407
1408       --exec-ops N
1409              stop exec stress workers after N bogo operations.
1410
1411       --exec-max P
1412              create  P  child processes that exec stress-ng and then wait for
1413              them to exit per iteration. The default is just 1; higher values
1414              will  create many temporary zombie processes that are waiting to
1415              be reaped. One can potentially fill up the process  table  using
1416              high values for --exec-max and --exec.
1417
1418       -F N, --fallocate N
1419              start  N  workers  continually  fallocating  (preallocating file
1420              space) and ftruncating (file truncating)  temporary  files.   If
1421              the  file  is larger than the free space, fallocate will produce
1422              an ENOSPC error which is ignored by this stressor.
1423
1424       --fallocate-bytes N
1425              allocated file size, the default is 1 GB. One  can  specify  the
1426              size as % of free space on the file system or in units of Bytes,
1427              KBytes, MBytes and GBytes using the suffix b, k, m or g.
1428
1429       --fallocate-ops N
1430              stop fallocate stress workers after N bogo fallocate operations.
1431
1432       --fanotify N
1433              start N workers performing file system activities such as creat‐
1434              ing,  opening,  writing, reading and unlinking files to exercise
1435              the fanotify  event  monitoring  interface  (Linux  only).  Each
1436              stressor runs a child process to generate file events and a par‐
1437              ent process to read file events using fanotify. Has  to  be  run
1438              with CAP_SYS_ADMIN capability.
1439
1440       --fanotify-ops N
1441              stop fanotify stress workers after N bogo fanotify events.
1442
1443       --fault N
1444              start N workers that generate minor and major page faults.
1445
1446       --fault-ops N
1447              stop the page fault workers after N bogo page fault operations.
1448
1449       --fcntl N
1450              start  N  workers  that perform fcntl(2) calls with various com‐
1451              mands.  The exercised  commands  (if  available)  are:  F_DUPFD,
1452              F_DUPFD_CLOEXEC,  F_GETFD,  F_SETFD, F_GETFL, F_SETFL, F_GETOWN,
1453              F_SETOWN, F_GETOWN_EX, F_SETOWN_EX, F_GETSIG, F_SETSIG, F_GETLK,
1454              F_SETLK, F_SETLKW, F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW.
1455
1456       --fcntl-ops N
1457              stop the fcntl workers after N bogo fcntl operations.
1458
1459       --fiemap N
1460              start  N  workers  that  each  create  a file with many randomly
1461              changing extents and has  4  child  processes  per  worker  that
1462              gather the extent information using the FS_IOC_FIEMAP ioctl(2).
1463
1464       --fiemap-ops N
1465              stop after N fiemap bogo operations.
1466
1467       --fiemap-bytes N
1468              specify the size of the fiemap'd file in bytes.  One can specify
1469              the size as % of free space on the file system or  in  units  of
1470              Bytes,  KBytes, MBytes and GBytes using the suffix b, k, m or g.
1471              Larger files will contain more extents, causing more stress when
1472              gathering extent information.
1473
1474       --fifo N
1475              start  N  workers  that exercise a named pipe by transmitting 64
1476              bit integers.
1477
1478       --fifo-ops N
1479              stop fifo workers after N bogo pipe write operations.
1480
1481       --fifo-readers N
1482              for each worker, create N fifo  reader  workers  that  read  the
1483              named pipe using simple blocking reads.
1484
1485       --file-ioctl N
1486              start  N  workers  that  exercise various file specific ioctl(2)
1487              calls. This will attempt to use the FIONBIO, FIOQSIZE, FIGETBSZ,
1488              FIOCLEX,  FIONCLEX,  FIOASYNC,  FIFREEZE,  FITHAW,  FICLONE,  FI‐
1489              CLONERANGE, FIONREAD, FIONWRITE and FS_IOC_RESVSP ioctls if these
1490              are defined.
1491
1492       --file-ioctl-ops N
1493              stop file-ioctl workers after N file ioctl bogo operations.
1494
1495       --filename N
1496              start N workers that exercise file creation using various length
1497              filenames containing a range  of  allowed  filename  characters.
1498              This  will  try  to see if it can exceed the file system allowed
1499              filename length as well as test various  filename  lengths  be‐
1500              tween 1 and the maximum allowed by the file system.
1501
1502       --filename-ops N
1503              stop filename workers after N bogo filename tests.
1504
1505       --filename-opts opt
1506              use  characters in the filename based on option 'opt'. Valid op‐
1507              tions are:
1508
1509              Option           Description
1510              probe            default option, probe the file system for valid
1511                               allowed characters in a file name and use these
1512              posix            use  characters  as specified by The Open Group
1513                               Base  Specifications  Issue  7,   POSIX.1-2008,
1514                               3.278 Portable Filename Character Set
1515              ext              use  characters allowed by the ext2, ext3, ext4
1516                               file systems, namely any 8 bit character  apart
1517                               from NUL and /
1518
1519       --flock N
1520              start N workers locking on a single file.
1521
1522       --flock-ops N
1523              stop flock stress workers after N bogo flock operations.
1524
1525       -f N, --fork N
1526              start  N  workers  continually forking children that immediately
1527              exit.
1528
1529       --fork-ops N
1530              stop fork stress workers after N bogo operations.
1531
1532       --fork-max P
1533              create P child processes and then wait for them to exit per  it‐
1534              eration.  The  default is just 1; higher values will create many
1535              temporary zombie processes that are waiting to  be  reaped.  One
1536              can  potentially fill up the process table using high values for
1537              --fork-max and --fork.
1538
1539       --fork-vm
1540              enable detrimental performance virtual memory advice using  mad‐
1541              vise  on  all  pages  of the forked process. Where possible this
1542              will try to set every page in the new process using the  madvise
1543              MADV_MERGEABLE,  MADV_WILLNEED,  MADV_HUGEPAGE  and  MADV_RANDOM
1544              flags. Linux only.
1545
1546       --fp-error N
1547              start N workers that generate floating point exceptions.  Compu‐
1548              tations  are  performed to force and check for the FE_DIVBYZERO,
1549              FE_INEXACT, FE_INVALID, FE_OVERFLOW and FE_UNDERFLOW exceptions.
1550              EDOM and ERANGE errors are also checked.
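
                  The exception checks are of the kind provided by fenv(3); a
                  minimal sketch that raises and tests FE_DIVBYZERO (compile
                  with -lm on glibc) is:

                      #include <fenv.h>
                      #include <stdio.h>

                      int main(void)
                      {
                          volatile double zero = 0.0, x;

                          feclearexcept(FE_ALL_EXCEPT);
                          x = 1.0 / zero;              /* raises FE_DIVBYZERO */
                          if (fetestexcept(FE_DIVBYZERO))
                              printf("FE_DIVBYZERO raised (x = %g)\n", x);
                          return 0;
                      }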
1551
1552       --fp-error-ops N
1553              stop after N bogo floating point exceptions.
1554
1555       --fstat N
1556              start  N  workers  fstat'ing  files  in  a directory (default is
1557              /dev).
1558
1559       --fstat-ops N
1560              stop fstat stress workers after N bogo fstat operations.
1561
1562       --fstat-dir directory
1563              specify the directory to fstat to override the default of  /dev.
1564              All the files in the directory will be fstat'd repeatedly.
1565
1566       --full N
1567              start N workers that exercise /dev/full.  This attempts to write
1568              to the device (which should always get error  ENOSPC),  to  read
1569              from  the  device (which should always return a buffer of zeros)
1570              and to seek randomly on the device  (which  should  always  suc‐
1571              ceed).  (Linux only).
1572
1573       --full-ops N
1574              stop the stress full workers after N bogo I/O operations.
1575
1576       --funccall N
1577              start N workers that call functions of 1 through to 9 arguments.
1578              By default functions with uint64_t arguments  are  called,  how‐
1579              ever, this can be changed using the --funccall-method option.
1580
1581       --funccall-ops N
1582              stop the funccall workers after N bogo function call operations.
1583              Each bogo operation is 1000 calls of functions of 1 through to 9
1584              arguments of the chosen argument type.
1585
1586       --funccall-method method
1587              specify the method of funccall argument type to be used. The de‐
1588              fault is uint64_t but can be one of bool, uint8, uint16, uint32,
1589              uint64,  uint128,  float,  double,  longdouble,  cfloat (complex
1590              float), cdouble (complex double), clongdouble (complex long dou‐
1591              ble),  float16,  float32, float64, float80, float128, decimal32,
1592              decimal64 and decimal128.  Note that some  of  these  types  are
1593              only  available  with  specific  architectures and compiler ver‐
1594              sions.
1595
1596       --funcret N
1597              start N workers that pass and return by value various  small  to
1598              large data types.
1599
1600       --funcret-ops N
1601              stop the funcret workers after N bogo function call operations.
1602
1603       --funcret-method method
1604              specify  the method of funcret argument type to be used. The de‐
1605              fault is uint64_t but can be one of uint8 uint16  uint32  uint64
1606              uint128 float double longdouble float80 float128 decimal32 deci‐
1607              mal64 decimal128 uint8x32 uint8x128 uint64x128.
1608
1609       --futex N
1610              start N workers that rapidly exercise  the  futex  system  call.
1611              Each worker has two processes, a futex waiter and a futex waker.
1612              The waiter waits with a very small timeout to stress the timeout
1613              and  rapid polled futex waiting. This is a Linux specific stress
1614              option.
1615
1616       --futex-ops N
1617              stop futex workers after N bogo  successful  futex  wait  opera‐
1618              tions.
1619
1620       --get N
1621              start  N workers that call system calls that fetch data from the
1622              kernel, currently these are: getpid,  getppid,  getcwd,  getgid,
1623              getegid,  getuid,  getgroups, getpgrp, getpgid, getpriority, ge‐
1624              tresgid, getresuid, getrlimit, prlimit, getrusage, getsid,  get‐
1625              tid,  getcpu,  gettimeofday,  uname,  adjtimex,  sysfs.  Some of
1626              these system calls are OS specific.
1627
1628       --get-ops N
1629              stop get workers after N bogo get operations.
1630
1631       --getdent N
1632              start N workers that recursively read directories /proc,  /dev/,
1633              /tmp, /sys and /run using getdents and getdents64 (Linux only).
1634
1635       --getdent-ops N
1636              stop getdent workers after N bogo getdent bogo operations.
1637
1638       --getrandom N
1639              start N workers that get 8192 random bytes from the /dev/urandom
1640              pool using the getrandom(2) system call (Linux) or getentropy(2)
1641              (OpenBSD).
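
                  A single fetch of the kind this stressor performs looks
                  essentially like the following minimal sketch (Linux with
                  glibc 2.25 or later for the getrandom(2) wrapper):

                      #include <stdio.h>
                      #include <sys/random.h>

                      int main(void)
                      {
                          char buf[8192];
                          ssize_t ret = getrandom(buf, sizeof(buf), 0);

                          printf("got %zd random bytes\n", ret);
                          return 0;
                      }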
1642
1643       --getrandom-ops N
1644              stop getrandom workers after N bogo get operations.
1645
1646       --handle N
1647              start  N  workers  that  exercise  the  name_to_handle_at(2) and
1648              open_by_handle_at(2) system calls. (Linux only).
1649
1650       --handle-ops N
1651              stop after N handle bogo operations.
1652
1653       -d N, --hdd N
1654              start N workers continually writing, reading and removing tempo‐
1655              rary files. The default mode is to stress test sequential writes
1656              and reads.  With the --aggressive  option  enabled  without  any
1657              --hdd-opts  options  the  hdd stressor will work through all the
1658              --hdd-opts options one by one to cover a range of I/O options.
1659
1660       --hdd-bytes N
1661              write N bytes for each hdd process, the default is 1 GB. One can
1662              specify  the  size  as  % of free space on the file system or in
1663              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1664              m or g.
1665
1666       --hdd-opts list
1667              specify  various  stress test options as a comma separated list.
1668              Options are as follows:
1669
1670              Option           Description
1671              direct           try to minimize cache effects of the I/O.  File
1672                               I/O  writes  are  performed  directly from user
1673                               space buffers and synchronous transfer is  also
1674                               attempted.   To guarantee synchronous I/O, also
1675                               use the sync option.
1676              dsync            ensure output has been transferred to  underly‐
1677                               ing hardware and file metadata has been updated
1678                               (using the O_DSYNC open flag). This is  equiva‐
1679                               lent  to each write(2) being followed by a call
1680                               to fdatasync(2). See also the fdatasync option.
1681              fadv-dontneed    advise kernel to expect the data  will  not  be
1682                               accessed in the near future.
1683              fadv-noreuse     advise kernel to expect the data to be accessed
1684                               only once.
1685              fadv-normal      advise kernel there are no explicit access pat‐
1686                               tern  for  the data. This is the default advice
1687                               assumption.
1688              fadv-rnd         advise kernel to expect random access  patterns
1689                               for the data.
1690              fadv-seq         advise  kernel to expect sequential access pat‐
1691                               terns for the data.
1692              fadv-willneed    advise kernel to expect the data to be accessed
1693                               in the near future.
1694              fsync            flush  all  modified  in-core  data  after each
1695                               write to the output device  using  an  explicit
1696                               fsync(2) call.
1697              fdatasync        similar to fsync, but do not flush the modified
1698                               metadata unless metadata is required for  later
1699                               data  reads  to be handled correctly. This uses
1700                               an explicit fdatasync(2) call.
1701              iovec            use readv/writev multiple  buffer  I/Os  rather
1702                               than read/write. Instead of 1 read/write opera‐
1703                               tion, the buffer is broken into an iovec of  16
1704                               buffers.
1705              noatime          do  not  update the file last access timestamp,
1706                               this can reduce metadata writes.
1707              sync             ensure output has been transferred to  underly‐
1708                               ing hardware (using the O_SYNC open flag). This
1709                               is equivalent to each write(2)  being  followed
1710                               by  a  call to fsync(2). See also the fsync op‐
1711                               tion.
1712              rd-rnd           read data randomly. By default, written data is
1713                               not  read back, however, this option will force
1714                               it to be read back randomly.
1715              rd-seq           read data  sequentially.  By  default,  written
1716                               data  is  not  read  back, however, this option
1717                               will force it to be read back sequentially.
1719              syncfs           write all buffered modifications of file  meta‐
1720                               data  and  data on the filesystem that contains
1721                               the hdd worker files.
1722              utimes           force update of file timestamp  which  may  in‐
1723                               crease metadata writes.
1724              wr-rnd           write  data  randomly. The wr-seq option cannot
1725                               be used at the same time.
1726              wr-seq           write data sequentially. This is the default if
1727                               no write modes are specified.
1728
1729       Note  that  some  of these options are mutually exclusive, for example,
1730       there can be only one method of  writing  or  reading.   Also,  fadvise
1731       flags  may  be  mutually exclusive, for example fadv-willneed cannot be
1732       used with fadv-dontneed.
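
           As a rough illustration of what the direct and sync options above imply
           at the system call level, the following sketch opens a file with O_DIRECT
           and O_SYNC and writes one block from an aligned buffer (the 4096 byte
           alignment is an assumption; the real requirement is file system and
           device dependent):

               #define _GNU_SOURCE
               #include <fcntl.h>
               #include <stdlib.h>
               #include <string.h>
               #include <unistd.h>

               int main(void)
               {
                   void *buf;
                   int fd;

                   /* O_DIRECT needs an aligned buffer; O_SYNC makes each
                    * write behave as if followed by fsync(2) */
                   posix_memalign(&buf, 4096, 4096);
                   memset(buf, 0xaa, 4096);

                   fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0600);
                   write(fd, buf, 4096);
                   close(fd);
                   free(buf);
                   return 0;
               }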
1733
1734       --hdd-ops N
1735              stop hdd stress workers after N bogo operations.
1736
1737       --hdd-write-size N
1738              specify size of each write in bytes. Size can be from 1 byte  to
1739              4MB.
1740
1741       --heapsort N
1742              start  N  workers  that sort 32 bit integers using the BSD heap‐
1743              sort.
1744
1745       --heapsort-ops N
1746              stop heapsort stress workers after N bogo heapsorts.
1747
1748       --heapsort-size N
1749              specify number of 32 bit integers to  sort,  default  is  262144
1750              (256 × 1024).
1751
1752       --hrtimers N
1753              start  N  workers  that exercise high resolution timers at a high
1754              frequency. Each stressor starts 32 processes that run with  ran‐
1755              dom  timer  intervals  of  0..499999  nanoseconds.  Running this
1756              stressor with appropriate privilege  will  run  these  with  the
1757              SCHED_RR policy.
1758
1759       --hrtimers-ops N
1760              stop hrtimers stressors after N timer event bogo operations.
1761
1762       --hsearch N
1763              start  N  workers  that  search  an  80%  full  hash  table using
1764              hsearch(3). By default, there are 8192  elements  inserted  into
1765              the  hash  table.  This is a useful method to exercise access of
1766              memory and processor cache.
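
                  The hash table accesses are those of the POSIX hsearch(3)
                  family; a minimal sketch of one insert followed by a lookup
                  (the key and value are made-up examples) is:

                      #include <search.h>
                      #include <stdio.h>

                      int main(void)
                      {
                          ENTRY item, *found;

                          hcreate(8192);               /* default table size */

                          item.key = "key-0000";
                          item.data = "value";
                          hsearch(item, ENTER);        /* insert */

                          found = hsearch(item, FIND); /* look it up again */
                          if (found)
                              printf("%s -> %s\n", found->key, (char *)found->data);
                          hdestroy();
                          return 0;
                      }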
1767
1768       --hsearch-ops N
1769              stop the hsearch workers after N  bogo  hsearch  operations  are
1770              completed.
1771
1772       --hsearch-size N
1773              specify  the number of hash entries to be inserted into the hash
1774              table. Size can be from 1K to 4M.
1775
1776       --icache N
1777              start N workers that stress the instruction cache by forcing in‐
1778              struction  cache  reloads.  This is achieved by modifying an in‐
1779              struction cache line,  causing the processor to reload  it  when
1780              we call a function inside it. Currently only verified  and  en‐
1781              abled for Intel x86 CPUs.
1782
1783       --icache-ops N
1784              stop the icache workers after N bogo icache operations are  com‐
1785              pleted.
1786
1787       --icmp-flood N
1788              start N workers that flood localhost with  randomly  sized  ICMP
1789              ping packets.  This stressor requires the CAP_NET_RAW capability.
1790
1791       --icmp-flood-ops N
1792              stop icmp flood workers after N  ICMP  ping  packets  have  been
1793              sent.
1794
1795       --idle-scan N
1796              start N workers that scan the idle page bitmap across a range of
1797              physical pages. This sets and checks for idle pages via the idle
1798              page  tracking  interface /sys/kernel/mm/page_idle/bitmap.  This
1799              is for Linux only.
1800
1801       --idle-scan-ops N
1802              stop after N bogo page scan operations. Currently one bogo  page
1803              scan operation is equivalent to setting and checking 64 physical
1804              pages.
1805
1806              start N workers that walk through  every  page  exercising  the
1807              start N workers that walks through  every  page  exercising  the
1808              Linux    /sys/kernel/mm/page_idle/bitmap   interface.   Requires
1809              CAP_SYS_RESOURCE capability.
1810
1811       --idle-page-ops N
1812              stop after N bogo idle page operations.
1813
1814       --inode-flags N
1815              start N workers that exercise inode flags using the  FS_IOC_GET‐
1816              FLAGS  and  FS_IOC_SETFLAGS ioctl(2). This attempts to apply all
1817              the available inode flags onto a directory and file even if  the
1818              underlying  file  system may not support these flags (errors are
1819              just ignored).  Each worker runs 4  threads  that  exercise  the
1820              flags on the same directory and file to try to force races. This
1821              is a Linux only stressor, see ioctl_iflags(2) for more details.
1822
1823       --inode-flags-ops N
1824              stop the inode-flags workers after  N  ioctl  flag  setting  at‐
1825              tempts.
1826
1827       --inotify N
1828              start  N  workers performing file system activities such as mak‐
1829              ing/deleting files/directories, moving files, etc. to stress ex‐
1830              ercise the various inotify events (Linux only).
1831
1832       --inotify-ops N
1833              stop inotify stress workers after N inotify bogo operations.
1834
1835       -i N, --io N
1836              start  N  workers  continuously calling sync(2) to commit buffer
1837              cache to disk.  This can be used in conjunction with  the  --hdd
1838              options.
1839
1840       --io-ops N
1841              stop io stress workers after N bogo operations.
1842
1843       --iomix N
1844              start  N  workers  that  perform a mix of sequential, random and
1845              memory mapped read/write operations as well as  forced  sync'ing
1846              and  (if  run as root) cache dropping.  Multiple child processes
1847              are spawned to all share a single file and perform different I/O
1848              operations on the same file.
1849
1850       --iomix-bytes N
1851              write  N  bytes  for each iomix worker process, the default is 1
1852              GB. One can specify the size as % of free space on the file sys‐
1853              tem  or  in  units of Bytes, KBytes, MBytes and GBytes using the
1854              suffix b, k, m or g.
1855
1856       --iomix-ops N
1857              stop iomix stress workers after N bogo iomix I/O operations.
1858
1859       --ioport N
1860              start N workers that perform bursts of 16 reads and 16 writes of
1861              ioport  0x80  (x86  Linux  systems  only).  I/O performed on x86
1862              platforms on port 0x80 will cause delays on the  CPU  performing
1863              the I/O.
1864
1865       --ioport-ops N
1866              stop the ioport stressors after N bogo I/O operations.
1867
1868       --ioport-opts [ in | out | inout ]
1869              specify whether port reads (in), port writes (out) or both  (in‐
1870              out) are to be performed.  The default is both in and out.
1871
1872       --ioprio N
1873              start  N  workers  that  exercise  the  ioprio_get(2)  and   io‐
1874              prio_set(2) system calls (Linux only).
1875
1876       --ioprio-ops N
1877              stop after N io priority bogo operations.
1878
1879       --io-uring N
1880              start N workers that perform iovec write and read I/O operations
1881              using the Linux io-uring interface. On each bogo-loop 1024 × 512
1882              byte writes and 1024 × 512 byte reads are performed on  a  tempo‐
                  rary file.
1883
1884       --io-uring-ops N
1885              stop after N rounds of write and reads.
1886
1887       --ipsec-mb N
1888              start  N workers that perform cryptographic processing using the
1889              highly optimized Intel Multi-Buffer Crypto  for  IPsec  library.
1890              Depending  on  the  features available, SSE, AVX, AVX2 and AVX512
1891              CPU features will be used on data encrypted by SHA,  DES,  CMAC,
1892              CTR, HMAC MD5, HMAC SHA1 and HMAC SHA512 cryptographic routines.
1893              This is only available for x86-64 modern Intel CPUs.
1894
1895       --ipsec-mb-ops N
1896              stop after N rounds of processing  of  data  using  the  crypto‐
1897              graphic routines.
1898
1899       --ipsec-mb-feature [ sse | avx | avx2 | avx512 ]
1900              Just  use  the  specified processor CPU feature. By default, all
1901              the available features for the CPU are exercised.
1902
1903       --itimer N
1904              start N workers that exercise the system interval  timers.  This
1905              sets  up  an ITIMER_PROF itimer that generates a SIGPROF signal.
1906              The default frequency for the itimer  is  1  MHz,  however,  the
1907              Linux kernel will set this to be no more than the jiffy setting,
1908              hence high frequency SIGPROF signals are not normally  possible.
1909              A busy loop spins on getitimer(2) calls to consume CPU and hence
1910              decrement the itimer based on amount of time spent  in  CPU  and
1911              system time.
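
                  A cut-down sketch of the ITIMER_PROF arrangement described
                  above (using 1 kHz rather than 1 MHz so that it stays
                  within a typical jiffy rate) is:

                      #include <signal.h>
                      #include <stdio.h>
                      #include <sys/time.h>

                      static volatile sig_atomic_t ticks;

                      static void on_sigprof(int sig)
                      {
                          (void)sig;
                          ticks++;
                      }

                      int main(void)
                      {
                          struct itimerval it = {
                              .it_interval = { 0, 1000 },  /* 1000 us = 1 kHz */
                              .it_value    = { 0, 1000 },
                          };

                          signal(SIGPROF, on_sigprof);
                          setitimer(ITIMER_PROF, &it, NULL);

                          while (ticks < 1000)
                              ;   /* spin, consuming CPU time so ITIMER_PROF decrements */
                          printf("received %d SIGPROF signals\n", (int)ticks);
                          return 0;
                      }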
1912
1913       --itimer-ops N
1914              stop itimer stress workers after N bogo itimer SIGPROF signals.
1915
1916       --itimer-freq F
1917              run  itimer  at  F  Hz; range from 1 to 1000000 Hz. Normally the
1918              highest frequency is limited by the number of  jiffy  ticks  per
1919              second, so running above 1000 Hz is difficult to attain in prac‐
1920              tice.
1921
1922       --itimer-rand
1923              select an interval timer frequency  based  around  the  interval
1924              timer  frequency  +/-  12.5%  random jitter. This tries to force
1925              more variability in the timer interval to  make  the  scheduling
1926              less predictable.
1927
1928       --judy N
1929              start  N  workers that insert, search and delete 32 bit integers
1930              in a Judy array using a predictable yet sparse array  index.  By
1931              default, there are 131072 integers used in the Judy array.  This
1932              is a useful method to exercise random access of memory and  pro‐
1933              cessor cache.
1934
1935       --judy-ops N
1936              stop  the  judy  workers  after  N bogo judy operations are com‐
1937              pleted.
1938
1939       --judy-size N
1940              specify the size (number of 32 bit integers) in the  Judy  array
1941              to exercise.  Size can be from 1K to 4M 32 bit integers.
1942
1943       --kcmp N
1944              start  N  workers  that  use kcmp(2) to compare parent and child
1945              processes to determine if they share kernel resources. Supported
1946              only for Linux and requires CAP_SYS_PTRACE capability.
1947
1948       --kcmp-ops N
1949              stop kcmp workers after N bogo kcmp operations.
1950
1951       --key N
1952              start N workers that create and manipulate keys using add_key(2)
1953              and keyctl(2). As many keys are created as the  per  user  limit
1954              allows  and  then the following keyctl commands are exercised on
1955              each key:  KEYCTL_SET_TIMEOUT,  KEYCTL_DESCRIBE,  KEYCTL_UPDATE,
1956              KEYCTL_READ, KEYCTL_CLEAR and KEYCTL_INVALIDATE.
1957
1958       --key-ops N
1959              stop key workers after N bogo key operations.
1960
1961       --kill N
1962              start N workers sending SIGUSR1 kill signals to a SIG_IGN signal
1963              handler. Most of the process time will end up in kernel space.
1964
1965       --kill-ops N
1966              stop kill workers after N bogo kill operations.
1967
1968       --klog N
1969              start N workers exercising the  kernel  syslog(2)  system  call.
1970              This will attempt to read the kernel log with various sized read
1971              buffers. Linux only.
1972
1973       --klog-ops N
1974              stop klog workers after N syslog operations.
1975
1976       --l1cache N
1977              start N workers that exercise the CPU level 1 cache  with  reads
1978              and  writes.  A  cache  aligned buffer that is twice the level 1
1979              cache size is read and then written in level 1 cache  set  sized
1980              steps  over each level 1 cache set. This is designed to exercise
1981              cache block evictions. The bogo-op count measures the number  of
1982              cache lines touched in millions.  Where possible, the level 1 cache
1983              geometry is determined from the kernel,  however,  this  is  not
1984              possible  on  some  architectures or kernels, so one may need to
1985              specify these manually. One can specify 3 out  of  the  4  cache
1986              geometric parameters, these are as follows:
1987
1988       --l1cache-line-size N
1989              specify the level 1 cache line size (in bytes)
1990
1991       --l1cache-sets N
1992              specify the number of level 1 cache sets
1993
1994       --l1cache-size N
1995              specify the level 1 cache size (in bytes)
1996
1997       --l1cache-ways N
1998              specify the number of level 1 cache ways
1999
2000       --landlock N
2001              start N workers that exercise Linux 5.13 landlocking. A range of
2002              landlock_create_ruleset flags are exercised  with  a  read  only
2003              file rule to see if a directory can be accessed and a read-write
2004              file create can be blocked. Each ruleset attempt is exercised in
2005              a new child context and this is the limiting factor on the speed
2006              of the stressor.
2007
2008       --landlock-ops N
2009              stop the landlock stressors after N landlock ruleset bogo opera‐
2010              tions.
2011
2012       --lease N
2013              start  N  workers locking, unlocking and breaking leases via the
2014              fcntl(2) F_SETLEASE operation. The parent processes  continually
2015              lock and unlock a lease on a file while a user selectable number
2016              of child processes open the file with  a  non-blocking  open  to
2017              generate SIGIO lease breaking notifications to the parent.  This
2018              stressor is only available if F_SETLEASE,  F_WRLCK  and  F_UNLCK
2019              support is provided by fcntl(2).
2020
2021       --lease-ops N
2022              stop lease workers after N bogo operations.
2023
2024       --lease-breakers N
2025              start  N  lease  breaker child processes per lease worker.  Nor‐
2026              mally one child is plenty to force many SIGIO lease breaking no‐
2027              tification  signals  to  the parent, however, this option allows
2028              one to specify more child processes if required.
2029
2030       --link N
2031              start N workers creating and removing hardlinks.
2032
2033       --link-ops N
2034              stop link stress workers after N bogo operations.
2035
2036       --list N
2037              start N workers that exercise list data structures. The  default
2038              is  to  add,  find and remove 5,000 64 bit integers into circleq
2039              (doubly linked circle queue), list (doubly linked  list),  slist
2040              (singly  linked  list),  slistt (singly linked list using tail),
2041              stailq (singly linked tail queue) and tailq (doubly linked  tail
2042              queue) lists. The intention of this stressor is to exercise mem‐
2043              ory and cache with the various list operations.
2044
2045       --list-ops N
2046              stop list stressors after N bogo ops. A bogo op covers the addi‐
2047              tion, finding and removing all the items into the list(s).
2048
2049       --list-size N
2050              specify  the  size  of the list, where N is the number of 64 bit
2051              integers to be added into the list.
2052
2053       --list-method [ all | circleq | list | slist | stailq | tailq ]
2054              specify the list to be used. By default, all  the  list  methods
2055              are used (the 'all' option).
2056
2057       --loadavg N
2058              start  N  workers  that  attempt to create thousands of pthreads
2059              that run at the lowest nice priority to force very high load av‐
2060              erages. Linux systems will also perform some I/O writes as pend‐
2061              ing I/O is also factored into system load accounting.
2062
2063       --loadavg-ops N
2064              stop loadavg workers after  N  bogo  scheduling  yields  by  the
2065              pthreads have been reached.
2066
2067       --lockbus N
2068              start N workers that rapidly lock and increment 64 bytes of ran‐
2069              domly chosen memory from a 16MB mmap'd region (Intel x86 and ARM
2070              CPUs  only).   This  will cause cacheline misses and stalling of
2071              CPUs.
2072
2073       --lockbus-ops N
2074              stop lockbus workers after N bogo operations.
2075
2076       --locka N
2077              start N workers that randomly lock and unlock regions of a  file
2078              using  the  POSIX  advisory  locking  mechanism  (see  fcntl(2),
2079              F_SETLK, F_GETLK). Each worker creates a 1024 KB  file  and  at‐
2080              tempts  to  hold a maximum of 1024 concurrent locks with a child
2081              process that also tries to hold 1024 concurrent locks. Old locks
2082              are unlocked on a first-in, first-out basis.
2083
2084       --locka-ops N
2085              stop locka workers after N bogo locka operations.
2086
2087       --lockf N
2088              start  N workers that randomly lock and unlock regions of a file
2089              using the POSIX lockf(3) locking mechanism. Each worker  creates
2090              a  64  KB file and attempts to hold a maximum of 1024 concurrent
2091              locks with a child process that also tries to hold 1024  concur‐
2092              rent  locks. Old locks are unlocked on a first-in, first-out ba‐
2093              sis.
2094
2095       --lockf-ops N
2096              stop lockf workers after N bogo lockf operations.
2097
2098       --lockf-nonblock
2099              instead of using blocking F_LOCK  lockf(3)  commands,  use  non-
2100              blocking  F_TLOCK  commands and re-try if the lock failed.  This
2101              creates extra system call overhead and CPU  utilisation  as  the
2102              number  of  lockf  workers increases and should increase locking
2103              contention.
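
                  A minimal sketch of the non-blocking F_TLOCK retry
                  described above (the descriptor fd and the 4096 byte region
                  length are assumptions) is:

                      #include <errno.h>
                      #include <unistd.h>

                      static int lock_region(int fd)
                      {
                          /* spin on the non-blocking F_TLOCK until granted */
                          while (lockf(fd, F_TLOCK, 4096) < 0) {
                              if (errno != EAGAIN && errno != EACCES)
                                  return -1;   /* real error, give up */
                          }
                          /* ... exercise the locked region ... */
                          return lockf(fd, F_ULOCK, 4096);
                      }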
2104
2105       --lockofd N
2106              start N workers that randomly lock and unlock regions of a  file
2107              using  the  Linux  open  file  description  locks (see fcntl(2),
2108              F_OFD_SETLK, F_OFD_GETLK).  Each worker creates a 1024  KB  file
2109              and  attempts  to hold a maximum of 1024 concurrent locks with a
2110              child process that also tries to hold 1024 concurrent locks. Old
2111              locks are unlocked on a first-in, first-out basis.
2112
2113       --lockofd-ops N
2114              stop lockofd workers after N bogo lockofd operations.
2115
2116       --longjmp N
2117              start  N  workers  that  exercise  setjmp(3)/longjmp(3) by rapid
2118              looping on longjmp calls.
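
                  The loop in question takes the classic setjmp/longjmp form;
                  a minimal sketch is:

                      #include <setjmp.h>
                      #include <stdio.h>

                      static jmp_buf env;
                      static int bounces;

                      int main(void)
                      {
                          if (setjmp(env) == 0 || ++bounces < 1000)
                              longjmp(env, 1);   /* jump straight back to the setjmp */

                          printf("%d longjmp calls\n", bounces);
                          return 0;
                      }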
2119
2120       --longjmp-ops N
2121              stop longjmp stress workers after N bogo longjmp  operations  (1
2122              bogo op is 1000 longjmp calls).
2123
2124       --loop N
2125              start  N workers that exercise the loopback control device. This
2126              creates 2MB loopback devices, expands them to 4MB, performs some
2127              loopback  status  information  get  and  set operations and then
2128              destroys them. Linux only and requires CAP_SYS_ADMIN capability.
2129
2130       --loop-ops N
2131              stop after N bogo loopback creation/deletion operations.
2132
2133       --lsearch N
2134              start N workers that linear search an unsorted array  of  32  bit
2135              integers  using  lsearch(3). By default, there are 8192 elements
2136              in the array.  This is a useful method  to  exercise  sequential
2137              access of memory and processor cache.
2138
2139       --lsearch-ops N
2140              stop  the  lsearch  workers  after N bogo lsearch operations are
2141              completed.
2142
2143       --lsearch-size N
2144              specify the size (number of 32 bit integers)  in  the  array  to
2145              lsearch. Size can be from 1K to 4M.
2146
2147       --madvise N
2148              start  N workers that apply random madvise(2) advise settings on
2149              pages of a 4MB file backed shared memory mapping.
2150
2151       --madvise-ops N
2152              stop madvise stressors after N bogo madvise operations.
2153
2154       --malloc N
2155              start N workers continuously calling malloc(3), calloc(3), real‐
2156              loc(3)  and  free(3). By default, up to 65536 allocations can be
2157              active at any point, but this can be  altered  with  the  --mal‐
2158              loc-max option.  Allocation, reallocation and freeing are chosen
              at random; 50% of the time memory is allocated (via malloc,
2160              calloc  or  realloc) and 50% of the time allocations are free'd.
2161              Allocation sizes are also random, with  the  maximum  allocation
2162              size  controlled  by the --malloc-bytes option, the default size
2163              being 64K.  The worker is re-started if it is killed by the  out
2164              of memory (OOM) killer.
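
              For example, an illustrative invocation running 4 malloc
              stressors with at most 1 MB per allocation for 100000 bogo
              operations is:

                 stress-ng --malloc 4 --malloc-bytes 1m --malloc-ops 100000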
2165
2166       --malloc-bytes N
2167              maximum  per  allocation/reallocation size. Allocations are ran‐
2168              domly selected from 1 to N bytes. One can specify the size as  %
2169              of  total  available memory or in units of Bytes, KBytes, MBytes
2170              and GBytes using the suffix b, k,  m  or  g.   Large  allocation
2171              sizes  cause the memory allocator to use mmap(2) rather than ex‐
2172              panding the heap using brk(2).
2173
2174       --malloc-max N
2175              maximum number of active allocations  allowed.  Allocations  are
              chosen at random and placed in an allocation slot. Because there
              is roughly a 50%/50% split between allocation and freeing, typi‐
              cally half of the allocation slots are in use at any one time.
2179
2180       --malloc-ops N
              stop after N malloc bogo operations. One bogo operation relates
2182              to a successful malloc(3), calloc(3) or realloc(3).
2183
2184       --malloc-pthreads N
2185              specify number of malloc stressing concurrent pthreads  to  run.
2186              The  default is 0 (just one main process, no pthreads). This op‐
2187              tion will do nothing if pthreads are not supported.
2188
2189       --malloc-thresh N
2190              specify the threshold  where  malloc  uses  mmap(2)  instead  of
2191              sbrk(2)  to allocate more memory. This is only available on sys‐
2192              tems that provide the GNU C mallopt(3) tuning function.
2193
2194       --matrix N
2195              start N workers that perform various matrix operations on float‐
2196              ing point values. Testing on 64 bit x86 hardware shows that this
2197              provides a good mix of memory, cache and floating  point  opera‐
2198              tions and is an excellent way to make a CPU run hot.
2199
2200              By default, this will exercise all the matrix stress methods one
2201              by one.  One can specify a specific matrix  stress  method  with
2202              the --matrix-method option.
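
              For example, an illustrative invocation that runs 2 matrix
              stressors using just the prod method on 128 × 128 matrices
              is:

                 stress-ng --matrix 2 --matrix-method prod --matrix-size 128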
2203
2204       --matrix-ops N
2205              stop matrix stress workers after N bogo operations.
2206
2207       --matrix-method method
2208              specify  a matrix stress method. Available matrix stress methods
2209              are described as follows:
2210
2211              Method           Description
2212              all              iterate over all the below matrix stress  meth‐
2213                               ods
2214              add              add two N × N matrices
2215              copy             copy one N × N matrix to another
2216              div              divide an N × N matrix by a scalar
2217              frobenius        Frobenius product of two N × N matrices
2218              hadamard         Hadamard product of two N × N matrices
2219              identity         create an N × N identity matrix
2220              mean             arithmetic mean of two N × N matrices
2221              mult             multiply an N × N matrix by a scalar
2222              negate           negate an N × N matrix
2223              prod             product of two N × N matrices
2224              sub              subtract  one  N  × N matrix from another N × N
2225                               matrix
2226              square           multiply an N × N matrix by itself
              trans            transpose an N × N matrix
              zero             zero an N × N matrix
2230
2231       --matrix-size N
2232              specify the N × N size of the matrices.  Smaller  values  result
              in a floating point compute throughput bound stressor, whereas
2234              large values result in a cache  and/or  memory  bandwidth  bound
2235              stressor.
2236
2237       --matrix-yx
2238              perform  matrix  operations  in order y by x rather than the de‐
2239              fault x by y. This is suboptimal ordering compared  to  the  de‐
2240              fault and will perform more data cache stalls.
2241
2242       --matrix-3d N
2243              start  N  workers  that  perform various 3D matrix operations on
2244              floating point values. Testing on 64 bit x86 hardware shows that
2245              this provides a good mix of memory, cache and floating point op‐
2246              erations and is an excellent way to make a CPU run hot.
2247
2248              By default, this will exercise all the 3D matrix stress  methods
2249              one  by one.  One can specify a specific 3D matrix stress method
2250              with the --matrix-3d-method option.
2251
2252       --matrix-3d-ops N
2253              stop the 3D matrix stress workers after N bogo operations.
2254
2255       --matrix-3d-method method
2256              specify a 3D matrix stress method. Available  3D  matrix  stress
2257              methods are described as follows:
2258
2259              Method           Description
2260              all              iterate  over all the below matrix stress meth‐
2261                               ods
2262              add              add two N × N × N matrices
2263              copy             copy one N × N × N matrix to another
2264              div              divide an N × N × N matrix by a scalar
2265              frobenius        Frobenius product of two N × N × N matrices
2266              hadamard         Hadamard product of two N × N × N matrices
2267              identity         create an N × N × N identity matrix
2268              mean             arithmetic mean of two N × N × N matrices
2269              mult             multiply an N × N × N matrix by a scalar
2270              negate           negate an N × N × N matrix
2271              sub              subtract one N × N × N matrix from another N  ×
2272                               N × N matrix
2273              trans            transpose an N × N × N matrix
2274              zero             zero an N × N × N matrix
2275
2276       --matrix-3d-size N
2277              specify  the N × N × N size of the matrices.  Smaller values re‐
2278              sult in a floating  point  compute  throughput  bound  stressor,
              whereas large values result in a cache and/or memory bandwidth
2280              bound stressor.
2281
2282       --matrix-3d-zyx
2283              perform matrix operations in order z by y by x rather  than  the
2284              default x by y by z. This is suboptimal ordering compared to the
2285              default and will perform more data cache stalls.
2286
2287       --mcontend N
2288              start N workers that produce memory contention  read/write  pat‐
2289              terns.  Each stressor runs with 5 threads that read and write to
2290              two different mappings of the  same  underlying  physical  page.
2291              Various caching operations are also exercised to cause sub-opti‐
2292              mal memory access patterns.  The threads  also  randomly  change
2293              CPU affinity to exercise CPU and memory migration stress.
2294
2295       --mcontend-ops N
2296              stop mcontend stressors after N bogo read/write operations.
2297
2298       --membarrier N
2299              start  N workers that exercise the membarrier system call (Linux
2300              only).
2301
2302       --membarrier-ops N
2303              stop membarrier stress workers after N  bogo  membarrier  opera‐
2304              tions.
2305
2306       --memcpy N
2307              start  N workers that copy 2MB of data from a shared region to a
2308              buffer using memcpy(3) and then move the data in the buffer with
2309              memmove(3)  with 3 different alignments. This will exercise pro‐
2310              cessor cache and system memory.
2311
2312       --memcpy-ops N
2313              stop memcpy stress workers after N bogo memcpy operations.
2314
2315       --memcpy-method [ all | libc | builtin | naive ]
2316              specify a memcpy copying method. Available  memcpy  methods  are
2317              described as follows:
2318
2319              Method           Description
2320              all              use libc, builtin and naive methods
2321              libc             use  libc memcpy and memmove functions, this is
2322                               the default
2323              builtin          use the compiler built in optimized memcpy  and
2324                               memmove functions
2325              naive            use  naive byte by byte copying and memory mov‐
                               ing built with default compiler optimization
2327                               flags
2328              naive_o0         use  unoptimized naive byte by byte copying and
2329                               memory moving
2330              naive_o3         use optimized naive byte by  byte  copying  and
                               memory moving built with -O3 optimization and
2332                               where possible use CPU specific optimizations
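
              For example, an illustrative invocation that exercises only
              the naive copy method for 5000 bogo operations is:

                 stress-ng --memcpy 1 --memcpy-method naive --memcpy-ops 5000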
2333
2334       --memfd N
2335              start N workers that create  allocations  of  1024  pages  using
2336              memfd_create(2)  and  ftruncate(2) for allocation and mmap(2) to
2337              map the allocation  into  the  process  address  space.   (Linux
2338              only).
2339
2340       --memfd-bytes N
2341              allocate  N bytes per memfd stress worker, the default is 256MB.
              One can specify the size as % of total available memory or in
2343              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2344              m or g.
2345
2346       --memfd-fds N
2347              create N memfd file descriptors, the default is 256. One can se‐
              lect 8 to 4096 memfd file descriptors with this option.
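
              For example, an illustrative invocation that runs 2 memfd
              stressors, each allocating 64 MB across 64 memfd file
              descriptors, is:

                 stress-ng --memfd 2 --memfd-bytes 64m --memfd-fds 64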
2349
2350       --memfd-ops N
              stop after N memfd_create(2) bogo operations.
2352
2353       --memhotplug N
2354              start  N workers that offline and online memory hotplug regions.
2355              Linux only and requires CAP_SYS_ADMIN capabilities.
2356
2357       --memhotplug-ops N
2358              stop memhotplug stressors after N memory offline and online bogo
2359              operations.
2360
2361       --memrate N
2362              start N workers that exercise a buffer with 64, 32, 16 and 8 bit
2363              reads and writes.  This memory stressor allows one to also spec‐
2364              ify  the maximum read and write rates. The stressors will run at
2365              maximum speed if no read or write rates are specified.
2366
2367       --memrate-ops N
2368              stop after N bogo memrate operations.
2369
2370       --memrate-bytes N
2371              specify the size of the memory buffer being exercised.  The  de‐
2372              fault size is 256MB. One can specify the size in units of Bytes,
2373              KBytes, MBytes and GBytes using the suffix b, k, m or g.
2374
2375       --memrate-rd-mbs N
2376              specify the maximum allowed read rate in MB/sec. The actual read
2377              rate  is dependent on scheduling jitter and memory accesses from
2378              other running processes.
2379
2380       --memrate-wr-mbs N
              specify the maximum allowed write rate in MB/sec.  The  actual
2382              write rate is dependent on scheduling jitter and memory accesses
2383              from other running processes.
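
              For example, an illustrative invocation that exercises a 64 MB
              buffer with reads capped at 1000 MB/sec and writes capped at
              500 MB/sec is:

                 stress-ng --memrate 1 --memrate-bytes 64m \
                         --memrate-rd-mbs 1000 --memrate-wr-mbs 500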
2384
2385       --memthrash N
2386              start N workers that thrash and exercise a 16MB buffer in  vari‐
2387              ous  ways  to  try and trip thermal overrun.  Each stressor will
2388              start 1 or more threads.  The number of  threads  is  chosen  so
2389              that  there will be at least 1 thread per CPU. Note that the op‐
2390              timal choice for N is a value that divides into  the  number  of
2391              CPUs.
2392
2393       --memthrash-ops N
2394              stop after N memthrash bogo operations.
2395
2396       --memthrash-method method
2397              specify  a  memthrash  stress method. Available memthrash stress
2398              methods are described as follows:
2399
2400              Method           Description
2401              all              iterate over all the below memthrash methods
2402              chunk1           memset 1 byte chunks of random data into random
2403                               locations
2404              chunk8           memset 8 byte chunks of random data into random
2405                               locations
2406              chunk64          memset 64 byte chunks of random data into  ran‐
2407                               dom locations
2408              chunk256         memset 256 byte chunks of random data into ran‐
2409                               dom locations
2410              chunkpage        memset page size chunks  of  random  data  into
2411                               random locations
2412              flip             flip (invert) all bits in random locations
2413              flush            flush cache line in random locations
              lock             lock randomly chosen locations (Intel x86 and
2415                               ARM CPUs only)
2416              matrix           treat memory as a 2 × 2 matrix and swap  random
2417                               elements
              memmove          copy all the data in the buffer to the next memory
2419                               location
2420              memset           memset the memory with random data
2421              mfence           stores with write serialization
2422              prefetch         prefetch data at random memory locations
2423              random           randomly run any of the memthrash  methods  ex‐
2424                               cept for 'random' and 'all'
2425              spinread         spin  loop  read  the same random location 2^19
2426                               times
2427              spinwrite        spin loop write the same random  location  2^19
2428                               times
2429              swap             step  through memory swapping bytes in steps of
2430                               65 and 129 byte strides
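
              For example, an illustrative invocation that runs 4 memthrash
              stressors using just the flip method is:

                 stress-ng --memthrash 4 --memthrash-method flip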
2431
2432       --mergesort N
2433              start N workers that sort 32 bit integers using the  BSD  merge‐
2434              sort.
2435
2436       --mergesort-ops N
2437              stop mergesort stress workers after N bogo mergesorts.
2438
2439       --mergesort-size N
2440              specify  number  of  32  bit integers to sort, default is 262144
2441              (256 × 1024).
2442
2443       --mincore N
2444              start N workers that walk through all of memory 1 page at a time
              checking if the page is mapped and resident in memory using
2446              mincore(2). It also maps and unmaps a page to check if the  page
2447              is mapped or not using mincore(2).
2448
2449       --mincore-ops N
              stop after N mincore bogo operations. One mincore bogo op is
              equivalent to 300 mincore(2) calls.

       --mincore-random
              instead of walking through pages sequentially, select pages at
              random. The chosen address is iterated over by shifting it
              right one place and checked by mincore until the address is
              less than or equal to the page size.
2456
2457       --misaligned N
              start N workers that perform misaligned reads and writes. By de‐
              fault, this will exercise 128 bit misaligned reads and writes in
              8 x 16 bits, 4 x 32 bits, 2 x 64 bits and 1 x 128 bits.  Mis‐
              aligned reads and writes operate at a 1 byte offset from the nat‐
              ural alignment of the data type. On some architectures this can
              cause SIGBUS, SIGILL or SIGSEGV; these are handled and the mis‐
              aligned stressor method causing the error is disabled.
2465
2466       --misaligned-ops N
              stop after N misaligned bogo operations. A misaligned bogo op is
2468              equivalent to 65536 x 128 bit reads or writes.
2469
2470       --misaligned-method M
2471              Available misaligned stress methods are described as follows:
2472
2473              Method         Description
2474              all            iterate over all the following misaligned methods
2475              int16rd        8 x 16 bit integer reads
2476              int16wr        8 x 16 bit integer writes
2477              int16inc       8 x 16 bit integer increments
2478              int16atomic    8 x 16 bit atomic integer increments
2479              int32rd        4 x 32 bit integer reads
2480              int32wr        4 x 32 bit integer writes
2481              int32inc       4 x 32 bit integer increments
2482              int32atomic    4 x 32 bit atomic integer increments
2483              int64rd        2 x 64 bit integer reads
2484              int64wr        2 x 64 bit integer writes
2485              int64inc       2 x 64 bit integer increments
2486              int64atomic    2 x 64 bit atomic integer increments
2487              int128rd       1 x 128 bit integer reads
2488              int128wr       1 x 128 bit integer writes
2489              int128inc      1 x 128 bit integer increments
2490              int128atomic   1 x 128 bit atomic integer increments
2491
2492       Note  that  some of these options (128 bit integer and/or atomic opera‐
       tions) may not be available on some systems.
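
       For example, an illustrative invocation that exercises just the 32 bit
       atomic increment method is:

              stress-ng --misaligned 1 --misaligned-method int32atomic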
2494
2495       --mknod N
2496              start N workers that create and remove fifos,  empty  files  and
2497              named sockets using mknod and unlink.
2498
2499       --mknod-ops N
2500              stop directory thrash workers after N bogo mknod operations.
2501
2502       --mlock N
2503              start  N  workers that lock and unlock memory mapped pages using
2504              mlock(2), munlock(2), mlockall(2)  and  munlockall(2).  This  is
2505              achieved by the mapping of three contiguous pages and then lock‐
2506              ing the second page, hence  ensuring  non-contiguous  pages  are
              locked. This is then repeated until the maximum allowed mlocks
2508              or a maximum of 262144 mappings are made.  Next, all future map‐
2509              pings  are  mlocked and the worker attempts to map 262144 pages,
2510              then all pages are munlocked and the pages are unmapped.
2511
2512       --mlock-ops N
2513              stop after N mlock bogo operations.
2514
2515       --mlockmany N
2516              start N workers that fork off a default of 1024 child  processes
2517              in  total; each child will attempt to anonymously mmap and mlock
2518              the maximum allowed mlockable memory size.  The stress test  at‐
2519              tempts to avoid swapping by tracking low memory and swap alloca‐
2520              tions (but some swapping may occur).  Once  either  the  maximum
              number of child processes is reached or all mlockable in-core mem‐
2522              ory is locked then child processes are  killed  and  the  stress
2523              test is repeated.
2524
2525       --mlockmany-ops N
2526              stop after N mlockmany (mmap and mlock) operations.
2527
2528       --mlockmany-procs N
2529              set  the  number  of child processes to create per stressor. The
2530              default is to start a maximum of 1024 child processes  in  total
2531              across  all  the  stressors. This option allows the setting of N
2532              child processes per stressor.
2533
2534       --mmap N
2535              start N workers  continuously  calling  mmap(2)/munmap(2).   The
2536              initial   mapping   is   a   large   chunk  (size  specified  by
2537              --mmap-bytes) followed  by  pseudo-random  4K  unmappings,  then
2538              pseudo-random  4K mappings, and then linear 4K unmappings.  Note
2539              that this can cause systems to trip the  kernel  OOM  killer  on
              Linux systems if insufficient physical memory and swap are
              available.  The MAP_POPULATE option is used to populate pages
2542              into memory on systems that support this.  By default, anonymous
2543              mappings are used, however, the --mmap-file and --mmap-async op‐
2544              tions allow one to perform file based mappings if desired.
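
              For example, an illustrative invocation that runs 2 mmap
              stressors on 128 MB file backed mappings with per-page
              mprotect changes is:

                 stress-ng --mmap 2 --mmap-bytes 128m --mmap-file \
                         --mmap-mprotect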
2545
2546       --mmap-ops N
2547              stop mmap stress workers after N bogo operations.
2548
2549       --mmap-async
2550              enable  file based memory mapping and use asynchronous msync'ing
2551              on each page, see --mmap-file.
2552
2553       --mmap-bytes N
2554              allocate N bytes per mmap stress worker, the default  is  256MB.
2555              One  can  specify  the size as % of total available memory or in
2556              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2557              m or g.
2558
2559       --mmap-file
2560              enable  file based memory mapping and by default use synchronous
2561              msync'ing on each page.
2562
2563       --mmap-mmap2
2564              use mmap2 for 4K page aligned offsets  if  mmap2  is  available,
2565              otherwise fall back to mmap.
2566
2567       --mmap-mprotect
2568              change  protection settings on each page of memory.  Each time a
2569              page or a group of pages are mapped or remapped then this option
2570              will  make the pages read-only, write-only, exec-only, and read-
2571              write.
2572
2573       --mmap-odirect
2574              enable file based memory mapping and use O_DIRECT direct I/O.
2575
2576       --mmap-osync
              enable file based memory mapping and use O_SYNC synchronous I/O
2578              integrity completion.
2579
2580       --mmapaddr N
2581              start  N  workers that memory map pages at a random memory loca‐
              tion that is not already mapped. On 64 bit machines the address
              is a randomly chosen 32 bit or 64 bit address. If the map‐
2584              ping works a second page is memory mapped from the first  mapped
2585              address.  The  stressor  exercises mmap/munmap, mincore and seg‐
2586              fault handling.
2587
2588       --mmapaddr-ops N
2589              stop after N random address mmap bogo operations.
2590
2591       --mmapfork N
2592              start N workers that each fork off 32 child processes,  each  of
2593              which tries to allocate some of the free memory left in the sys‐
2594              tem (and trying to avoid any  swapping).   The  child  processes
2595              then hint that the allocation will be needed with madvise(2) and
2596              then memset it to zero and hint that it is no longer needed with
2597              madvise before exiting.  This produces significant amounts of VM
              activity, a lot of cache misses and minimal swapping.
2599
2600       --mmapfork-ops N
2601              stop after N mmapfork bogo operations.
2602
2603       --mmapfixed N
2604              start N workers that perform fixed address allocations from  the
2605              top  virtual address down to 128K.  The allocated sizes are from
              1 page to 8 pages and various random mmap flags are used:
2607              MAP_SHARED/MAP_PRIVATE, MAP_LOCKED, MAP_NORESERVE, MAP_POPULATE.
2608              If successfully map'd then the allocation is remap'd to  an  ad‐
2609              dress  that  is  several  pages  higher  in memory. Mappings and
2610              remappings are madvised with random madvise options  to  further
2611              exercise the mappings.
2612
2613       --mmapfixed-ops N
2614              stop after N mmapfixed memory mapping bogo operations.
2615
2616       --mmaphuge N
2617              start  N  workers  that  attempt to mmap a set of huge pages and
2618              large huge page sized mappings. Successful mappings are madvised
2619              with  MADV_NOHUGEPAGE and MADV_HUGEPAGE settings and then 1/64th
2620              of the normal small page size pages are touched. Finally, an at‐
2621              tempt  to unmap a small page size page at the end of the mapping
2622              is made (these may fail on huge pages) before the set  of  pages
2623              are  unmapped.  By default 8192 mappings are attempted per round
2624              of mappings or until swapping is detected.
2625
2626       --mmaphuge-ops N
              stop after N mmaphuge bogo operations.
2628
2629       --mmaphuge-mmaps N
2630              set the number of huge page mappings to attempt in each round of
2631              mappings. The default is 8192 mappings.
2632
2633       --mmapmany N
2634              start  N workers that attempt to create the maximum allowed per-
2635              process memory mappings. This is achieved by mapping 3  contigu‐
2636              ous pages and then unmapping the middle page hence splitting the
2637              mapping into two. This is then repeated until  the  maximum  al‐
2638              lowed mappings or a maximum of 262144 mappings are made.
2639
2640       --mmapmany-ops N
              stop after N mmapmany bogo operations.
2642
2643       --mq N start  N sender and receiver processes that continually send and
2644              receive messages using POSIX message queues. (Linux only).
2645
2646       --mq-ops N
2647              stop after N bogo POSIX message send operations completed.
2648
2649       --mq-size N
2650              specify size of POSIX message queue. The default size is 10 mes‐
              sages and on most Linux systems this is the maximum allowed size
2652              for normal users. If the given size is greater than the  allowed
2653              message  queue size then a warning is issued and the maximum al‐
2654              lowed size is used instead.
2655
2656       --mremap N
2657              start N workers continuously calling mmap(2), mremap(2) and mun‐
2658              map(2).   The  initial  anonymous mapping is a large chunk (size
2659              specified by --mremap-bytes) and then iteratively halved in size
2660              by remapping all the way down to a page size and then back up to
2661              the original size.  This worker is only available for Linux.
2662
2663       --mremap-ops N
2664              stop mremap stress workers after N bogo operations.
2665
2666       --mremap-bytes N
2667              initially allocate N bytes per remap stress worker, the  default
2668              is  256MB.  One  can specify the size in units of Bytes, KBytes,
2669              MBytes and GBytes using the suffix b, k, m or g.
2670
2671       --mremap-mlock
2672              attempt to mlock remapped pages  into  memory  prohibiting  them
2673              from being paged out.  This is a no-op if mlock(2) is not avail‐
2674              able.
2675
2676       --msg N
2677              start N sender and receiver processes that continually send  and
2678              receive messages using System V message IPC.
2679
2680       --msg-ops N
2681              stop after N bogo message send operations completed.
2682
2683       --msg-types N
              select the number of message types (mtype) to use. By default,
              msgsnd sends messages with an mtype of 1; this option allows one
              to send message types in the range 1..N to exercise the message
2687              queue receive ordering. This will also impact throughput perfor‐
2688              mance.
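
              For example, an illustrative invocation that runs 2 msg
              stressors using 4 message types is:

                 stress-ng --msg 2 --msg-types 4 --msg-ops 1000000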
2689
2690       --msync N
2691              start N stressors that msync data from a file backed memory map‐
2692              ping from memory back to the file and msync modified  data  from
2693              the  file back to the mapped memory. This exercises the msync(2)
2694              MS_SYNC and MS_INVALIDATE sync operations.
2695
2696       --msync-ops N
2697              stop after N msync bogo operations completed.
2698
2699       --msync-bytes N
2700              allocate N bytes for the memory  mapped  file,  the  default  is
2701              256MB.  One  can specify the size as % of total available memory
2702              or in units of Bytes, KBytes, MBytes and GBytes using the suffix
2703              b, k, m or g.
2704
2705       --munmap N
2706              start  N  stressors  that  exercise unmapping of shared non-exe‐
              cutable mapped regions of child processes (Linux only). The
              shared memory regions are unmapped page by page with a prime
              sized stride that creates many temporary mapping holes. Once
              the unmappings are complete the child will exit and a new one is
2711              started.  Note that this may trigger segmentation faults in  the
2712              child  process,  these are handled where possible by forcing the
2713              child process to call _exit(2).
2714
2715       --munmap-ops N
2716              stop after N page unmappings.
2717
2718       --nanosleep N
2719              start N workers that each run 256 pthreads that  call  nanosleep
2720              with random delays from 1 to 2^18 nanoseconds. This should exer‐
2721              cise the high resolution timers and scheduler.
2722
2723       --nanosleep-ops N
2724              stop the nanosleep stressor after N bogo nanosleep operations.
2725
2726       --netdev N
2727              start N workers that exercise various netdevice  ioctl  commands
2728              across  all  the available network devices. The ioctls exercised
2729              by this stressor  are  as  follows:  SIOCGIFCONF,  SIOCGIFINDEX,
2730              SIOCGIFNAME, SIOCGIFFLAGS, SIOCGIFADDR, SIOCGIFNETMASK, SIOCGIF‐
2731              METRIC, SIOCGIFMTU, SIOCGIFHWADDR, SIOCGIFMAP and SIOCGIFTXQLEN.
2732              See netdevice(7) for more details of these ioctl commands.
2733
2734       --netdev-ops N
2735              stop after N netdev bogo operations completed.
2736
2737       --netlink-proc N
2738              start   N   workers  that  spawn  child  processes  and  monitor
2739              fork/exec/exit process events via the  proc  netlink  connector.
2740              Each  event  received is counted as a bogo op. This stressor can
2741              only be run on Linux and requires CAP_NET_ADMIN capability.
2742
2743       --netlink-proc-ops N
2744              stop the proc netlink connector stressors after N bogo ops.
2745
2746       --netlink-task N
2747              start N workers that collect task  statistics  via  the  netlink
2748              taskstats interface.  This stressor can only be run on Linux and
2749              requires CAP_NET_ADMIN capability.
2750
2751       --netlink-task-ops N
2752              stop the taskstats netlink connector stressors after N bogo ops.
2753
2754       --nice N
2755              start N cpu consuming workers that exercise the  available  nice
2756              levels.  Each  iteration  forks  off  a  child process that runs
              through all the nice levels running a busy loop for 0.1 sec‐
2758              onds per level and then exits.
2759
2760       --nice-ops N
              stop after N bogo nice loops.
2762
2763       --nop N
2764              start  N  workers that consume cpu cycles issuing no-op instruc‐
2765              tions. This stressor is available if the assembler supports  the
2766              "nop" instruction.
2767
2768       --nop-ops N
2769              stop nop workers after N no-op bogo operations. Each bogo-opera‐
2770              tion is equivalent to 256 loops of 256 no-op instructions.
2771
2772       --nop-instr INSTR
2773              use alternative nop instruction INSTR. For x86 CPUs INSTR can be
2774              one  of  nop, pause, nop2 (2 byte nop) through to nop11 (11 byte
2775              nop). For ARM CPUs, INSTR can be one of nop and yield. For other
              processors, INSTR is only nop. If the chosen INSTR generates a
2777              SIGILL signal, then the stressor falls back to the  vanilla  nop
2778              instruction.
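
              For example, an illustrative invocation that uses the x86
              pause instruction is:

                 stress-ng --nop 2 --nop-instr pause --nop-ops 100000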
2779
2780       --null N
2781              start N workers writing to /dev/null.
2782
2783       --null-ops N
2784              stop  null  stress  workers  after N /dev/null bogo write opera‐
2785              tions.
2786
2787       --numa N
2788              start N workers that migrate stressors and a 4MB  memory  mapped
2789              buffer  around  all  the  available  NUMA  nodes.  This uses mi‐
2790              grate_pages(2)  to  move  the   stressors   and   mbind(2)   and
2791              move_pages(2) to move the pages of the mapped buffer. After each
2792              move, the buffer is written to force activity over the bus which
              results in cache misses. This test will only run on hardware with
2794              NUMA enabled and more than 1 NUMA node.
2795
2796       --numa-ops N
2797              stop NUMA stress workers after N bogo NUMA operations.
2798
2799       --oom-pipe N
2800              start N workers that create as many pipes as allowed  and  exer‐
2801              cise  expanding  and  shrinking  the pipes from the largest pipe
2802              size down to a page size. Data is written  into  the  pipes  and
2803              read  out  again to fill the pipe buffers. With the --aggressive
2804              mode enabled the data is not read out when the pipes are shrunk,
2805              causing  the kernel to OOM processes aggressively.  Running many
              instances of this stressor will force the kernel to OOM processes
2807              due to the many large pipe buffer allocations.
2808
2809       --oom-pipe-ops N
2810              stop after N bogo pipe expand/shrink operations.
2811
2812       --opcode N
2813              start  N  workers  that  fork off children that execute randomly
2814              generated executable code.  This will generate  issues  such  as
2815              illegal  instructions,  bus  errors, segmentation faults, traps,
              and floating point errors that are handled gracefully by the stres‐
2817              sor.
2818
2819       --opcode-ops N
2820              stop after N attempts to execute illegal code.
2821
2822       --opcode-method [ inc | mixed | random | text ]
2823              select  the  opcode generation method.  By default, random bytes
2824              are used to generate the executable code. This option allows one
              to select one of the following four methods:
2826
2827              Method                  Description
2828              inc                     use  incrementing 32 bit opcode patterns
                                      from 0x00000000 to 0xffffffff inclusive.
2830              mixed                   use a mix of incrementing 32 bit  opcode
2831                                      patterns  and  random 32 bit opcode pat‐
2832                                      terns that are  also  inverted,  encoded
2833                                      with gray encoding and bit reversed.
2834              random                  generate opcodes using random bytes from
2835                                      a mwc random generator.
2836              text                    copies random chunks of  code  from  the
2837                                      stress-ng   text  segment  and  randomly
2838                                      flips single bits in a random choice  of
2839                                      1/8th of the code.
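
              For example, an illustrative invocation that executes mutated
              copies of the stress-ng text segment is:

                 stress-ng --opcode 2 --opcode-method text --opcode-ops 10000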
2840
2841       -o N, --open N
2842              start  N  workers  that perform open(2) and then close(2) opera‐
2843              tions on /dev/zero. The maximum opens at one time is system  de‐
2844              fined,  so  the  test will run up to this maximum, or 65536 open
              file descriptors, whichever comes first.
2846
2847       --open-ops N
2848              stop the open stress workers after N bogo open operations.
2849
2850       --open-fd
2851              run a child process that scans  /proc/$PID/fd  and  attempts  to
2852              open the files that the stressor has opened. This exercises rac‐
2853              ing open/close operations on the proc interface.
2854
2855       --pci N
2856              exercise PCI sysfs by running N  workers  that  read  data  (and
2857              mmap/unmap  PCI  config or PCI resource files). Linux only. Run‐
2858              ning as root will allow config and resource mmappings to be read
2859              and exercises PCI I/O mapping.
2860
2861       --pci-ops N
2862              stop  pci stress workers after N PCI subdirectory exercising op‐
2863              erations.
2864
2865       --personality N
2866              start N workers that attempt to set personality and get all  the
2867              available personality types (process execution domain types) via
2868              the personality(2) system call. (Linux only).
2869
2870       --personality-ops N
2871              stop personality stress workers after N bogo personality  opera‐
2872              tions.
2873
2874       --physpage N
2875              start N workers that use /proc/self/pagemap and /proc/kpagecount
2876              to determine the physical page  and  page  count  of  a  virtual
2877              mapped  page  and a page that is shared among all the stressors.
2878              Linux only and requires the CAP_SYS_ADMIN capabilities.
2879
2880       --physpage-ops N
2881              stop physpage stress  workers  after  N  bogo  physical  address
2882              lookups.
2883
2884       --pidfd N
2885              start   N   workers   that   exercise  signal  sending  via  the
2886              pidfd_send_signal system call.  This stressor creates child pro‐
2887              cesses  and  checks  if they exist and can be stopped, restarted
2888              and killed using the pidfd_send_signal system call.
2889
2890       --pidfd-ops N
2891              stop pidfd stress workers after N child processes have been cre‐
2892              ated, tested and killed with pidfd_send_signal.
2893
2894       --ping-sock N
2895              start  N workers that send small randomized ICMP messages to the
2896              localhost across a range of ports (1024..65535) using  a  "ping"
2897              socket  with  an AF_INET domain, a SOCK_DGRAM socket type and an
2898              IPPROTO_ICMP protocol.
2899
2900       --ping-sock-ops N
2901              stop the ping-sock stress workers  after  N  ICMP  messages  are
2902              sent.
2903
2904       -p N, --pipe N
2905              start  N workers that perform large pipe writes and reads to ex‐
2906              ercise pipe I/O.  This exercises memory write and reads as  well
2907              as  context  switching.  Each worker has two processes, a reader
2908              and a writer.
2909
2910       --pipe-ops N
2911              stop pipe stress workers after N bogo pipe write operations.
2912
2913       --pipe-data-size N
2914              specifies the size in bytes of each write  to  the  pipe  (range
2915              from  4  bytes  to  4096  bytes). Setting a small data size will
2916              cause more writes to be buffered in the pipe, hence reducing the
2917              context switch rate between the pipe writer and pipe reader pro‐
2918              cesses. Default size is the page size.
2919
2920       --pipe-size N
2921              specifies the size of the pipe in bytes (for systems  that  sup‐
2922              port  the  F_SETPIPE_SZ  fcntl()  command). Setting a small pipe
2923              size will cause the pipe to  fill  and  block  more  frequently,
2924              hence increasing the context switch rate between the pipe writer
2925              and the pipe reader processes. Default size is 512 bytes.
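
              For example, an illustrative invocation that runs 2 pipe
              stressors using 64 byte writes is:

                 stress-ng --pipe 2 --pipe-data-size 64 --pipe-ops 1000000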
2926
2927       --pipeherd N
2928              start N workers that pass a 64 bit  token  counter  to/from  100
2929              child  processes  over a shared pipe. This forces a high context
2930              switch rate and can trigger a "thundering herd"  of  wakeups  on
2931              processes that are blocked on pipe waits.
2932
2933       --pipeherd-ops N
2934              stop pipe stress workers after N bogo pipe write operations.
2935
2936       --pipeherd-yield
2937              force  a  scheduling  yield after each write, this increases the
2938              context switch rate.
2939
2940       --pkey N
2941              start N workers that change memory protection using a protection
2942              key  (pkey)  and  the pkey_mprotect call (Linux only). This will
2943              try to allocate a pkey and use this  for  the  page  protection,
2944              however,  if  this  fails  then the special pkey -1 will be used
2945              (and the kernel will use the normal mprotect mechanism instead).
2946              Various  page  protection  mixes of read/write/exec/none will be
2947              cycled through on randomly chosen pre-allocated pages.
2948
2949       --pkey-ops N
2950              stop after N pkey_mprotect page protection cycles.
2951
2952       -P N, --poll N
2953              start N workers  that  perform  zero  timeout  polling  via  the
2954              poll(2),  ppoll(2),  select(2),  pselect(2)  and sleep(3) calls.
2955              This wastes system and user time doing nothing.
2956
2957       --poll-ops N
2958              stop poll stress workers after N bogo poll operations.
2959
2960       --poll-fds N
2961              specify the number of file descriptors to poll/ppoll/select/pse‐
2962              lect  on.   The  maximum number for select/pselect is limited by
2963              FD_SETSIZE and the upper maximum is also limited by the  maximum
2964              number of pipe open descriptors allowed.
2965
2966       --prctl N
2967              start  N workers that exercise the majority of the prctl(2) sys‐
2968              tem call options. Each batch of prctl calls is performed  inside
2969              a  new  child  process to ensure the limit of prctl is contained
2970              inside a new process every time.  Some prctl options are  archi‐
2971              tecture  specific,  however,  this  stressor will exercise these
2972              even if they are not implemented.
2973
2974       --prctl-ops N
              stop prctl workers after N batches of prctl calls.
2976
2977       --procfs N
2978              start N workers that read files from /proc and recursively  read
2979              files from /proc/self (Linux only).
2980
2981       --procfs-ops N
2982              stop  procfs  reading  after N bogo read operations. Note, since
2983              the number of entries may vary between kernels,  this  bogo  ops
2984              metric is probably very misleading.
2985
2986       --pthread N
              start N workers that iteratively create and terminate multiple
2988              pthreads (the default is 1024 pthreads per worker). In each  it‐
2989              eration,  each  newly created pthread waits until the worker has
2990              created all the pthreads and then they all terminate together.
2991
2992       --pthread-ops N
2993              stop pthread workers after N bogo pthread create operations.
2994
2995       --pthread-max N
2996              create N pthreads per worker. If the product of  the  number  of
2997              pthreads by the number of workers is greater than the soft limit
2998              of allowed pthreads then the maximum is re-adjusted down to  the
2999              maximum allowed.
3000
3001       --ptrace N
3002              start  N  workers  that  fork  and trace system calls of a child
3003              process using ptrace(2).
3004
3005       --ptrace-ops N
3006              stop ptracer workers after N bogo system calls are traced.
3007
3008       --pty N
3009              start N workers that repeatedly attempt to open  pseudoterminals
3010              and  perform  various  pty  ioctls  upon the ptys before closing
3011              them.
3012
3013       --pty-ops N
3014              stop pty workers after N pty bogo operations.
3015
3016       --pty-max N
3017              try to open a maximum  of  N  pseudoterminals,  the  default  is
3018              65536. The allowed range of this setting is 8..65536.
3019
       -Q N, --qsort N
3021              start N workers that sort 32 bit integers using qsort.
3022
3023       --qsort-ops N
3024              stop qsort stress workers after N bogo qsorts.
3025
3026       --qsort-size N
3027              specify  number  of  32  bit integers to sort, default is 262144
3028              (256 × 1024).
3029
3030       --quota N
3031              start N workers that exercise the Q_GETQUOTA,  Q_GETFMT,  Q_GET‐
3032              INFO,  Q_GETSTATS  and  Q_SYNC  quotactl(2)  commands on all the
3033              available mounted block based file systems. Requires CAP_SYS_AD‐
3034              MIN capability to run.
3035
3036       --quota-ops N
3037              stop quota stress workers after N bogo quotactl operations.
3038
3039       --radixsort N
3040              start N workers that sort random 8 byte strings using radixsort.
3041
3042       --radixsort-ops N
3043              stop radixsort stress workers after N bogo radixsorts.
3044
3045       --radixsort-size N
3046              specify  number  of  strings  to  sort, default is 262144 (256 ×
3047              1024).
3048
3049       --ramfs N
3050              start N workers mounting a memory based file system using  ramfs
3051              and  tmpfs  (Linux  only).  This alternates between mounting and
3052              umounting a ramfs or tmpfs file  system  using  the  traditional
              mount(2) and umount(2) system calls as well as the newer Linux
3054              5.2 fsopen(2), fsmount(2), fsconfig(2) and move_mount(2)  system
3055              calls if they are available. The default ram file system size is
3056              2MB.
3057
3058       --ramfs-ops N
3059              stop after N ramfs mount operations.
3060
3061       --ramfs-size N
3062              set the ramfs size (must be multiples of the page size).
3063
3064       --rawdev N
3065              start N workers that read the underlying raw drive device  using
3066              direct  IO  reads.  The device (with minor number 0) that stores
3067              the current working directory is the raw device to  be  read  by
3068              the stressor.  The read size is exactly the size of the underly‐
3069              ing device block size.  By default, this stressor will  exercise
              all of the rawdev methods (see the --rawdev-method option).
3071              This is a Linux only stressor and requires root privilege to  be
3072              able to read the raw device.
3073
3074       --rawdev-ops N
3075              stop  the rawdev stress workers after N raw device read bogo op‐
3076              erations.
3077
3078       --rawdev-method M
3079              Available rawdev stress methods are described as follows:
3080
3081              Method           Description
3082              all              iterate over all the rawdev stress  methods  as
3083                               listed below:
3084              sweep            repeatedly  read across the raw device from the
3085                               0th block to the end block in steps of the num‐
3086                               ber  of  blocks on the device / 128 and back to
3087                               the start again.
3088              wiggle           repeatedly read across the raw  device  in  128
                               evenly sized steps, with each step reading 1024
                               blocks backwards from the step position.
              ends             repeatedly read the first 128 and last 128
                               blocks of the raw device, alternating between
                               the start and the end of the device.
3095              random           repeatedly read 256 random blocks
3096              burst            repeatedly  read 256 sequential blocks starting
3097                               from a random block on the raw device.
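
              For example, an illustrative invocation (run as root) that
              reads the raw device using the sweep method is:

                 stress-ng --rawdev 1 --rawdev-method sweep --rawdev-ops 1000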
3098
3099       --rawsock N
3100              start N workers that send and  receive  packet  data  using  raw
3101              sockets on the localhost. Requires CAP_NET_RAW to run.
3102
3103       --rawsock-ops N
3104              stop rawsock workers after N packets are received.
3105
3106       --rawpkt N
              start N workers that send and receive ethernet packets using
3108              raw packets on the localhost via the loopback  device.  Requires
3109              CAP_NET_RAW to run.
3110
3111       --rawpkt-ops N
3112              stop  rawpkt workers after N packets from the sender process are
3113              received.
3114
3115       --rawpkt-port N
3116              start at port P. For N rawpkt worker processes, ports P to (P  *
3117              4) - 1 are used. The default starting port is port 14000.
3118
3119       --rawudp N
3120              start  N  workers  that  send  and receive UDP packets using raw
3121              sockets on the localhost. Requires CAP_NET_RAW to run.
3122
3123       --rawudp-ops N
3124              stop rawudp workers after N packets are received.
3125
3126       --rawudp-port N
3127              start at port P. For N rawudp worker processes, ports P to (P  *
3128              4) - 1 are used. The default starting port is port 13000.
3129
3130       --rdrand N
3131              start N workers that read a random number from an on-chip random
              number generator. This uses the rdrand instruction on Intel pro‐
3133              cessors or the darn instruction on Power9 processors.
3134
3135       --rdrand-ops N
3136              stop  rdrand  stress  workers  after N bogo rdrand operations (1
3137              bogo op = 2048 random bits successfully read).
3138
3139       --readahead N
3140              start N  workers  that  randomly  seek  and  perform  4096  byte
3141              read/write  I/O operations on a file with readahead. The default
3142              file size is 64 MB.  Readaheads and reads are  batched  into  16
3143              readaheads and then 16 reads.
3144
3145       --readahead-bytes N
              set the size of the readahead file, the default is 1 GB. One can
3147              specify the size as % of free space on the  file  system  or  in
3148              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3149              m or g.
3150
3151       --readahead-ops N
3152              stop readahead stress workers after N bogo read operations.
3153
3154       --reboot N
3155              start N workers that exercise the reboot(2)  system  call.  When
3156              possible,  it  will create a process in a PID namespace and per‐
              form a reboot power off command that should shut down the
3158              process.  Also, the stressor exercises invalid reboot magic val‐
3159              ues and invalid reboots when there are  insufficient  privileges
3160              that will not actually reboot the system.
3161
3162       --reboot-ops N
3163              stop the reboot stress workers after N bogo reboot cycles.
3164
3165       --remap N
3166              start  N workers that map 512 pages and re-order these pages us‐
3167              ing the deprecated system call remap_file_pages(2). Several page
3168              re-orderings  are  exercised:  forward, reverse, random and many
3169              pages to 1 page.
3170
3171       --remap-ops N
3172              stop after N remapping bogo operations.
3173
3174       -R N, --rename N
3175              start N workers that each create a file and then repeatedly  re‐
3176              name it.
3177
3178       --rename-ops N
3179              stop rename stress workers after N bogo rename operations.
3180
3181       --resources N
3182              start  N  workers  that  consume  various system resources. Each
3183              worker will spawn 1024 child processes that iterate  1024  times
3184              consuming  shared memory, heap, stack, temporary files and vari‐
3185              ous file descriptors (eventfds, memoryfds,  userfaultfds,  pipes
3186              and sockets).
3187
3188       --resources-ops N
3189              stop after N resource child forks.
3190
3191       --revio N
3192              start N workers continually writing in reverse position order to
3193              temporary files. The default mode is to stress test reverse  po‐
3194              sition  ordered  writes with randomly sized sparse holes between
3195              each write.  With the --aggressive option  enabled  without  any
3196              --revio-opts  options  the  revio stressor will work through all
              the --revio-opts options one by one to cover a range of I/O op‐
3198              tions.
3199
3200       --revio-bytes N
3201              write  N  bytes for each revio process, the default is 1 GB. One
3202              can specify the size as % of free space on the file system or in
3203              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3204              m or g.
3205
3206       --revio-opts list
3207              specify various stress test options as a comma  separated  list.
3208              Options are the same as --hdd-opts but without the iovec option.
3209
3210       --revio-ops N
3211              stop revio stress workers after N bogo operations.
3212
3213       --revio-write-size N
3214              specify  size of each write in bytes. Size can be from 1 byte to
3215              4MB.
3216
3217       --rlimit N
              start N workers that exceed CPU and file size resource limits,
3219              generating SIGXCPU and SIGXFSZ signals.
3220
3221       --rlimit-ops N
3222              stop  after  N bogo resource limited SIGXCPU and SIGXFSZ signals
3223              have been caught.
3224
3225       --rmap N
3226              start N workers that exercise the VM reverse-mapping. This  cre‐
3227              ates  16  processes  per  worker  that write/read multiple file-
3228              backed memory mappings. There are 64 lots  of  4  page  mappings
3229              made  onto  the file, with each mapping overlapping the previous
3230              by 3 pages and at least 1 page of non-mapped memory between each
3231              of  the mappings. Data is synchronously msync'd to the file 1 in
3232              every 256 iterations in a random manner.
3233
3234       --rmap-ops N
3235              stop after N bogo rmap memory writes/reads.
3236
3237       --rseq N
3238              start N workers that  exercise  restartable  sequences  via  the
3239              rseq(2)  system  call.  This loops over a long duration critical
3240              section that is likely to be interrupted.  A rseq abort  handler
              keeps count of the number of interruptions and a SIGSEGV
              handler also tracks any failed rseq aborts that can occur if
              there is a mismatch in a rseq check signature. Linux only.
3244
3245       --rseq-ops N
3246              stop  after  N bogo rseq operations. Each bogo rseq operation is
3247              equivalent to 10000 iterations over a long duration rseq handled
3248              critical section.
3249
3250       --rtc N
3251              start  N  workers that exercise the real time clock (RTC) inter‐
3252              faces  via  /dev/rtc  and  /sys/class/rtc/rtc0.  No  destructive
3253              writes (modifications) are performed on the RTC. This is a Linux
3254              only stressor.
3255
3256       --rtc-ops N
3257              stop after N bogo RTC interface accesses.
3258
3259       --schedpolicy N
              start N workers that set the worker to various available
3261              scheduling policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE,
3262              SCHED_FIFO, SCHED_RR and  SCHED_DEADLINE.   For  the  real  time
3263              scheduling  policies a random sched priority is selected between
3264              the minimum and maximum scheduling priority settings.
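
              A minimal sketch of switching scheduling policies with
              sched_setscheduler(2) (illustrative only; the real time
              policies need CAP_SYS_NICE):

                #define _GNU_SOURCE     /* SCHED_BATCH/SCHED_IDLE on glibc */
                #include <sched.h>
                #include <stdio.h>

                int main(void)
                {
                    struct sched_param sp = { .sched_priority = 0 };

                    /* non real time policies use priority 0, no privilege */
                    if (sched_setscheduler(0, SCHED_BATCH, &sp) < 0)
                        perror("SCHED_BATCH");

                    /* real time policies need a priority in the valid range */
                    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);
                    if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
                        perror("SCHED_FIFO (needs CAP_SYS_NICE)");
                    return 0;
                }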
3265
3266       --schedpolicy-ops N
3267              stop after N bogo scheduling policy changes.
3268
3269       --sctp N
3270              start N workers that perform network sctp stress activity  using
3271              the  Stream Control Transmission Protocol (SCTP).  This involves
3272              client/server processes performing rapid connect,  send/receives
3273              and disconnects on the local host.
3274
3275       --sctp-domain D
3276              specify  the  domain to use, the default is ipv4. Currently ipv4
3277              and ipv6 are supported.
3278
3279       --sctp-ops N
3280              stop sctp workers after N bogo operations.
3281
3282       --sctp-port P
3283              start at sctp port P. For N sctp worker processes, ports P to (P
3284              *  4)  -  1 are used for ipv4, ipv6 domains and ports P to P - 1
3285              are used for the unix domain.
3286
3287       --seal N
3288              start N workers that exercise the fcntl(2) SEAL  commands  on  a
3289              small  anonymous file created using memfd_create(2).  After each
3290              SEAL command is issued the stressor also sanity  checks  if  the
3291              seal operation has sealed the file correctly.  (Linux only).
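
              A minimal sketch of the seal mechanism (not the stress-ng
              source), assuming a Linux kernel and glibc with
              memfd_create(2) support:

                #define _GNU_SOURCE
                #include <fcntl.h>
                #include <stdio.h>
                #include <sys/mman.h>
                #include <unistd.h>

                int main(void)
                {
                    int fd = memfd_create("seal-demo", MFD_ALLOW_SEALING);

                    if (fd < 0)
                        return 1;
                    ftruncate(fd, 4096);

                    /* forbid any further size changes */
                    fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK);

                    /* sanity check: growing the file must now fail */
                    if (ftruncate(fd, 8192) < 0)
                        perror("ftruncate after F_SEAL_GROW");
                    close(fd);
                    return 0;
                }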
3292
3293       --seal-ops N
3294              stop after N bogo seal operations.
3295
3296       --seccomp N
3297              start  N workers that exercise Secure Computing system call fil‐
3298              tering. Each worker creates child processes that write  a  short
              message to /dev/null and then exit. 2% of the child processes
3300              have a seccomp filter that disallows the write system  call  and
3301              hence  it  is  killed  by seccomp with a SIGSYS.  Note that this
3302              stressor can generate many audit  log  messages  each  time  the
3303              child is killed.  Requires CAP_SYS_ADMIN to run.
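
              A minimal sketch of a seccomp BPF filter that kills the
              process with SIGSYS on write(2) (illustrative only; it omits
              the architecture check a production filter would perform):

                #include <linux/filter.h>
                #include <linux/seccomp.h>
                #include <stddef.h>
                #include <sys/prctl.h>
                #include <sys/syscall.h>
                #include <unistd.h>

                int main(void)
                {
                    struct sock_filter filter[] = {
                        /* load the system call number */
                        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                                 offsetof(struct seccomp_data, nr)),
                        /* kill with SIGSYS if the call is write(2) */
                        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 0, 1),
                        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                    };
                    struct sock_fprog prog = {
                        .len = sizeof(filter) / sizeof(filter[0]),
                        .filter = filter,
                    };

                    /* lets an unprivileged process install the filter */
                    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
                    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

                    write(1, "killed by SIGSYS\n", 17);  /* never completes */
                    return 0;
                }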
3304
3305       --seccomp-ops N
3306              stop seccomp stress workers after N seccomp filter tests.
3307
3308       --secretmem N
3309              start  N  workers  that  mmap  pages  using  file  mapping off a
3310              memfd_secret file descriptor. Each stress  loop  iteration  will
3311              expand  the  mappable region by 3 pages using ftruncate and mmap
              and touch the pages. The pages are then fragmented by
              unmapping the middle page and then unmapping the first and
              last pages.
3314              This tries to force page fragmentation and also trigger  out  of
3315              memory (OOM) kills of the stressor when the secret memory is ex‐
3316              hausted.  Note this is a Linux 5.11+ only stressor and the  ker‐
3317              nel  needs  to  be booted with "secretmem=" option to allocate a
3318              secret memory reservation.
3319
3320       --secretmem-ops N
3321              stop secretmem stress workers after N stress loop iterations.
3322
3323       --seek N
              start N workers that randomly seek and perform 512 byte
3325              read/write I/O operations on a file. The default file size is 16
3326              GB.
3327
3328       --seek-ops N
3329              stop seek stress workers after N bogo seek operations.
3330
3331       --seek-punch
              punch randomly located 8K holes into the file to cause more
              extents, forcing a more demanding seek stressor (Linux only).
3334
3335       --seek-size N
3336              specify  the  size  of the file in bytes. Small file sizes allow
3337              the I/O to occur in the cache, causing greater CPU  load.  Large
              file sizes force more I/O operations to the drive, causing more wait
3339              time and more I/O on the drive. One  can  specify  the  size  in
3340              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3341              m or g.
3342
3343       --sem N
3344              start N workers that perform POSIX semaphore wait and post oper‐
3345              ations.  By  default,  a  parent  and 4 children are started per
3346              worker  to  provide  some  contention  on  the  semaphore.  This
3347              stresses  fast  semaphore  operations and produces rapid context
3348              switching.
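
              A minimal sketch of the underlying POSIX semaphore contention
              (one parent posting and one child waiting, rather than the 4
              children used by the stressor); compile with -pthread:

                #include <semaphore.h>
                #include <sys/mman.h>
                #include <sys/wait.h>
                #include <unistd.h>

                int main(void)
                {
                    /* process-shared semaphore in shared anonymous memory */
                    sem_t *sem = mmap(NULL, sizeof(*sem),
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);

                    sem_init(sem, 1, 0);    /* pshared = 1, initial value 0 */
                    if (fork() == 0) {
                        for (int i = 0; i < 1000; i++)
                            sem_wait(sem);  /* child waits for each post */
                        _exit(0);
                    }
                    for (int i = 0; i < 1000; i++)
                        sem_post(sem);
                    wait(NULL);
                    sem_destroy(sem);
                    return 0;
                }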
3349
3350       --sem-ops N
3351              stop semaphore stress workers after N bogo semaphore operations.
3352
3353       --sem-procs N
3354              start N child workers per worker to provide  contention  on  the
3355              semaphore, the default is 4 and a maximum of 64 are allowed.
3356
3357       --sem-sysv N
3358              start  N  workers  that perform System V semaphore wait and post
3359              operations. By default, a parent and 4 children are started  per
3360              worker  to  provide  some  contention  on  the  semaphore.  This
3361              stresses fast semaphore operations and  produces  rapid  context
3362              switching.
3363
3364       --sem-sysv-ops N
3365              stop  semaphore  stress  workers after N bogo System V semaphore
3366              operations.
3367
3368       --sem-sysv-procs N
3369              start N child processes per worker to provide contention on  the
3370              System V semaphore, the default is 4 and a maximum of 64 are al‐
3371              lowed.
3372
3373       --sendfile N
3374              start N workers that send an empty file to /dev/null. This oper‐
3375              ation  spends  nearly  all  the time in the kernel.  The default
3376              sendfile size is 4MB.  The sendfile options are for Linux only.
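
              A minimal sketch of the in-kernel copy performed by
              sendfile(2); the input path here is just a hypothetical
              readable file:

                #include <fcntl.h>
                #include <stdio.h>
                #include <sys/sendfile.h>
                #include <sys/stat.h>
                #include <unistd.h>

                int main(void)
                {
                    int in = open("/etc/hostname", O_RDONLY); /* any file */
                    int out = open("/dev/null", O_WRONLY);
                    struct stat st;

                    fstat(in, &st);
                    /* copied entirely in the kernel, no user space buffer */
                    ssize_t n = sendfile(out, in, NULL, (size_t)st.st_size);
                    printf("sendfile copied %zd bytes\n", n);
                    close(in);
                    close(out);
                    return 0;
                }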
3377
3378       --sendfile-ops N
3379              stop sendfile workers after N sendfile bogo operations.
3380
3381       --sendfile-size S
3382              specify the size to be copied with each sendfile call.  The  de‐
3383              fault  size  is 4MB. One can specify the size in units of Bytes,
3384              KBytes, MBytes and GBytes using the suffix b, k, m or g.
3385
3386       --session N
3387              start N workers that create child and grandchild processes  that
3388              set  and  get their session ids. 25% of the grandchild processes
3389              are not waited for by the child to create orphaned sessions that
3390              need to be reaped by init.
3391
3392       --session-ops N
3393              stop  session  workers  after  N child processes are spawned and
3394              reaped.
3395
3396       --set N
3397              start N workers that call system calls that try to set  data  in
3398              the  kernel,  currently these are: setgid, sethostname, setpgid,
3399              setpgrp, setuid, setgroups, setreuid, setregid,  setresuid,  se‐
3400              tresgid  and  setrlimit.  Some of these system calls are OS spe‐
3401              cific.
3402
3403       --set-ops N
3404              stop set workers after N bogo set operations.
3405
3406       --shellsort N
3407              start N workers that sort 32 bit integers using shellsort.
3408
3409       --shellsort-ops N
3410              stop shellsort stress workers after N bogo shellsorts.
3411
3412       --shellsort-size N
3413              specify number of 32 bit integers to  sort,  default  is  262144
3414              (256 × 1024).
3415
3416       --shm N
3417              start N workers that open and allocate shared memory objects us‐
3418              ing the POSIX shared memory interfaces.  By  default,  the  test
3419              will  repeatedly  create  and  destroy 32 shared memory objects,
3420              each of which is 8MB in size.
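
              A minimal sketch of one create/destroy cycle using the POSIX
              shared memory interfaces (the object name is hypothetical;
              link with -lrt on older glibc):

                #include <fcntl.h>
                #include <sys/mman.h>
                #include <unistd.h>

                int main(void)
                {
                    const char *name = "/shm-demo";  /* hypothetical name */
                    size_t sz = 8 * 1024 * 1024;     /* default 8MB size */
                    int fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);

                    if (fd < 0)
                        return 1;
                    ftruncate(fd, (off_t)sz);
                    char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
                    if (p != MAP_FAILED) {
                        p[0] = 1;                    /* touch the object */
                        munmap(p, sz);
                    }
                    close(fd);
                    shm_unlink(name);                /* destroy the object */
                    return 0;
                }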
3421
3422       --shm-ops N
3423              stop after N POSIX shared memory create and destroy bogo  opera‐
3424              tions are complete.
3425
3426       --shm-bytes N
3427              specify  the  size of the POSIX shared memory objects to be cre‐
3428              ated. One can specify the size as % of total available memory or
3429              in units of Bytes, KBytes, MBytes and GBytes using the suffix b,
3430              k, m or g.
3431
3432       --shm-objs N
3433              specify the number of shared memory objects to be created.
3434
3435       --shm-sysv N
3436              start N workers that allocate shared memory using the  System  V
3437              shared  memory  interface.  By default, the test will repeatedly
3438              create and destroy 8 shared memory segments, each  of  which  is
3439              8MB in size.
3440
3441       --shm-sysv-ops N
3442              stop  after  N  shared memory create and destroy bogo operations
3443              are complete.
3444
3445       --shm-sysv-bytes N
3446              specify the size of the shared memory segment to be created. One
3447              can  specify the size as % of total available memory or in units
3448              of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m  or
3449              g.
3450
3451       --shm-sysv-segs N
3452              specify  the number of shared memory segments to be created. The
3453              default is 8 segments.
3454
3455       --sigabrt N
3456              start N workers that create children that are killed by  SIGABRT
3457              signals or by calling abort(3).
3458
3459       --sigabrt-ops N
3460              stop  the  sigabrt  workers after N SIGABRT signals are success‐
3461              fully handled.
3462
3463       --sigchld N
3464              start N workers that create children to  generate  SIGCHLD  sig‐
3465              nals. This exercises children that exit (CLD_EXITED), get killed
3466              (CLD_KILLED), get stopped (CLD_STOPPED) or  continued  (CLD_CON‐
3467              TINUED).
3468
3469       --sigchld-ops N
3470              stop  the  sigchld  workers after N SIGCHLD signals are success‐
3471              fully handled.
3472
3473       --sigfd N
              start N workers that generate SIGRT signals that are read and
              handled by a child process via a file descriptor set up using
              signalfd(2) (Linux only). This will generate a heavy context
              switch load when all CPUs are fully loaded.
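
              A minimal sketch of receiving a signal as a read on a
              signalfd(2) descriptor (single process here, rather than the
              parent/child pair the stressor uses):

                #include <signal.h>
                #include <stdio.h>
                #include <sys/signalfd.h>
                #include <unistd.h>

                int main(void)
                {
                    sigset_t mask;
                    struct signalfd_siginfo si;

                    sigemptyset(&mask);
                    sigaddset(&mask, SIGUSR1);
                    sigprocmask(SIG_BLOCK, &mask, NULL); /* block delivery */

                    int fd = signalfd(-1, &mask, 0);  /* signals via reads */

                    kill(getpid(), SIGUSR1);
                    if (read(fd, &si, sizeof(si)) == sizeof(si))
                        printf("got signal %u via signalfd\n", si.ssi_signo);
                    close(fd);
                    return 0;
                }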
3478
       --sigfd-ops N
3480              stop sigfd workers after N bogo SIGUSR1 signals are sent.
3481
3482       --sigfpe N
3483              start  N  workers  that  rapidly  cause  division by zero SIGFPE
3484              faults.
3485
3486       --sigfpe-ops N
3487              stop sigfpe stress workers after N bogo SIGFPE faults.
3488
3489       --sigio N
3490              start N workers that read data from a child process via  a  pipe
3491              and  generate SIGIO signals. This exercises asynchronous I/O via
3492              SIGIO.
3493
3494       --sigio-ops N
3495              stop sigio stress workers after handling N SIGIO signals.
3496
3497       --signal N
              start N workers that exercise the signal system call using
              three different signal handlers, SIG_IGN (ignore), a SIGCHLD
              handler and
3500              SIG_DFL (default action).  For the SIGCHLD handler, the stressor
3501              sends itself a SIGCHLD signal and checks if it has been handled.
3502              For other handlers, the stressor checks that the SIGCHLD handler
3503              has  not  been called.  This stress test calls the signal system
3504              call directly when possible and will try to avoid the C  library
3505              attempt  to replace signal with the more modern sigaction system
3506              call.
3507
3508       --signal-ops N
3509              stop signal stress workers after N rounds of signal handler set‐
3510              ting.
3511
3512       --signest N
3513              start  N  workers that exercise nested signal handling. A signal
3514              is raised and inside the signal handler a  different  signal  is
3515              raised, working through a list of signals to exercise. An alter‐
3516              native signal stack is used that is large enough to  handle  all
3517              the nested signal calls.  The -v option will log the approximate
3518              size of the stack required and the average stack size per nested
3519              call.
3520
3521       --signest-ops N
3522              stop after handling N nested signals.
3523
3524       --sigpending N
3525              start  N workers that check if SIGUSR1 signals are pending. This
3526              stressor masks SIGUSR1, generates a SIGUSR1 signal and uses sig‐
3527              pending(2)  to see if the signal is pending. Then it unmasks the
3528              signal and checks if the signal is no longer pending.
3529
3530       --sigpending-ops N
3531              stop sigpending stress workers after  N  bogo  sigpending  pend‐
3532              ing/unpending checks.
3533
3534       --sigpipe N
              start N workers that repeatedly spawn off a child process
              that exits before the parent can complete a pipe write,
              causing a SIGPIPE signal.  The child process is spawned using
              clone(2) if it is available or using the slower fork(2)
              otherwise.
3539
3540       --sigpipe-ops N
              stop sigpipe stress workers after N SIGPIPE signals have been
              caught and handled.
3543
3544       --sigq N
3545              start   N  workers  that  rapidly  send  SIGUSR1  signals  using
3546              sigqueue(3) to child processes that wait for the signal via sig‐
3547              waitinfo(2).
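
              A minimal sketch of the sigqueue(3)/sigwaitinfo(2) pairing
              used here (one queued signal with a hypothetical payload
              value of 42):

                #include <signal.h>
                #include <stdio.h>
                #include <sys/wait.h>
                #include <unistd.h>

                int main(void)
                {
                    sigset_t mask;

                    sigemptyset(&mask);
                    sigaddset(&mask, SIGUSR1);
                    sigprocmask(SIG_BLOCK, &mask, NULL); /* block pre-fork */

                    pid_t pid = fork();
                    if (pid == 0) {
                        siginfo_t info;
                        sigwaitinfo(&mask, &info);  /* wait for the signal */
                        printf("got %d\n", info.si_value.sival_int);
                        _exit(0);
                    }

                    union sigval val = { .sival_int = 42 };
                    sigqueue(pid, SIGUSR1, val);    /* queue with payload */
                    wait(NULL);
                    return 0;
                }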
3548
3549       --sigq-ops N
3550              stop sigq stress workers after N bogo signal send operations.
3551
3552       --sigrt N
3553              start  N  workers  that  each  create  child processes to handle
              SIGRTMIN to SIGRTMAX real time signals.  The parent sends each
              child process a RT signal via sigqueue(3) and the child process
3556              waits for this via sigwaitinfo(2).  When the child receives  the
3557              signal  it then sends a RT signal to one of the other child pro‐
              cesses also via sigqueue(3).
3559
3560       --sigrt-ops N
3561              stop sigrt stress workers after N bogo sigqueue signal send  op‐
3562              erations.
3563
3564       --sigsegv N
3565              start  N  workers  that  rapidly  create  and catch segmentation
3566              faults.
3567
3568       --sigsegv-ops N
3569              stop sigsegv stress workers after N bogo segmentation faults.
3570
3571       --sigsuspend N
3572              start N workers that each spawn off 4 child processes that  wait
3573              for  a  SIGUSR1  signal from the parent using sigsuspend(2). The
3574              parent sends SIGUSR1 signals to each child in rapid  succession.
3575              Each sigsuspend wakeup is counted as one bogo operation.
3576
3577       --sigsuspend-ops N
3578              stop sigsuspend stress workers after N bogo sigsuspend wakeups.
3579
3580       --sigtrap N
3581              start  N  workers  that exercise the SIGTRAP signal. For systems
3582              that support SIGTRAP, the signal is generated  using  raise(SIG‐
              TRAP).  On x86 Linux systems the SIGTRAP is also generated by
3584              an int 3 instruction.
3585
3586       --sigtrap-ops N
3587              stop sigtrap stress workers after N SIGTRAPs have been handled.
3588
3589       --skiplist N
3590              start N workers that store and then search for integers using  a
3591              skiplist.   By  default,  65536 integers are added and searched.
3592              This is a useful method to exercise random access of memory  and
3593              processor cache.
3594
3595       --skiplist-ops N
3596              stop  the  skiplist worker after N skiplist store and search cy‐
3597              cles are completed.
3598
3599       --skiplist-size N
3600              specify the size (number of integers) to store and search in the
3601              skiplist. Size can be from 1K to 4M.
3602
3603       --sleep N
3604              start  N  workers that spawn off multiple threads that each per‐
              form multiple sleeps in the range 1us to 0.1s.  This creates multi‐
3606              ple context switches and timer interrupts.
3607
3608       --sleep-ops N
3609              stop after N sleep bogo operations.
3610
3611       --sleep-max P
3612              start P threads per worker. The default is 1024, the maximum al‐
3613              lowed is 30000.
3614
3615       --smi N
3616              start N workers that attempt to generate system  management  in‐
3617              terrupts  (SMIs)  into  the  x86  ring -2 system management mode
3618              (SMM) by exercising the advanced  power  management  (APM)  port
3619              0xb2. This requires the --pathological option and root privilege
3620              and is only implemented on x86 Linux  platforms.  This  probably
3621              does  not  work in a virtualized environment.  The stressor will
3622              attempt to determine the time stolen by  SMIs  with  some  naive
3623              benchmarking.
3624
3625       --smi-ops N
3626              stop after N attempts to trigger the SMI.
3627
3628       -S N, --sock N
3629              start  N  workers  that  perform various socket stress activity.
3630              This involves a pair of client/server processes performing rapid
              connects, sends/receives and disconnects on the local host.
3632
3633       --sock-domain D
3634              specify  the domain to use, the default is ipv4. Currently ipv4,
3635              ipv6 and unix are supported.
3636
3637       --sock-nodelay
3638              This disables the TCP Nagle algorithm, so data segments are  al‐
3639              ways  sent  as  soon  as  possible.   This stops data from being
3640              buffered before being transmitted,  hence  resulting  in  poorer
3641              network utilisation and more context switches between the sender
3642              and receiver.
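
              Roughly, disabling Nagle is a single setsockopt(2) call on
              the connected socket:

                #include <netinet/in.h>
                #include <netinet/tcp.h>
                #include <sys/socket.h>

                int main(void)
                {
                    int fd = socket(AF_INET, SOCK_STREAM, 0);
                    int one = 1;

                    /* segments are now sent as soon as data is written */
                    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                               &one, sizeof(one));
                    return 0;
                }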
3643
3644       --sock-port P
3645              start at socket port P. For N socket worker processes,  ports  P
              to P + N - 1 are used.
3647
3648       --sock-protocol P
3649              Use  the  specified  protocol P, default is tcp. Options are tcp
3650              and mptcp (if supported by the operating system).
3651
3652       --sock-ops N
3653              stop socket stress workers after N bogo operations.
3654
3655       --sock-opts [ random | send | sendmsg | sendmmsg ]
3656              by default, messages are sent using send(2). This option  allows
3657              one  to  specify  the  sending method using send(2), sendmsg(2),
              sendmmsg(2) or a random selection of one of these 3 on each iter‐
3659              ation.   Note  that sendmmsg is only available for Linux systems
3660              that support this system call.
3661
3662       --sock-type [ stream | seqpacket ]
3663              specify the socket type to use. The default type is stream. seq‐
3664              packet currently only works for the unix socket domain.
3665
3666       --sock-zerocopy
3667              enable  zerocopy  for send and recv calls if the MSG_ZEROCOPY is
3668              supported.
3669
3670       --sockabuse N
3671              start N workers that abuse a socket file descriptor with various
              file based system calls that don't normally act on sockets.
              The kernel should handle these illegal and unexpected calls
              gracefully.
3674
3675       --sockabuse-ops N
3676              stop after N iterations of the socket abusing stressor loop.
3677
3678       --sockdiag N
3679              start N workers that exercise the Linux sock_diag netlink socket
3680              diagnostics  (Linux  only).  This currently requests diagnostics
3681              using    UDIAG_SHOW_NAME,    UDIAG_SHOW_VFS,    UDIAG_SHOW_PEER,
3682              UDIAG_SHOW_ICONS,  UDIAG_SHOW_RQLEN  and  UDIAG_SHOW_MEMINFO for
3683              the AF_UNIX family of socket connections.
3684
3685       --sockdiag-ops N
3686              stop after receiving N sock_diag diagnostic messages.
3687
3688       --sockfd N
3689              start N workers that pass file descriptors over  a  UNIX  domain
3690              socket  using  the  CMSG(3)  ancillary  data mechanism. For each
              worker, a pair of client/server processes is created; the
              server opens as many file descriptors on /dev/null as possible
              and passes these over the socket to a client that reads them
              from the CMSG data and immediately closes the files.
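
              A minimal sketch of passing a single file descriptor over a
              socketpair with SCM_RIGHTS ancillary data (the stressor
              passes as many descriptors as it can open):

                #include <fcntl.h>
                #include <string.h>
                #include <sys/socket.h>
                #include <sys/uio.h>
                #include <sys/wait.h>
                #include <unistd.h>

                int main(void)
                {
                    int sv[2], fd = open("/dev/null", O_WRONLY);
                    union {
                        char buf[CMSG_SPACE(sizeof(int))];
                        struct cmsghdr align;   /* forces alignment */
                    } ctrl;
                    char dummy = 0;
                    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
                    struct msghdr msg = {
                        .msg_iov = &iov, .msg_iovlen = 1,
                        .msg_control = ctrl.buf,
                        .msg_controllen = sizeof(ctrl.buf),
                    };
                    struct cmsghdr *cmsg;

                    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
                    if (fork() == 0) {
                        int rfd;
                        recvmsg(sv[1], &msg, 0);  /* receive the fd ... */
                        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
                        memcpy(&rfd, CMSG_DATA(c), sizeof(rfd));
                        close(rfd);               /* ... and close it */
                        _exit(0);
                    }
                    cmsg = CMSG_FIRSTHDR(&msg);
                    cmsg->cmsg_level = SOL_SOCKET;
                    cmsg->cmsg_type = SCM_RIGHTS;
                    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
                    memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));
                    sendmsg(sv[0], &msg, 0);      /* pass the open fd */
                    wait(NULL);
                    return 0;
                }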
3695
3696       --sockfd-ops N
3697              stop sockfd stress workers after N bogo operations.
3698
3699       --sockfd-port P
3700              start  at  socket port P. For N socket worker processes, ports P
              to P + N - 1 are used.
3702
3703       --sockmany N
3704              start N workers that use a client process to attempt to open  as
3705              many  as  100000  TCP/IP  socket connections to a server on port
3706              10000.
3707
3708       --sockmany-ops N
3709              stop after N connections.
3710
3711       --sockpair N
3712              start N workers that perform socket pair I/O  read/writes.  This
3713              involves  a  pair of client/server processes performing randomly
3714              sized socket I/O operations.
3715
3716       --sockpair-ops N
3717              stop socket pair stress workers after N bogo operations.
3718
3719       --softlockup N
              start N workers that flip between the "real-time" SCHED_FIFO
3721              and  SCHED_RR  scheduling  policies  at  the highest priority to
3722              force softlockups. This can only be run with CAP_SYS_NICE  capa‐
3723              bility and for best results the number of stressors should be at
3724              least the number of online CPUs. Once running, this  is  practi‐
3725              cally impossible to stop and it will force softlockup issues and
3726              may trigger watchdog timeout reboots.
3727
3728       --softlockup-ops N
3729              stop softlockup stress workers after  N  bogo  scheduler  policy
3730              changes.
3731
3732       --spawn N
              start N workers that continually spawn children using
              posix_spawn(3)
3734              that exec stress-ng and then exit almost immediately.  Currently
3735              Linux only.
3736
3737       --spawn-ops N
3738              stop spawn stress workers after N bogo spawns.
3739
3740       --splice N
              start N workers that move data from /dev/zero to /dev/null
              through a pipe without any copying between kernel address
              space and user address space using splice(2). This is only
              available for Linux.
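
              A minimal sketch of a single zero-copy transfer using
              splice(2):

                #define _GNU_SOURCE
                #include <fcntl.h>
                #include <stdio.h>
                #include <unistd.h>

                int main(void)
                {
                    int in = open("/dev/zero", O_RDONLY);
                    int out = open("/dev/null", O_WRONLY);
                    int pipefd[2];

                    pipe(pipefd);
                    /* /dev/zero -> pipe -> /dev/null, no user space copy */
                    ssize_t n = splice(in, NULL, pipefd[1], NULL,
                                       65536, SPLICE_F_MOVE);
                    if (n > 0)
                        splice(pipefd[0], NULL, out, NULL,
                               (size_t)n, SPLICE_F_MOVE);
                    printf("spliced %zd bytes\n", n);
                    return 0;
                }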
3744
3745       --splice-ops N
3746              stop after N bogo splice operations.
3747
3748       --splice-bytes N
3749              transfer  N  bytes  per splice call, the default is 64K. One can
3750              specify the size as % of total available memory or in  units  of
3751              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
3752
3753       --stack N
3754              start  N workers that rapidly cause and catch stack overflows by
3755              use of large recursive stack allocations.   Much  like  the  brk
3756              stressor, this can eat up pages rapidly and may trigger the ker‐
3757              nel OOM killer on the process, however, the killed  stressor  is
3758              respawned again by a monitoring parent process.
3759
3760       --stack-fill
3761              the default action is to touch the lowest page on each stack al‐
3762              location. This option touches all the pages by filling  the  new
3763              stack  allocation  with  zeros which forces physical pages to be
3764              allocated and hence is more aggressive.
3765
3766       --stack-mlock
3767              attempt to mlock stack pages into memory prohibiting  them  from
3768              being paged out.  This is a no-op if mlock(2) is not available.
3769
3770       --stack-ops N
3771              stop stack stress workers after N bogo stack overflows.
3772
3773       --stackmmap N
3774              start  N workers that use a 2MB stack that is memory mapped onto
3775              a temporary file. A recursive function works down the stack  and
3776              flushes  dirty  stack pages back to the memory mapped file using
3777              msync(2) until the end of the stack is reached (stack overflow).
3778              This exercises dirty page and stack exception handling.
3779
3780       --stackmmap-ops N
3781              stop workers after N stack overflows have occurred.
3782
3783       --str N
3784              start  N  workers that exercise various libc string functions on
3785              random strings.
3786
3787       --str-method strfunc
3788              select a specific libc  string  function  to  stress.  Available
3789              string  functions to stress are: all, index, rindex, strcasecmp,
3790              strcat, strchr, strcoll, strcmp,  strcpy,  strlen,  strncasecmp,
3791              strncat,  strncmp,  strrchr and strxfrm.  See string(3) for more
3792              information on these string functions.  The 'all' method is  the
3793              default and will exercise all the string methods.
3794
3795       --str-ops N
3796              stop after N bogo string operations.
3797
3798       --stream N
3799              start  N  workers exercising a memory bandwidth stressor loosely
3800              based on the STREAM "Sustainable Memory Bandwidth in  High  Per‐
3801              formance Computers" benchmarking tool by John D. McCalpin, Ph.D.
3802              This stressor allocates buffers that are at least  4  times  the
3803              size of the CPU L2 cache and continually performs rounds of fol‐
3804              lowing computations on large arrays of double precision floating
3805              point numbers:
3806
3807              Operation            Description
3808              copy                 c[i] = a[i]
3809              scale                b[i] = scalar * c[i]
3810              add                  c[i] = a[i] + b[i]
3811              triad                a[i] = b[i] + (c[i] * scalar)
3812
3813              Since this is loosely based on a variant of the STREAM benchmark
              code, DO NOT submit results based on this as it is intended
              in stress-ng just to stress memory and compute and is NOT
              intended for accurate tuned or non-tuned STREAM benchmarking
              whatsoever.
3817              Use the official STREAM benchmarking tool if you desire accurate
3818              and standardised STREAM benchmarks.
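
              The four computation kernels in the table correspond roughly
              to the following C loops (a simplified fragment, not the
              stress-ng source):

                #include <stdlib.h>

                /* STREAM-style kernels over double precision arrays */
                static void stream_round(double *a, double *b, double *c,
                                         double scalar, size_t n)
                {
                    for (size_t i = 0; i < n; i++)   /* copy  */
                        c[i] = a[i];
                    for (size_t i = 0; i < n; i++)   /* scale */
                        b[i] = scalar * c[i];
                    for (size_t i = 0; i < n; i++)   /* add   */
                        c[i] = a[i] + b[i];
                    for (size_t i = 0; i < n; i++)   /* triad */
                        a[i] = b[i] + c[i] * scalar;
                }

                int main(void)
                {
                    size_t n = 1 << 20;         /* one round over 1M doubles */
                    double *a = calloc(n, sizeof(*a));
                    double *b = calloc(n, sizeof(*b));
                    double *c = calloc(n, sizeof(*c));

                    stream_round(a, b, c, 3.0, n);
                    free(a); free(b); free(c);
                    return 0;
                }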
3819
3820       --stream-ops N
3821              stop after N stream bogo operations, where a bogo  operation  is
3822              one round of copy, scale, add and triad operations.
3823
3824       --stream-index N
3825              specify number of stream indices used to index into the data ar‐
3826              rays a, b and c.  This adds indirection into the data lookup  by
3827              using  randomly  shuffled  indexing  into the three data arrays.
3828              Level 0 (no indexing) is the default, and 3 is where all  3  ar‐
3829              rays  are indexed via 3 different randomly shuffled indexes. The
3830              higher the index setting the more impact this has on L1, L2  and
3831              L3 caching and hence forces higher memory read/write latencies.
3832
3833       --stream-l3-size N
3834              Specify  the  CPU  Level 3 cache size in bytes.  One can specify
3835              the size in units of Bytes, KBytes, MBytes and GBytes using  the
3836              suffix b, k, m or g.  If the L3 cache size is not provided, then
3837              stress-ng will attempt to determine the cache size, and  failing
3838              this, will default the size to 4MB.
3839
3840       --stream-madvise [ hugepage | nohugepage | normal ]
3841              Specify  the  madvise  options  used on the memory mapped buffer
3842              used in the stream stressor. Non-linux systems  will  only  have
3843              the 'normal' madvise advice. The default is 'normal'.
3844
3845       --swap N
              start N workers that add and remove small randomly sized swap
3847              partitions (Linux only).  Note that if too many swap  partitions
3848              are  added  then  the  stressors  may exit with exit code 3 (not
3849              enough resources).  Requires CAP_SYS_ADMIN to run.
3850
3851       --swap-ops N
3852              stop the swap workers after N swapon/swapoff iterations.
3853
3854       -s N, --switch N
3855              start N workers that send messages via pipe to a child to  force
3856              context switching.
3857
3858       --switch-ops N
3859              stop context switching workers after N bogo operations.
3860
3861       --switch-rate R
3862              run  the context switching at the rate of R context switches per
3863              second. Note that the specified switch rate may not be  achieved
3864              because of CPU speed and memory bandwidth limitations.
3865
3866       --symlink N
3867              start N workers creating and removing symbolic links.
3868
3869       --symlink-ops N
3870              stop symlink stress workers after N bogo operations.
3871
3872       --sync-file N
3873              start N workers that perform a range of data syncs across a file
3874              using sync_file_range(2).  Three mixes of syncs  are  performed,
              from the start to the end of the file, from the end of the
              file to the start, and a random mix. A random selection of
              valid sync types
3877              are     used,    covering    the    SYNC_FILE_RANGE_WAIT_BEFORE,
3878              SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
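
              A minimal sketch of syncing one written range with all three
              flag bits set (the file name is hypothetical):

                #define _GNU_SOURCE
                #include <fcntl.h>
                #include <unistd.h>

                int main(void)
                {
                    int fd = open("data.tmp", O_CREAT | O_RDWR, 0600);
                    char buf[4096] = { 0 };

                    pwrite(fd, buf, sizeof(buf), 0);
                    /* wait for prior write-out, write out, wait again */
                    sync_file_range(fd, 0, sizeof(buf),
                                    SYNC_FILE_RANGE_WAIT_BEFORE |
                                    SYNC_FILE_RANGE_WRITE |
                                    SYNC_FILE_RANGE_WAIT_AFTER);
                    close(fd);
                    unlink("data.tmp");
                    return 0;
                }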
3879
3880       --sync-file-ops N
3881              stop sync-file workers after N bogo sync operations.
3882
3883       --sync-file-bytes N
3884              specify the size of the file to be sync'd. One can  specify  the
3885              size  as  %  of free space on the file system in units of Bytes,
3886              KBytes, MBytes and GBytes using the suffix b, k, m or g.
3887
3888       --sysbadaddr N
3889              start N workers that pass bad addresses to system calls to exer‐
3890              cise bad address and fault handling. The addresses used are null
3891              pointers, read only pages, write only pages, unmapped addresses,
3892              text  only  pages,  unaligned  addresses  and  top of memory ad‐
3893              dresses.
3894
3895       --sysbadaddr-ops N
3896              stop the sysbadaddr stressors after N bogo system calls.
3897
3898       --sysinfo N
3899              start N workers that continually read system  and  process  spe‐
3900              cific information.  This reads the process user and system times
3901              using the times(2) system call.   For  Linux  systems,  it  also
3902              reads overall system statistics using the sysinfo(2) system call
3903              and also the file system statistics for all mounted file systems
3904              using statfs(2).
3905
3906       --sysinfo-ops N
3907              stop the sysinfo workers after N bogo operations.
3908
3909       --sysinval N
3910              start  N workers that exercise system calls in random order with
3911              permutations of invalid arguments to force kernel error handling
3912              checks. The stress test autodetects system calls that cause pro‐
3913              cesses to crash or exit prematurely and will blocklist these af‐
3914              ter several repeated breakages. System call arguments that cause
              system calls to work successfully are also detected and
              blocklisted.  Linux only.
3917
3918       --sysinval-ops N
3919              stop sysinval workers after N system call attempts.
3920
3921       --sysfs N
3922              start  N  workers  that  recursively read files from /sys (Linux
3923              only).  This may cause specific kernel drivers to emit  messages
3924              into the kernel log.
3925
3926       --sys-ops N
3927              stop sysfs reading after N bogo read operations. Note, since the
3928              number of entries may vary between kernels, this bogo ops metric
3929              is probably very misleading.
3930
3931       --tee N
              start N workers that move data from a writer process to a
              reader process through pipes and to /dev/null without any
              copying between kernel address space and user address space
              using tee(2). This is only available for Linux.
3936
3937       --tee-ops N
3938              stop after N bogo tee operations.
3939
3940       -T N, --timer N
3941              start N workers creating timer events at a default rate of 1 MHz
              (Linux only); this can create many thousands of timer clock
3943              interrupts. Each timer event is caught by a signal  handler  and
3944              counted as a bogo timer op.
3945
3946       --timer-ops N
3947              stop  timer  stress  workers  after  N  bogo timer events (Linux
3948              only).
3949
3950       --timer-freq F
3951              run timers at F Hz; range from 1 to 1000000000 Hz (Linux  only).
3952              By  selecting  an  appropriate  frequency stress-ng can generate
3953              hundreds of thousands of interrupts per  second.   Note:  it  is
3954              also  worth  using  --timer-slack 0 for high frequencies to stop
3955              the kernel from coalescing timer events.
3956
3957       --timer-rand
3958              select a timer frequency based around the  timer  frequency  +/-
3959              12.5% random jitter. This tries to force more variability in the
3960              timer interval to make the scheduling less predictable.
3961
3962       --timerfd N
3963              start N workers creating timerfd events at a default rate  of  1
              MHz (Linux only); this can create many thousands of timer
3965              clock events. Timer events are waited for on the timer file  de‐
3966              scriptor  using  select(2)  and  then read and counted as a bogo
3967              timerfd op.
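
              A minimal sketch of one timerfd expiry being waited for with
              select(2) and then read (a 1ms period here, rather than the
              1 MHz default):

                #include <stdint.h>
                #include <stdio.h>
                #include <sys/select.h>
                #include <sys/timerfd.h>
                #include <time.h>
                #include <unistd.h>

                int main(void)
                {
                    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
                    /* first expiry after 1ms, then every 1ms */
                    struct itimerspec its = {
                        .it_value    = { .tv_sec = 0, .tv_nsec = 1000000 },
                        .it_interval = { .tv_sec = 0, .tv_nsec = 1000000 },
                    };
                    uint64_t n_exp;
                    fd_set rfds;

                    timerfd_settime(fd, 0, &its, NULL);
                    FD_ZERO(&rfds);
                    FD_SET(fd, &rfds);
                    select(fd + 1, &rfds, NULL, NULL, NULL); /* wait */
                    read(fd, &n_exp, sizeof(n_exp));
                    printf("%llu expirations\n", (unsigned long long)n_exp);
                    close(fd);
                    return 0;
                }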
3968
3969       --timerfd-ops N
3970              stop timerfd stress workers after N bogo timerfd  events  (Linux
3971              only).
3972
       --timerfd-fds N
3974              try to use a maximum of N timerfd file descriptors per stressor.
3975
3976       --timerfd-freq F
3977              run  timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
3978              By selecting an appropriate  frequency  stress-ng  can  generate
3979              hundreds of thousands of interrupts per second.
3980
3981       --timerfd-rand
3982              select  a timerfd frequency based around the timer frequency +/-
3983              12.5% random jitter. This tries to force more variability in the
3984              timer interval to make the scheduling less predictable.
3985
3986       --tlb-shootdown N
3987              start  N  workers  that force Translation Lookaside Buffer (TLB)
3988              shootdowns.  This is achieved by creating up to  16  child  pro‐
3989              cesses that all share a region of memory and these processes are
3990              shared amongst the available CPUs.   The  processes  adjust  the
3991              page  mapping  settings  causing TLBs to be force flushed on the
3992              other processors, causing the TLB shootdowns.
3993
3994       --tlb-shootdown-ops N
3995              stop after N bogo TLB shootdown operations are completed.
3996
3997       --tmpfs N
3998              start N workers that create a temporary  file  on  an  available
3999              tmpfs file system and perform various file based mmap operations
4000              upon it.
4001
4002       --tmpfs-ops N
4003              stop tmpfs stressors after N bogo mmap operations.
4004
4005       --tmpfs-mmap-async
4006              enable file based memory mapping and use asynchronous  msync'ing
4007              on each page, see --tmpfs-mmap-file.
4008
4009       --tmpfs-mmap-file
4010              enable  tmpfs  file based memory mapping and by default use syn‐
4011              chronous msync'ing on each page.
4012
4013       --tree N
4014              start N workers that exercise tree data structures. The  default
4015              is  to  add,  find  and  remove 250,000 64 bit integers into AVL
4016              (avl), Red-Black (rb), Splay (splay) and binary trees.  The  in‐
4017              tention  of  this  stressor is to exercise memory and cache with
4018              the various tree operations.
4019
4020       --tree-ops N
4021              stop tree stressors after N bogo ops. A bogo op covers the addi‐
              tion, finding and removal of all the items in the tree(s).
4023
4024       --tree-size N
4025              specify  the  size  of the tree, where N is the number of 64 bit
4026              integers to be added into the tree.
4027
4028       --tree-method [ all | avl | binary | rb | splay ]
              specify the tree to be used. By default, both the rb and splay
4030              trees are used (the 'all' option).
4031
4032       --tsc N
4033              start N workers that read the Time Stamp Counter (TSC) 256 times
4034              per loop iteration (bogo operation).  This exercises the tsc in‐
4035              struction  for x86, the mftb instruction for ppc64 and the rdcy‐
4036              cle instruction for RISC-V.
4037
4038       --tsc-ops N
4039              stop the tsc workers after N bogo operations are completed.
4040
4041       --tsearch N
4042              start N workers that insert, search and delete 32  bit  integers
4043              on  a  binary tree using tsearch(3), tfind(3) and tdelete(3). By
4044              default, there are 65536 randomized integers used in  the  tree.
4045              This  is a useful method to exercise random access of memory and
4046              processor cache.
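
              A minimal sketch of one insert/find/delete cycle with
              tsearch(3) (three keys here rather than the 65536 the
              stressor uses):

                #include <search.h>
                #include <stdio.h>

                static int cmp(const void *a, const void *b)
                {
                    int x = *(const int *)a, y = *(const int *)b;
                    return (x > y) - (x < y);
                }

                int main(void)
                {
                    void *root = NULL;
                    static int keys[] = { 17, 3, 42 };
                    size_t n = sizeof(keys) / sizeof(keys[0]);

                    for (size_t i = 0; i < n; i++)
                        tsearch(&keys[i], &root, cmp);  /* insert */

                    int probe = 42;
                    int **found = tfind(&probe, &root, cmp); /* search */
                    if (found)
                        printf("found %d\n", **found);

                    tdelete(&probe, &root, cmp);        /* delete */
                    return 0;
                }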
4047
4048       --tsearch-ops N
4049              stop the tsearch workers after N bogo tree operations  are  com‐
4050              pleted.
4051
4052       --tsearch-size N
4053              specify  the  size  (number  of 32 bit integers) in the array to
4054              tsearch. Size can be from 1K to 4M.
4055
4056       --tun N
              start N workers that create a network tunnel device and send
              and receive packets over the tunnel using UDP and then destroy
4059              it. A new random 192.168.*.* IPv4 address is used  each  time  a
4060              tunnel is created.
4061
4062       --tun-ops N
4063              stop after N iterations of creating/sending/receiving/destroying
4064              a tunnel.
4065
4066       --tun-tap
              use a network tap device with level 2 frames (bridging) rather
4068              than a tun device for level 3 raw packets (tunnelling).
4069
4070       --udp N
4071              start  N  workers  that transmit data using UDP. This involves a
              pair of client/server processes performing rapid connects,
              sends/receives and disconnects on the local host.
4074
4075       --udp-domain D
4076              specify  the domain to use, the default is ipv4. Currently ipv4,
4077              ipv6 and unix are supported.
4078
4079       --udp-lite
4080              use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6 do‐
4081              mains).
4082
4083       --udp-ops N
4084              stop udp stress workers after N bogo operations.
4085
4086       --udp-port P
              start at port P. For N udp worker processes, ports P to
              P + N - 1 are used. By default, ports 7000 upwards are used.
4089
4090       --udp-flood N
4091              start N workers that attempt to flood the host with UDP  packets
              to random ports. The IP address of the packets is currently not
4093              spoofed.  This  is  only  available  on  systems  that   support
4094              AF_PACKET.
4095
4096       --udp-flood-domain D
4097              specify  the  domain to use, the default is ipv4. Currently ipv4
4098              and ipv6 are supported.
4099
4100       --udp-flood-ops N
4101              stop udp-flood stress workers after N bogo operations.
4102
4103       --unshare N
4104              start N workers that each fork off 32 child processes,  each  of
4105              which  exercises  the  unshare(2)  system call by disassociating
4106              parts of the process execution context. (Linux only).
4107
4108       --unshare-ops N
4109              stop after N bogo unshare operations.
4110
4111       --uprobe N
4112              start N workers that trace the entry to libc  function  getpid()
4113              using  the  Linux uprobe kernel tracing mechanism. This requires
4114              CAP_SYS_ADMIN capabilities and a  modern  Linux  uprobe  capable
4115              kernel.
4116
4117       --uprobe-ops N
4118              stop uprobe tracing after N trace events of the function that is
4119              being traced.
4120
4121       -u N, --urandom N
4122              start N workers reading /dev/urandom  (Linux  only).  This  will
4123              load the kernel random number source.
4124
4125       --urandom-ops N
4126              stop urandom stress workers after N urandom bogo read operations
4127              (Linux only).
4128
4129       --userfaultfd N
4130              start N workers that generate  write  page  faults  on  a  small
4131              anonymously  mapped  memory region and handle these faults using
4132              the user space fault handling  via  the  userfaultfd  mechanism.
4133              This  will  generate  a  large quantity of major page faults and
4134              also context switches during the handling of  the  page  faults.
4135              (Linux only).
4136
4137       --userfaultfd-ops N
4138              stop userfaultfd stress workers after N page faults.
4139
4140       --userfaultfd-bytes N
4141              mmap  N  bytes  per userfaultfd worker to page fault on, the de‐
4142              fault is 16MB.  One can specify the size as % of total available
4143              memory or in units of Bytes, KBytes, MBytes and GBytes using the
4144              suffix b, k, m or g.
4145
4146       --utime N
4147              start N workers updating file timestamps.  This  is  mainly  CPU
4148              bound  when  the  default is used as the system flushes metadata
4149              changes only periodically.
4150
4151       --utime-ops N
4152              stop utime stress workers after N utime bogo operations.
4153
4154       --utime-fsync
4155              force metadata changes on  each  file  timestamp  update  to  be
4156              flushed  to  disk.  This forces the test to become I/O bound and
4157              will result in many dirty metadata writes.
4158
4159       --vdso N
4160              start N workers that repeatedly call each  of  the  system  call
4161              functions in the vDSO (virtual dynamic shared object).  The vDSO
4162              is a shared library that the kernel maps into the address  space
4163              of  all  user-space  applications to allow fast access to kernel
              data for some system calls without the cost of performing an
              expensive system call.
4166
4167       --vdso-ops N
              stop after N vDSO function calls.
4169
4170       --vdso-func F
4171              Instead  of  calling  all the vDSO functions, just call the vDSO
4172              function F. The functions depend on the kernel being  used,  but
4173              are typically clock_gettime, getcpu, gettimeofday and time.
4174
4175       --vecmath N
4176              start N workers that perform various unsigned integer math oper‐
4177              ations on various 128 bit vectors. A mix of vector  math  opera‐
4178              tions  are  performed on the following vectors: 16 × 8 bits, 8 ×
4179              16 bits, 4 × 32 bits, 2 × 64 bits. The metrics produced by  this
4180              mix depend on the processor architecture and the vector math op‐
4181              timisations produced by the compiler.
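
              The kind of 128 bit vector operations exercised can be
              written with the GCC/Clang vector extensions, roughly:

                #include <stdint.h>
                #include <stdio.h>

                /* 128 bit vector of 4 x 32 bit unsigned integers */
                typedef uint32_t v4u32 __attribute__((vector_size(16)));

                int main(void)
                {
                    v4u32 a = { 1, 2, 3, 4 };
                    v4u32 b = { 10, 20, 30, 40 };

                    /* element-wise ops compile to SIMD where available */
                    v4u32 sum  = a + b;
                    v4u32 prod = a * b;

                    printf("%u %u %u %u\n", sum[0], sum[1],
                           prod[2], prod[3]);
                    return 0;
                }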
4182
4183       --vecmath-ops N
4184              stop after N bogo vector integer math operations.
4185
4186       --verity N
              start N workers that exercise read-only file based authenticity
4188              protection  using  the  verity  ioctls  FS_IOC_ENABLE_VERITY and
4189              FS_IOC_MEASURE_VERITY.  This requires file systems  with  verity
4190              support  (currently ext4 and f2fs on Linux) with the verity fea‐
              ture enabled. The test attempts to create a small file with
4192              multiple  small extents and enables verity on the file and veri‐
4193              fies it. It also checks to see if the file  has  verity  enabled
4194              with the FS_VERITY_FL bit set on the file flags.
4195
4196       --verity-ops N
4197              stop  the  verity  workers  after  N file create, enable verity,
4198              check verity and unlink cycles.
4199
4200       --vfork N
4201              start N workers continually vforking children  that  immediately
4202              exit.
4203
4204       --vfork-ops N
4205              stop vfork stress workers after N bogo operations.
4206
4207       --vfork-max P
4208              create P processes and then wait for them to exit per iteration.
4209              The default is just 1; higher values will create many  temporary
4210              zombie  processes  that are waiting to be reaped. One can poten‐
4211              tially  fill  up  the  process  table  using  high  values   for
4212              --vfork-max and --vfork.
4213
4214       --vfork-vm
4215              enable  detrimental performance virtual memory advice using mad‐
4216              vise on all pages of the vforked process.  Where  possible  this
              will try to set every page in the new process using the madvise
4218              MADV_MERGEABLE,  MADV_WILLNEED,  MADV_HUGEPAGE  and  MADV_RANDOM
4219              flags. Linux only.
4220
4221       --vforkmany N
4222              start  N  workers that spawn off a chain of vfork children until
4223              the process table  fills  up  and/or  vfork  fails.   vfork  can
4224              rapidly  create  child  processes  and the parent process has to
4225              wait until the child dies, so this stressor rapidly fills up the
4226              process table.
4227
4228       --vforkmany-ops N
4229              stop vforkmany stressors after N vforks have been made.
4230
4231       --vforkmany-vm
4232              enable  detrimental performance virtual memory advice using mad‐
4233              vise on all pages of the vforked process.  Where  possible  this
              will try to set every page in the new process using the madvise
4235              MADV_MERGEABLE,  MADV_WILLNEED,  MADV_HUGEPAGE  and  MADV_RANDOM
4236              flags. Linux only.
4237
4238       -m N, --vm N
4239              start N workers continuously calling mmap(2)/munmap(2) and writ‐
4240              ing to the allocated memory. Note that this can cause systems to
4241              trip the kernel OOM killer on Linux systems if not enough physi‐
              cal memory and swap is available.
4243
4244       --vm-bytes N
4245              mmap N bytes per vm worker, the default is 256MB. One can  spec‐
4246              ify  the  size  as  %  of  total available memory or in units of
4247              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4248
4249       --vm-ops N
4250              stop vm workers after N bogo operations.
4251
4252       --vm-hang N
4253              sleep N seconds before unmapping memory,  the  default  is  zero
4254              seconds.  Specifying 0 will do an infinite wait.
4255
4256       --vm-keep
4257              do not continually unmap and map memory, just keep on re-writing
4258              to it.
4259
4260       --vm-locked
4261              Lock the pages of the  mapped  region  into  memory  using  mmap
4262              MAP_LOCKED  (since  Linux  2.5.37).   This is similar to locking
4263              memory as described in mlock(2).
4264
4265       --vm-madvise advice
4266              Specify the madvise 'advice' option used on  the  memory  mapped
4267              regions  used  in  the  vm stressor. Non-linux systems will only
4268              have the 'normal' madvise advice, linux systems  support  'dont‐
4269              need',  'hugepage',  'mergeable' , 'nohugepage', 'normal', 'ran‐
4270              dom', 'sequential', 'unmergeable' and 'willneed' advice. If this
4271              option  is  not  used then the default is to pick random madvise
4272              advice for each mmap call. See madvise(2) for more details.
4273
4274       --vm-method m
4275              specify a vm stress method. By default, all the  stress  methods
4276              are  exercised  sequentially,  however  one can specify just one
4277              method to be used if required.  Each of the vm  workers  have  3
4278              phases:
4279
              1. Initialised. The anonymously mapped memory region is set to a
4281              known pattern.
4282
4283              2. Exercised. Memory is modified in  a  known  predictable  way.
4284              Some  vm  workers  alter  memory sequentially, some use small or
4285              large strides to step along memory.
4286
4287              3. Checked. The modified memory is checked to see if it  matches
4288              the expected result.
4289
4290              The vm methods containing 'prime' in their name have a stride of
              the largest prime less than 2^64, allowing them to thoroughly
              step through memory and touch all locations just once while
              avoiding touching memory cells next to each other. This
4294              strategy exercises the cache and page non-locality.
4295
4296              Since  the memory being exercised is virtually mapped then there
4297              is no guarantee of touching page  addresses  in  any  particular
4298              physical  order.   These workers should not be used to test that
4299              all the system's memory is working correctly either,  use  tools
4300              such as memtest86 instead.
4301
4302              The vm stress methods are intended to exercise memory in ways to
4303              possibly find memory issues and to try to force thermal errors.
4304
4305              Available vm stress methods are described as follows:
4306
4307              Method                  Description
4308              all                     iterate over all the vm  stress  methods
4309                                      as listed below.
4310              flip                    sequentially   work   through  memory  8
                                      times, each time with one bit in memory
4312                                      flipped  (inverted).  This  will  effec‐
4313                                      tively invert each byte in 8 passes.
4314              galpat-0                galloping pattern zeros. This  sets  all
4315                                      bits  to 0 and flips just 1 in 4096 bits
4316                                      to 1. It then checks to see  if  the  1s
4317                                      are pulled down to 0 by their neighbours
                                      or if the neighbours have been pulled up
4319                                      to 1.
4320              galpat-1                galloping  pattern  ones.  This sets all
4321                                      bits to 1 and flips just 1 in 4096  bits
4322                                      to  0.  It  then checks to see if the 0s
4323                                      are pulled up to 1 by  their  neighbours
                                      or if the neighbours have been pulled
4325                                      down to 0.
4326              gray                    fill the  memory  with  sequential  gray
4327                                      codes (these only change 1 bit at a time
4328                                      between adjacent bytes) and  then  check
4329                                      if they are set correctly.
4330              incdec                  work  sequentially through memory twice,
4331                                      the first pass increments each byte by a
4332                                      specific   value  and  the  second  pass
4333                                      decrements each byte back to the  origi‐
4334                                      nal start value. The increment/decrement
4335                                      value changes on each invocation of  the
4336                                      stressor.
4337              inc-nybble              initialise  memory  to a set value (that
4338                                      changes on each invocation of the stres‐
4339                                      sor)  and then sequentially work through
4340                                      each byte incrementing the bottom 4 bits
4341                                      by 1 and the top 4 bits by 15.
4342              rand-set                sequentially  work  through memory in 64
4343                                      bit chunks setting bytes in the chunk to
4344                                      the same 8 bit random value.  The random
4345                                      value changes on each chunk.  Check that
4346                                      the values have not changed.
4347              rand-sum                sequentially  set  all  memory to random
4348                                      values and then summate  the  number  of
4349                                      bits that have changed from the original
4350                                      set values.
4351              read64                  sequentially read memory using 32  x  64
4352                                      bit  reads  per  bogo  loop.  Each  loop
4353                                      equates to one bogo operation.  This ex‐
4354                                      ercises raw memory reads.
4355              ror                     fill  memory  with  a random pattern and
4356                                      then sequentially rotate 64 bits of mem‐
4357                                      ory right by one bit, then check the fi‐
4358                                      nal load/rotate/stored values.
4364              swap                    fill memory in 64 byte chunks with ran‐
4365                                      dom patterns. Then swap each 64 byte
4366                                      chunk with a randomly chosen chunk. Fi‐
4367                                      nally, reverse the swap to put the chunks
4368                                      back to their original place and check if
4369                                      the data is correct. This exercises adja‐
4370                                      cent and random memory load/stores.
4371              move-inv                sequentially fill memory 64 bits at a
4372                                      time with random values, and then check
4373                                      if the memory is set correctly. Next,
4374                                      sequentially invert each 64 bit pattern
4375                                      and again check if the memory is set as
4376                                      expected.
4377              modulo-x                fill memory over 23 iterations. Each it‐
4378                                      eration starts one byte further along
4379                                      from the start of the memory and steps
4380                                      along in 23 byte strides. In each
4381                                      stride, the first byte is set to a ran‐
4382                                      dom pattern and all other bytes are set
4383                                      to the inverse. Then it checks to see if
4384                                      the first byte contains the expected
4385                                      random pattern. This exercises cache
4386                                      store/reads as well as seeing if neigh‐
4387                                      bouring cells influence each other.
4388              prime-0                 iterate 8 times by stepping through mem‐
4389                                      ory in very large prime strides clearing
4390                                      just one bit at a time in every byte.
4391                                      Then check to see if all bits are set to
4392                                      zero.
4393              prime-1                 iterate 8 times by stepping through mem‐
4394                                      ory in very large prime strides setting
4395                                      just one bit at a time in every byte.
4396                                      Then check to see if all bits are set to
4397                                      one.
4398              prime-gray-0            first step through memory in very large
4399                                      prime strides clearing just one bit
4400                                      (based on a gray code) in every byte.
4401                                      Next, repeat this but clear the other 7
4402                                      bits. Then check to see if all bits are
4403                                      set to zero.
4404              prime-gray-1            first step through memory in very large
4405                                      prime strides setting just one bit (based
4406                                      on a gray code) in every byte. Next, re‐
4407                                      peat this but set the other 7 bits. Then
4408                                      check to see if all bits are set to one.
4409              rowhammer               try to force memory corruption using the
4410                                      rowhammer  memory stressor. This fetches
4411                                      two 32  bit  integers  from  memory  and
4412                                      forces  a  cache  flush  on  the two ad‐
4413                                      dresses multiple times.  This  has  been
4414                                      known  to  force  bit  flipping  on some
4415                                      hardware,  especially  with  lower  fre‐
4416                                      quency memory refresh cycles.
4417              walk-0d                 for  each  byte  in memory, walk through
4418                                      each data line setting them to low  (and
4419                                      the  others are set high) and check that
4420                                      the written value is as  expected.  This
4421                                      checks if any data lines are stuck.
4422              walk-1d                 for  each  byte  in memory, walk through
4423                                      each data line setting them to high (and
4424                                      the  others  are set low) and check that
4425                                      the written value is as  expected.  This
4426                                      checks if any data lines are stuck.
4427              walk-0a                 in   the   given  memory  mapping,  work
4428                                      through a range of specially chosen  ad‐
4429                                      dresses working through address lines to
4430                                      see if any address lines are stuck  low.
4431                                      This works best with physical memory ad‐
4432                                      dressing, however, exercising these vir‐
4433                                      tual addresses has some value too.
4434              walk-1a                 in   the   given  memory  mapping,  work
4435                                      through a range of specially chosen  ad‐
4436                                      dresses working through address lines to
4437                                      see if any address lines are stuck high.
4438                                      This works best with physical memory ad‐
4439                                      dressing, however, exercising these vir‐
4440                                      tual addresses has some value too.
4441              write64                 sequentially  write memory using 32 x 64
4442                                      bit writes  per  bogo  loop.  Each  loop
4443                                      equates to one bogo operation.  This ex‐
4444                                      ercises raw memory  writes.   Note  that
4445                                      memory writes are not checked at the end
4446                                      of each test iteration.
4450              zero-one                set all memory bits  to  zero  and  then
4451                                      check  if  any  bits are not zero. Next,
4452                                      set all the memory bits to one and check
4453                                      if any bits are not one.
4454
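              As a minimal illustrative sketch of the galpat-0 idea (not
              the stress-ng implementation): clear a hypothetical 64K
              buffer, set one bit in every 4096 bits, then verify that
              only those bits read back as one.

                  #include <stdint.h>
                  #include <stdio.h>
                  #include <string.h>

                  int main(void)
                  {
                      static uint8_t buf[1 << 16];
                      const size_t stride = 4096 / 8; /* 1 bit per 4096 bits */

                      memset(buf, 0x00, sizeof(buf));
                      for (size_t i = 0; i < sizeof(buf); i += stride)
                          buf[i] = 0x01;              /* the "galloping" 1 */

                      for (size_t i = 0; i < sizeof(buf); i++) {
                          uint8_t want = (i % stride == 0) ? 0x01 : 0x00;
                          if (buf[i] != want) {       /* neighbour disturbed? */
                              fprintf(stderr, "bit error at byte %zu\n", i);
                              return 1;
                          }
                      }
                      return 0;
                  }
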
4455       --vm-populate
4456              populate  (prefault)  page  tables for the memory mappings; this
4457              can stress swapping. Only  available  on  systems  that  support
4458              MAP_POPULATE (since Linux 2.5.46).
4459
4460       --vm-addr N
4461              start  N  workers  that exercise virtual memory addressing using
4462              various methods to walk through a memory mapped  address  range.
4463              This will exercise mapped private addresses from 8MB to 64MB per
4464              worker and try to generate cache and TLB inefficient  addressing
4465              patterns. Each method will set the memory to a random pattern in
4466              a write phase and then sanity check this in a read phase.
4467
4468       --vm-addr-ops N
4469              stop N workers after N bogo addressing passes.
4470
4471       --vm-addr-method M
4472              specify a vm address stress method. By default, all  the  stress
4473              methods are exercised sequentially, however one can specify just
4474              one method to be used if required.
4475
4476              Available vm address stress methods are described as follows:
4477
4478              Method                  Description
4479              all                     iterate over all the vm  stress  methods
4480                                      as listed below.
4481              pwr2                    work  through  memory addresses in steps
4482                                      of powers of two.
4483              pwr2inv                 like pwr2, but with all the relevant
4484                                      address bits inverted.
4485              gray                    work  through memory with gray coded ad‐
4486                                      dresses so that each change  of  address
4487                                      just  changes 1 bit compared to the pre‐
4488                                      vious address.
4489              grayinv                 like gray, but with all the relevant
4490                                      address bits inverted, hence all bits
4491                                      change apart from 1 in the address
4492                                      range.
4493              rev                     work  through the address range with the
4494                                      bits in the address range reversed.
4495              revinv                  like rev, but with all the relevant  ad‐
4496                                      dress bits inverted.
4497              inc                     work  through the address range forwards
4498                                      sequentially, byte by byte.
4499              incinv                  like inc, but with all the relevant  ad‐
4500                                      dress bits inverted.
4501              dec                     work through the address range backwards
4502                                      sequentially, byte by byte.
4503              decinv                  like dec, but with all the relevant  ad‐
4504                                      dress bits inverted.
4505
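              As a rough sketch of the gray method above (not the
              stress-ng implementation), the offsets of a hypothetical
              64K region can be visited in gray code order, so that
              successive addresses differ in exactly one bit, written
              in one pass and verified in a second:

                  #include <stdint.h>
                  #include <stdio.h>

                  int main(void)
                  {
                      static uint8_t region[1 << 16];  /* 64K region */
                      const size_t n = sizeof(region);

                      for (size_t i = 0; i < n; i++)
                          region[i ^ (i >> 1)] = (uint8_t)i;  /* gray offset */

                      for (size_t i = 0; i < n; i++) {
                          if (region[i ^ (i >> 1)] != (uint8_t)i) {
                              fprintf(stderr, "mismatch at index %zu\n", i);
                              return 1;
                          }
                      }
                      return 0;
                  }
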
4506       --vm-rw N
4507              start N workers that transfer memory to/from a parent/child
4508              using process_vm_writev(2) and process_vm_readv(2). This fea‐
4509              ture is only supported on Linux. Memory transfers are only
4510              verified if the --verify option is enabled.
4511
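              A minimal sketch of a process_vm_readv(2) transfer (not
              the stress-ng implementation); for brevity the "remote"
              process here is the caller itself, whereas a real
              parent/child pair needs ptrace-style permission over the
              target process:

                  #define _GNU_SOURCE
                  #include <stdio.h>
                  #include <sys/uio.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      char src[] = "hello from the remote side";
                      char dst[sizeof(src)];

                      struct iovec local = {
                          .iov_base = dst, .iov_len = sizeof(dst)
                      };
                      struct iovec remote = {
                          .iov_base = src, .iov_len = sizeof(src)
                      };

                      /* copy the bytes from the "remote" address space */
                      ssize_t n = process_vm_readv(getpid(), &local, 1,
                                                   &remote, 1, 0);

                      printf("copied %zd bytes: %s\n", n, dst);
                      return n == (ssize_t)sizeof(src) ? 0 : 1;
                  }
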
4512       --vm-rw-ops N
4513              stop vm-rw workers after N memory read/writes.
4514
4515       --vm-rw-bytes N
4516              mmap N bytes per vm-rw worker, the  default  is  16MB.  One  can
4517              specify  the  size as % of total available memory or in units of
4518              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4519
4520       --vm-segv N
4521              start N workers that create a child process that unmaps its  ad‐
4522              dress space causing a SIGSEGV on return from the unmap.
4523
4524       --vm-segv-ops N
4525              stop after N bogo vm-segv SIGSEGV faults.
4526
4527       --vm-splice N
4528              start N workers that move data from memory to /dev/null
4529              through a pipe without any copying between kernel address
4530              space and user address space using vmsplice(2) and splice(2).
4531              This is only available for Linux.
4532
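              A minimal sketch of the underlying mechanism (not the
              stress-ng implementation): push a hypothetical 64K user
              buffer into a pipe with vmsplice(2), then drain the pipe
              to /dev/null with splice(2).

                  #define _GNU_SOURCE
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <sys/uio.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      static char buf[64 * 1024];
                      struct iovec iov = {
                          .iov_base = buf, .iov_len = sizeof(buf)
                      };
                      int fds[2];
                      int devnull = open("/dev/null", O_WRONLY);

                      if (devnull < 0 || pipe(fds) < 0)
                          return 1;

                      /* hand the user pages to the pipe */
                      ssize_t in = vmsplice(fds[1], &iov, 1, 0);
                      if (in < 0)
                          return 1;

                      /* move the pipe contents to /dev/null in-kernel */
                      ssize_t out = splice(fds[0], NULL, devnull, NULL,
                                           (size_t)in, SPLICE_F_MOVE);
                      printf("spliced %zd of %zd bytes\n", out, in);
                      return 0;
                  }
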
4533       --vm-splice-ops N
4534              stop after N bogo vm-splice operations.
4535
4536       --vm-splice-bytes N
4537              transfer N bytes per vmsplice call, the default is 64K. One  can
4538              specify  the  size as % of total available memory or in units of
4539              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4540
4541       --wait N
4542              start N workers that spawn off two  children;  one  spins  in  a
4543              pause(2)  loop,  the  other  continually stops and continues the
4544              first. The controlling process waits on the first  child  to  be
4545              resumed   by  the  delivery  of  SIGCONT  using  waitpid(2)  and
4546              waitid(2).
4547
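              A rough sketch of the mechanism being exercised (not the
              stress-ng implementation): a child pauses, the parent
              stops and resumes it, and waitpid(2) with WCONTINUED
              observes the resume caused by SIGCONT.

                  #include <signal.h>
                  #include <stdio.h>
                  #include <sys/wait.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      int status;
                      pid_t child = fork();

                      if (child < 0)
                          return 1;
                      if (child == 0)
                          for (;;)
                              pause();          /* child: wait for signals */

                      kill(child, SIGSTOP);
                      waitpid(child, &status, WUNTRACED);   /* child stopped */
                      kill(child, SIGCONT);
                      waitpid(child, &status, WCONTINUED);  /* child resumed */

                      if (WIFCONTINUED(status))
                          printf("child %d resumed by SIGCONT\n", (int)child);

                      kill(child, SIGKILL);
                      waitpid(child, &status, 0);           /* reap the child */
                      return 0;
                  }
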
4548       --wait-ops N
4549              stop after N bogo wait operations.
4550
4551       --watchdog N
4552              start N workers that exercise the /dev/watchdog watchdog in‐
4553              terface by opening it, performing various watchdog specific
4554              ioctl(2) commands on the device and closing it. Before clos‐
4555              ing, the special watchdog magic close message is written to
4556              the device to try to ensure that it never trips a watchdog
4557              reboot after the stressor has been run. Note that this stres‐
4558              sor needs to be run as root with the --pathological option
4559              and is only available on Linux.
4560
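              A minimal sketch of the watchdog "magic close" sequence
              (not the stress-ng implementation). This needs root and a
              watchdog driver, and should only be tried on a test
              system: if the driver is built with the nowayout option
              the timer keeps running and the machine will reboot.

                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <sys/ioctl.h>
                  #include <unistd.h>
                  #include <linux/watchdog.h>

                  int main(void)
                  {
                      int fd = open("/dev/watchdog", O_WRONLY);

                      if (fd < 0) {
                          perror("open /dev/watchdog");
                          return 1;
                      }
                      ioctl(fd, WDIOC_KEEPALIVE, 0);  /* pat the watchdog */
                      write(fd, "V", 1);   /* magic close: disarm on close */
                      close(fd);
                      return 0;
                  }
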
4561       --watchdog-ops N
4562              stop after N bogo operations on the watchdog device.
4563
4564       --wcs N
4565              start N workers that exercise various libc wide character string
4566              functions on random strings.
4567
4568       --wcs-method wcsfunc
4569              select a specific libc wide character string function to stress.
4570              Available  string  functions to stress are: all, wcscasecmp, wc‐
4571              scat, wcschr, wcscoll, wcscmp, wcscpy, wcslen, wcsncasecmp,  wc‐
4572              sncat,  wcsncmp,  wcsrchr  and wcsxfrm.  The 'all' method is the
4573              default and will exercise all the string methods.
4574
4575       --wcs-ops N
4576              stop after N bogo wide character string operations.
4577
4578       --x86syscall N
4579              start N workers that repeatedly exercise the x86-64 syscall
4580              instruction to call the getcpu(2), gettimeofday(2) and time(2)
4581              system calls using the Linux vsyscall handler. Only for Linux.
4582
4583       --x86syscall-ops N
4584              stop after N x86syscall system calls.
4585
4586       --x86syscall-func F
4587              instead of exercising all 3 of the system calls, just call the
4588              syscall function F. The function F must be one of getcpu, get‐
4589              timeofday or time.
4590
4591       --xattr N
4592              start N workers that create, update and delete  batches  of  ex‐
4593              tended attributes on a file.
4594
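              A minimal sketch of one set/get/remove cycle on an
              extended attribute (not the stress-ng implementation);
              the file name is a placeholder and the filesystem must
              support extended attributes:

                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <string.h>
                  #include <sys/xattr.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      const char *path = "xattr-demo.tmp";  /* scratch file */
                      const char *value = "some-value";
                      char buf[64];
                      int fd = open(path, O_CREAT | O_WRONLY, 0600);

                      if (fd < 0)
                          return 1;
                      close(fd);

                      if (setxattr(path, "user.demo", value,
                                   strlen(value), 0) < 0) {
                          perror("setxattr");  /* e.g. no xattr support */
                          return 1;
                      }
                      ssize_t n = getxattr(path, "user.demo",
                                           buf, sizeof(buf));
                      printf("read back %zd bytes\n", n);

                      removexattr(path, "user.demo");
                      unlink(path);
                      return 0;
                  }
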
4595       --xattr-ops N
4596              stop after N bogo extended attribute operations.
4597
4598       -y N, --yield N
4599              start N workers that call sched_yield(2). This stressor en‐
4600              sures that at least 2 child processes per CPU exercise
4601              sched_yield(2) no matter how many workers are specified, thus
4602              always ensuring rapid context switching.
4603
4604       --yield-ops N
4605              stop yield stress workers after  N  sched_yield(2)  bogo  opera‐
4606              tions.
4607
4608       --zero N
4609              start N workers reading /dev/zero.
4610
4611       --zero-ops N
4612              stop zero stress workers after N /dev/zero bogo read operations.
4613
4614       --zlib N
4615              start  N workers compressing and decompressing random data using
4616              zlib. Each worker has two processes, one that compresses  random
4617              data and pipes it to another process that decompresses the data.
4618              This stressor exercises CPU, cache and memory.
4619
4620       --zlib-ops N
4621              stop after N bogo compression operations, each bogo  compression
4622              operation  is a compression of 64K of random data at the highest
4623              compression level.
4624
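              As an illustration of one such unit of work (not the
              stress-ng implementation, which streams the data between
              two processes), the following compresses a 64K buffer of
              pseudo-random data at the highest zlib level:

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <zlib.h>

                  int main(void)
                  {
                      static unsigned char in[64 * 1024];
                      uLongf outlen = compressBound(sizeof(in));
                      unsigned char *out = malloc(outlen);

                      if (!out)
                          return 1;
                      for (size_t i = 0; i < sizeof(in); i++)
                          in[i] = (unsigned char)rand();  /* hard to compress */

                      if (compress2(out, &outlen, in, sizeof(in),
                                    Z_BEST_COMPRESSION) != Z_OK)
                          return 1;
                      printf("64K compressed to %lu bytes\n",
                             (unsigned long)outlen);
                      free(out);
                      return 0;
                  }
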
4625       --zlib-level L
4626              specify the compression level (0..9), where 0 = no  compression,
4627              1 = fastest compression and 9 = best compression.
4628
4629       --zlib-method method
4630              specify the type of random data to send to the zlib library.  By
4631              default, the data stream is created from a random  selection  of
4632              the  different data generation processes.  However one can spec‐
4633              ify just one method to be used if required.  Available zlib data
4634              generation methods are described as follows:
4635
4636              Method           Description
4637              00ff             randomly distributed 0x00 and 0xFF values.
4638              ascii01          randomly distributed ASCII 0 and 1 characters.
4639              asciidigits      randomly distributed ASCII digits in the range
4640                               of 0 to 9.
4641              bcd              packed binary coded decimals, 0..99 packed into
4642                               2 4-bit nybbles.
4643              binary           32 bit random numbers.
4644              brown            8  bit brown noise (Brownian motion/Random Walk
4645                               noise).
4646              double           double precision floating  point  numbers  from
4647                               sin(θ).
4648              fixed            the data stream is 0x04030201 repeated.
4649              gray             16  bit gray codes generated from an increment‐
4650                               ing counter.
4651              latin            Random latin sentences from a sample  of  Lorem
4652                               Ipsum text.
4653              logmap           Values generated from the logistic map equa‐
4654                               tion Xn+1 = r × Xn × (1 - Xn), with r > ≈
4655                               3.56994567 to produce chaotic data. The val‐
4656                               ues are scaled by a large arbitrary value and
4657                               the lower 8 bits of this value are compressed.
4658              lfsr32           Values generated from a 32 bit Galois linear
4659                               feedback shift register using the polynomial
4660                               x^32 + x^31 + x^29 + x + 1, giving a ring of
4661                               2^32 - 1 unique values (all 32 bit values ex‐
4662                               cept 0); see the sketch after this table.
4663              lrand48          Uniformly distributed pseudo-random 32 bit val‐
4664                               ues generated from lrand48(3).
4665              morse            Morse code generated  from  random  latin  sen‐
4666                               tences from a sample of Lorem Ipsum text.
4667              nybble           randomly distributed bytes in the range of 0x00
4668                               to 0x0f.
4669              objcode          object code selected from a random start  point
4670                               in the stress-ng text segment.
4671              parity           7 bit binary data with 1 parity bit.
4672              pink             pink  noise in the range 0..255 generated using
4673                               the Gardner method with the McCartney selection
4674                               tree  optimization.  Pink  noise  is  where the
4675                               power spectral  density  is  inversely  propor‐
4676                               tional to the frequency of the signal and hence
4677                               is slightly compressible.
4678              random           segments of the data stream are created by ran‐
4679                               domly  calling  the  different  data generation
4680                               methods.
4681              rarely1          data that has a single 1 in every 32 bits, ran‐
4682                               domly located.
4683              rarely0          data that has a single 0 in every 32 bits, ran‐
4684                               domly located.
4685              text             random ASCII text.
4686              utf8             random 8 bit data encoded to UTF-8.
4687              zero             all zeros, compresses very easily.
4688
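              A minimal sketch of the lfsr32 generator referenced in
              the table above (not the stress-ng implementation): a 32
              bit Galois LFSR with taps 32, 31, 29 and 1, stepping
              through all 2^32 - 1 non-zero values before repeating:

                  #include <stdint.h>
                  #include <stdio.h>

                  /* one step of a Galois LFSR, polynomial
                     x^32 + x^31 + x^29 + x + 1 (tap mask 0xD0000001) */
                  static uint32_t lfsr32(uint32_t state)
                  {
                      return (state >> 1) ^ (-(state & 1u) & 0xD0000001u);
                  }

                  int main(void)
                  {
                      uint32_t v = 0xdeadbeef;    /* any non-zero seed */

                      for (int i = 0; i < 8; i++) {
                          v = lfsr32(v);
                          printf("%08x\n", v);
                      }
                      return 0;
                  }
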
4689       --zlib-window-bits W
4690              specify the window bits used to set the history buffer size.
4691              The value is specified as the base two logarithm of the buf‐
4692              fer size (e.g. value 9 is 2^9 = 512 bytes). The default is 15.
4693
4694              Values:
4695              -8-(-15): raw deflate format
4696                  8-15: zlib format
4697                 24-31: gzip format
4698                 40-47: inflate auto format detection using zlib deflate format
4699
4700       --zlib-mem-level L
4701              specify the reserved compression state memory for zlib.
                  Default is 8.
4702
4703              Values:
4704              1 = minimum memory usage
4705              9 = maximum memory usage
4706
4707       --zlib-strategy S
4708              specify the strategy to use when deflating data. This is used
4709              to tune the compression algorithm.  Default is 0.
4710
4711              Values:
4712              0: used for normal data (Z_DEFAULT_STRATEGY)
4713              1: for data generated by a filter or predictor (Z_FILTERED)
4714              2: force Huffman encoding only (Z_HUFFMAN_ONLY)
4715              3: limit match distances to one (run-length encoding, Z_RLE)
4716              4: prevent the use of dynamic Huffman codes (Z_FIXED)
4717
4718       --zlib-stream-bytes S
4719              specify the number of bytes to deflate before deflate fin‐
4720              ishes the block and returns with Z_STREAM_END. One can spec‐
4721              ify the size in units of Bytes, KBytes, MBytes and GBytes
4722              using the suffix b, k, m or g. The default is 0, which cre‐
4723              ates an endless stream until the stressor ends.
4724
4725              Values:
4726              0: creates an endless deflate stream until the stressor stops
4727              n: creates a stream of n bytes over and over again.
4728                 Each block will be closed with Z_STREAM_END.
4729
4730
4731       --zombie N
4732              start N workers that create zombie processes. This will rapidly
4733              try to create a default of 8192 child processes that immediately
4734              die and wait in a zombie state until they are reaped. Once the
4735              maximum number of processes is reached (or fork fails because
4736              one has reached the maximum allowed number of children) the
4737              oldest child is reaped and a new one is created in its place,
4738              first-in first-out; this is then repeated.
4739
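              A minimal sketch of how a single zombie arises and is
              reaped (not the stress-ng implementation): the child
              exits immediately and remains in state 'Z' until the
              parent reaps it with waitpid(2).

                  #include <stdio.h>
                  #include <sys/wait.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      pid_t pid = fork();

                      if (pid < 0)
                          return 1;
                      if (pid == 0)
                          _exit(0);     /* child dies immediately ... */

                      sleep(1);         /* ... and lingers as a zombie */
                      printf("child %d is a zombie (ps state Z)\n", (int)pid);

                      waitpid(pid, NULL, 0);  /* reaping removes the zombie */
                      return 0;
                  }
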
4740       --zombie-ops N
4741              stop zombie stress workers after N bogo zombie operations.
4742
4743       --zombie-max N
4744              try to create as many as N zombie processes.  This  may  not  be
4745              reached if the system limit is less than N.
4746

EXAMPLES

4748       stress-ng --vm 8 --vm-bytes 80% -t 1h
4749
4750              run  8  virtual  memory  stressors  that combined use 80% of the
4751              available memory for 1 hour. Thus each stressor uses 10% of  the
4752              available memory.
4753
4754       stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
4755
4756              runs  for  60 seconds with 4 cpu stressors, 2 io stressors and 1
4757              vm stressor using 1GB of virtual memory.
4758
4759       stress-ng --iomix 2 --iomix-bytes 10% -t 10m
4760
4761              runs 2 instances of the mixed I/O stressors using a total of 10%
4762              of the available file system space for 10 minutes. Each stressor
4763              will use 5% of the available file system space.
4764
4765       stress-ng  --cyclic  1  --cyclic-dist  2500  --cyclic-method   clock_ns
4766       --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
4767
4768              measures  real  time  scheduling  latencies  created  by the hdd
4769              stressor. This uses the high resolution nanosecond clock to mea‐
4770              sure  latencies  during sleeps of 10,000 nanoseconds. At the end
4771              of 1 minute of stressing, the latency distribution with 2500  ns
4772              intervals  will  be  displayed.  NOTE: this must be run with the
4773              CAP_SYS_NICE capability to enable the real  time  scheduling  to
4774              get accurate measurements.
4775
4776       stress-ng --cpu 8 --cpu-ops 800000
4777
4778              runs 8 cpu stressors and stops after 800000 bogo operations.
4779
4780       stress-ng --sequential 2 --timeout 2m --metrics
4781
4782              run  2  simultaneous instances of all the stressors sequentially
4783              one by one, each for 2 minutes and  summarise  with  performance
4784              metrics at the end.
4785
4786       stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
4787
4788              run  4  FFT  cpu stressors, stop after 10000 bogo operations and
4789              produce a summary just for the FFT results.
4790
4791       stress-ng --cpu -1 --cpu-method all -t 1h --cpu-load 90
4792
4793              run cpu stressors on all online CPUs  working  through  all  the
4794              available CPU stressors for 1 hour, loading the CPUs at 90% load
4795              capacity.
4796
4797       stress-ng --cpu 0 --cpu-method all -t 20m
4798
4799              run cpu stressors on all configured CPUs working through all the
4800              available CPU stressors for 20 minutes.
4801
4802       stress-ng --all 4 --timeout 5m
4803
4804              run 4 instances of all the stressors for 5 minutes.
4805
4806       stress-ng --random 64
4807
4808              run 64 stressors that are randomly chosen from all the available
4809              stressors.
4810
4811       stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
4812
4813              run 64 instances of all the different cpu stressors  and  verify
4814              that the computations are correct for 10 minutes with a bogo op‐
4815              erations summary at the end.
4816
4817       stress-ng --sequential -1 -t 10m
4818
4819              run all the stressors one by one for 10 minutes, with the number
4820              of  instances  of  each  stressor  matching the number of online
4821              CPUs.
4822
4823       stress-ng --sequential 8 --class io -t 5m --times
4824
4825              run all the stressors in the io class one by one for  5  minutes
4826              each, with 8 instances of each stressor running concurrently and
4827              show overall time utilisation statistics at the end of the run.
4828
4829       stress-ng --all -1 --maximize --aggressive
4830
4831              run all the stressors (1 instance of each per online CPU) simul‐
4832              taneously,  maximize  the  settings  (memory sizes, file alloca‐
4833              tions, etc.) and select the most demanding/aggressive options.
4834
4835       stress-ng --random 32 -x numa,hdd,key
4836
4837              run 32 randomly selected stressors and exclude the numa, hdd and
4838              key stressors.
4839
4840       stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
4841
4842              run 4 instances of the VM stressors one after another, exclud‐
4843              ing the bigheap, brk and stack stressors.
4844
4845       stress-ng --taskset 0,2-3 --cpu 3
4846
4847              run 3 instances of the CPU stressor and pin them to  CPUs  0,  2
4848              and 3.
4849

EXIT STATUS

4851         Status     Description
4852           0        Success.
4853           1        Error; incorrect user options or a fatal resource issue in
4854                    the stress-ng stressor harness (for example, out  of  mem‐
4855                    ory).
4856           2        One or more stressors failed.
4857           3        One or more stressors failed to initialise because of lack
4858                    of resources, for example ENOMEM (no memory),  ENOSPC  (no
4859                    space on file system) or a missing or unimplemented system
4860                    call.
4861           4        One or more stressors were not implemented on  a  specific
4862                    architecture or operating system.
4863           5        A stressor has been killed by an unexpected signal.
4864           6        A  stressor  exited  by exit(2) which was not expected and
4865                    timing metrics could not be gathered.
4866           7        The bogo ops metrics may be untrustworthy. This is most
4867                    likely  to  occur  when a stress test is terminated during
4868                    the update of a bogo-ops counter such as when it has  been
4869                    OOM killed. A less likely reason is that the counter ready
4870                    indicator has been corrupted.
4871

BUGS

4873       File bug reports at:
4874         https://launchpad.net/ubuntu/+source/stress-ng/+filebug
4875

SEE ALSO

4877       cpuburn(1), perf(1), stress(1), taskset(1)
4878

AUTHOR

4880       stress-ng was written by Colin King <colin.king@canonical.com> and is a
4881       clean  room re-implementation and extension of the original stress tool
4882       by Amos Waterland. Thanks also for  contributions  from  Abdul  Haleem,
4883       Adrian  Ratiu,  André  Wild,  Baruch  Siach,  Carlos  Santos, Christian
4884       Ehrhardt,  Chunyu  Hu,  David  Turner,  Dominik  B  Czarnota,   Fabrice
4885       Fontaine,  Helmut  Grohne,  James  Hunt,  James Wang, Jianshen Liu, Jim
4886       Rowan, Joseph DeVincentis, Khalid Elmously, Khem Raj, Luca Pizzamiglio,
4887       Luis   Henriques,  Manoj  Iyer,  Matthew  Tippett,  Mauricio  Faria  de
4888       Oliveira, Piyush Goyal, Ralf Ramsauer, Rob Colclaser,  Thadeu  Lima  de
4889       Souza  Cascardo,  Thia  Wyrod,  Tim Gardner, Tim Orling, Tommi Rantala,
4890       Witold Baryluk, Zhiyi Sun and others.
4891

NOTES

4893       Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
4894       all  the stressor processes and ensures temporary files and shared mem‐
4895       ory segments are removed cleanly.
4896
4897       Sending a SIGUSR2 to stress-ng will dump out the current  load  average
4898       and memory statistics.
4899
4900       Note  that the stress-ng cpu, io, vm and hdd tests are different imple‐
4901       mentations of the original stress tests and hence may produce different
4902       stress  characteristics.   stress-ng  does  not  support any GPU stress
4903       tests.
4904
4905       The bogo operations metrics may change with each  release   because  of
4906       bug  fixes to the code, new features, compiler optimisations or changes
4907       in system call performance.
4908
4910       Copyright © 2013-2021 Canonical Ltd.
4911       This is free software; see the source for copying conditions.  There is
4912       NO  warranty;  not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
4913       PURPOSE.
4914
4915
4916
4917                                  Aug 2, 2021                     STRESS-NG(1)