1STRESS-NG(1)                General Commands Manual               STRESS-NG(1)
2
3
4

NAME

6       stress-ng - a tool to load and stress a computer system
7
8

SYNOPSIS

10       stress-ng [OPTION [ARG]] ...
11
12

DESCRIPTION

14       stress-ng  will  stress  test  a  computer system in various selectable
15       ways. It was designed to exercise various physical subsystems of a com‐
16       puter  as  well  as  the  various  operating  system kernel interfaces.
17       stress-ng also has a wide range of CPU specific stress tests that exer‐
18       cise floating point, integer, bit manipulation and control flow.
19
20       stress-ng  was originally intended to make a machine work hard and trip
21       hardware issues such as thermal overruns as well  as  operating  system
22       bugs  that  only  occur  when  a  system  is  being  thrashed hard. Use
23       stress-ng with caution as some of the tests can make a system  run  hot
24       on poorly designed hardware and also can cause excessive system thrash‐
25       ing which may be difficult to stop.
26
27       stress-ng can also measure test throughput rates; this can be useful to
28       observe  performance changes across different operating system releases
29       or types of hardware. However, it has never been intended to be used as
30       a precise benchmark test suite, so do NOT use it in this manner.
31
32       Running  stress-ng  with root privileges will adjust out of memory set‐
33       tings on Linux systems to make the stressors unkillable in  low  memory
34       situations,  so  use this judiciously.  With the appropriate privilege,
35       stress-ng can allow the ionice class and ionice levels to be  adjusted,
36       again, this should be used with care.
37
38       One  can  specify  the number of processes to invoke per type of stress
39       test; specifying a zero value will  select  the  number  of  processors
40       available as defined by sysconf(_SC_NPROCESSORS_CONF); if that can't be
41       determined, the number of online CPUs is used.  If the value is
42       less than zero, then the number of online CPUs is used.
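
       For example, the following illustrative invocation starts one cpu
       stressor per configured processor (N = 0) for one minute:

              stress-ng --cpu 0 --timeout 1m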
43

OPTIONS

45       General stress-ng control options:
46
47       --abort
48              this  option  will  force all running stressors to abort (termi‐
49              nate) if any other stressor terminates prematurely because of  a
50              failure.
51
52       --aggressive
53              enables more file, cache and memory aggressive options. This may
54              slow tests down, increase latencies and  reduce  the  number  of
55              bogo  ops as well as changing the balance of user time vs system
56              time used depending on the type of stressor being used.
57
58       -a N, --all N, --parallel N
59              start N instances of all stressors in parallel.  If  N  is  less
60              than zero, then the number of CPUs online is used for the number
61              of instances.  If N is zero, then the number of configured  CPUs
62              in the system is used.
63
64       -b N, --backoff N
65              wait  N  microseconds  between  the  start of each stress worker
66              process. This allows one to ramp up the stress tests over time.
67
68       --class name
69              specify the class of stressors to run. Stressors are  classified
70              into  one  or more of the following classes: cpu, cpu-cache, de‐
71              vice, gpu, io, interrupt, filesystem, memory, network, os, pipe,
72              scheduler  and vm.  Some stressors fall into just one class. For
73              example the 'get' stressor is just  in  the  'os'  class.  Other
74              stressors  fall  into  more  than  one  class,  for example, the
75              'lsearch' stressor falls into the 'cpu', 'cpu-cache'  and  'mem‐
76              ory'  classes as it exercises all these three.  Selecting a spe‐
77              cific class will run all the stressors that fall into that class
78              only when run with the --sequential option.
79
80              Specifying  a  name  followed  by  a  question mark (for example
81              --class vm?) will print out all the stressors in  that  specific
82              class.
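
              For example, one might list the stressors in the vm class, then
              run one instance of each io class stressor sequentially:

              stress-ng --class vm?
              stress-ng --class io --sequential 1 --timeout 30s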
83
84       -n, --dry-run
85              parse options, but do not run stress tests. A no-op.
86
87       --ftrace
88              enable kernel function call tracing (Linux only).  This will use
89              the kernel debugfs ftrace mechanism to  record  all  the  kernel
90              functions  used  on the system while stress-ng is running.  This
91              is only as accurate as the kernel ftrace output, so there may be
92              some variability in the data reported.
93
94       -h, --help
95              show help.
96
97       --ignite-cpu
98              alter kernel controls to try and maximize CPU performance. This requires
99              root privilege to alter various /sys interface  controls.   Cur‐
100              rently  this only works for Intel P-State enabled x86 systems on
101              Linux.
102
103       --ionice-class class
104              specify ionice class (only on Linux).  Can  be  idle  (default),
105              besteffort, be, realtime, rt.
106
107       --ionice-level level
108              specify  ionice  level  (only on Linux). For idle, 0 is the only
109              possible option. For besteffort or realtime, values range from 0
110              (highest priority) to 7 (lowest priority). See ionice(1) for more de‐
111              tails.
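
              For example, an illustrative run of 2 POSIX aio stressors at the
              lowest best-effort I/O priority:

              stress-ng --aio 2 --ionice-class besteffort --ionice-level 7 -t 60s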
112
113       --iostat S
114              every S seconds show I/O statistics on the  device  that  stores
115              the  stress-ng temporary files. This is either the device of the
116              current working directory or  the  --temp-path  specified  path.
117              Currently a Linux only option.  The fields output are:
118
119              Column Heading     Explanation
120              Inflight           number  of  I/O requests that have been
121                                 issued to the device  driver  but  have
122                                 not yet completed
123              Rd K/s             read rate in 1024 bytes per second
124              Wr K/s             write rate in 1024 bytes per second
125              Dscd K/s           discard rate in 1024 bytes per second
126              Rd/s               reads per second
127              Wr/s               writes per second
128              Dscd/s             discards per second
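
              For example, an illustrative run of the copy-file stressor with
              I/O statistics reported every 5 seconds:

              stress-ng --copy-file 1 --iostat 5 --timeout 2m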
129
130       --job jobfile
131              run  stressors  using  a  jobfile.  The jobfile is essentially a
132              file containing stress-ng options (without the leading --)  with
133              one  option  per line. Lines may have comments with comment text
134              preceded by the # character. A simple example is as follows:
135
136              run sequential   # run stressors sequentially
137              verbose          # verbose output
138              metrics-brief    # show metrics at end of run
139              timeout 60s      # stop each stressor after 60 seconds
140              #
141              # vm stressor options:
142              #
143              vm 2             # 2 vm stressors
144              vm-bytes 128M    # 128MB available memory
145              vm-keep          # keep vm mapping
146              vm-populate      # populate memory
147              #
148              # memcpy stressor options:
149              #
150              memcpy 5         # 5 memcpy stressors
151
152              The job file introduces the run command that  specifies  how  to
153              run the stressors:
154
155              run sequential - run stressors sequentially
156              run parallel - run stressors together in parallel
157
158              Note that 'run parallel' is the default.
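
              As an illustration, a jobfile saved as example.job (an arbitrary
              file name) would be run with:

              stress-ng --job example.job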
159
160       --keep-files
161              do  not  remove  files and directories created by the stressors.
162              This can be useful for debugging purposes. Not generally  recom‐
163              mended as it can fill up a file system.
164
165       -k, --keep-name
166              by  default,  stress-ng  will  attempt to change the name of the
167              stress processes according to their functionality;  this  option
168              disables this and keeps the process name the same as that of the
169              parent process, that is, stress-ng.
170
171       --klog-check
172              check the kernel log for kernel error and warning  messages  and
173              report  these  as  soon as they are detected. Linux only and re‐
174              quires root capability to read the kernel log.
175
176       --log-brief
177              by default stress-ng will report the name of  the  program,  the
178              message  type  and the process id as a prefix to all output. The
179              --log-brief option will output messages without these fields  to
180              produce a less verbose output.
181
182       --log-file filename
183              write messages to the specified log file.
184
185       --maximize
186              overrides  the  default stressor settings and instead sets these
187              to the maximum settings allowed.  These defaults can  always  be
188              overridden by the per stressor settings options if required.
189
190       --max-fd N
191              set  the maximum limit on file descriptors (value or a % of sys‐
192              tem allowed maximum).  By default, stress-ng  can  use  all  the
193              available  file  descriptors;  this option sets the limit in the
194              range from 10 up to the maximum limit of RLIMIT_NOFILE.  One can
195              use  a  % setting too, e.g. 50% is half the maximum allowed file
196              descriptors.  Note that stress-ng will use about 5 of the avail‐
197              able file descriptors so take this into consideration when using
198              this setting.
199
200       --metrics
201              output number of bogo  operations  in  total  performed  by  the
202              stress  processes.  Note that these are not a reliable metric of
203              performance or throughput and have not been designed to be  used
204              for  benchmarking  whatsoever. The metrics are just a useful way
205              to observe how a system behaves  when  under  various  kinds  of
206              load.
207
208              The following columns of information are output:
209
210              Column Heading        Explanation
211              bogo ops              number  of  iterations  of the stressor
212                                    during the run. This is a metric of how
213                                    much  overall  "work" has been achieved
214                                    in bogo operations.
215              real time (secs)      average wall clock  duration  (in  sec‐
216                                    onds)  of the stressor. This is the to‐
217                                    tal wall clock  time  of  all  the  in‐
218                                    stances of that particular stressor di‐
219                                    vided by the number of these  stressors
220                                    being run.
221              usr time (secs)       total  user  time (in seconds) consumed
222                                    running all the instances of the stres‐
223                                    sor.
224              sys time (secs)       total system time (in seconds) consumed
225                                    running all the instances of the stres‐
226                                    sor.
227              bogo   ops/s  (real   total bogo operations per second  based
228              time)                 on  wall clock run time. The wall clock
229                                    time reflects the  apparent  run  time.
230                                    The more processors one has on a system
231                                    the more the work load can be  distrib‐
232                                    uted  onto  these  and  hence  the wall
233                                    clock time will reduce and the bogo ops
234                                    rate  will  increase.   This  is essen‐
235                                    tially the "apparent" bogo ops rate  of
236                                    the system.
237              bogo ops/s (usr+sys   total bogo operations per second  based
238              time)                 on  cumulative  user  and  system time.
239                                    This is the real bogo ops rate  of  the
240                                    system  taking  into  consideration the
241                                    actual execution time of the
242                                    stressor  across  all  the  processors.
243                                    Generally this  will  decrease  as  one
244                                    adds  more  concurrent stressors due to
245                                    contention on cache, memory,  execution
246                                    units, buses and I/O devices.
247              CPU  used  per  in‐   total percentage of CPU used divided by
248              stance (%)            number of stressor instances. 100% is 1
249                                    full CPU. Some stressors  run  multiple
250                                    threads  so  it  is  possible to have a
251                                    figure greater than 100%.
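
              For example, an illustrative run of 4 cpu stressors for 60
              seconds with the metrics table printed at the end of the run:

              stress-ng --cpu 4 --metrics --timeout 60s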
252
253       --metrics-brief
254              show shorter list of stressor  metrics  (no  CPU  used  per  in‐
255              stance).
256
257       --minimize
258              overrides  the  default stressor settings and instead sets these
259              to the minimum settings allowed.  These defaults can  always  be
260              overridden by the per stressor settings options if required.
261
262       --no-madvise
263              from  version  0.02.26  stress-ng automatically calls madvise(2)
264              with random advise options before each mmap and munmap to stress
265              the  vm  subsystem a little harder. The --no-madvise option turns
266              this default off.
267
268       --no-oom-adjust
269              disable any form of out-of-memory score  adjustments,  keep  the
270              system defaults.  Normally stress-ng will adjust the out-of-mem‐
271              ory scores on stressors to try to create more  memory  pressure.
272              This option disables the adjustments.
273
274       --no-rand-seed
275              Do  not seed the stress-ng pseudo-random number generator with a
276              quasi random start seed, but instead seed it with constant  val‐
277              ues.  This  forces  tests  to run each time using the same start
278              conditions which can be useful when  one  requires  reproducible
279              stress tests.
280
281       --oomable
282              Do not respawn a stressor if it gets killed by the Out-of-Memory
283              (OOM) killer.  The default behaviour is to  restart  a  new  in‐
284              stance  of  a  stressor  if the kernel OOM killer terminates the
285              process. This option disables this default behaviour.
286
287       --page-in
288              touch allocated pages that are not in core, forcing them  to  be
289              paged  back  in.  This is a useful option to force all the allo‐
290              cated pages to be paged in when using the bigheap, mmap  and  vm
291              stressors.  It will severely degrade performance when the memory
292              in the system is less than the  allocated  buffer  sizes.   This
293              uses  mincore(2) to determine the pages that are not in core and
294              hence need touching to page them back in.
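
              For example, an illustrative run of 2 vm stressors with all
              allocated pages forced to be paged in:

              stress-ng --vm 2 --vm-bytes 128M --page-in --timeout 60s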
295
296       --pathological
297              enable stressors that are known to hang systems.  Some stressors
298              can  quickly  consume  resources  in  such  a  way that they can
299              rapidly hang a system before the kernel can OOM kill them. These
300              stressors  are not enabled by default, this option enables them,
301              but you probably don't want to do this. You have been warned.
302
303       --perf measure processor and system activity using perf  events.  Linux
304              only and caveat emptor, according to perf_event_open(2): "Always
305              double-check your results! Various generalized events  have  had
306              wrong  values.".   Note  that  with  Linux 4.7 one needs to have
307              CAP_SYS_ADMIN capabilities for this option to  work,  or  adjust
308              /proc/sys/kernel/perf_event_paranoid  to  below  2  to  use this
309              without CAP_SYS_ADMIN.
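
              For example, an illustrative run gathering perf events for 2 cpu
              stressors (root or a suitably low perf_event_paranoid setting
              may be required):

              stress-ng --cpu 2 --perf --timeout 30s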
310
311       -q, --quiet
312              do not show any output.
313
314       -r N, --random N
315              start N random stress workers. If N is 0,  then  the  number  of
316              configured processors is used for N.
317
318       --sched scheduler
319              select  the  named scheduler (only on Linux). To see the list of
320              available schedulers use: stress-ng --sched which
321
322       --sched-prio prio
323              select the scheduler priority level  (only  on  Linux).  If  the
324              scheduler  does not support this then the default priority level
325              of 0 is chosen.
326
327       --sched-period period
328              select the period parameter  for  deadline  scheduler  (only  on
329              Linux). Default value is 0 (in nanoseconds).
330
331       --sched-runtime runtime
332              select  the  runtime  parameter  for deadline scheduler (only on
333              Linux). Default value is 99999 (in nanoseconds).
334
335       --sched-deadline deadline
336              select the deadline parameter for deadline  scheduler  (only  on
337              Linux). Default value is 100000 (in nanoseconds).
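
              As an illustrative sketch, the deadline scheduler parameters
              might be combined as follows (root privilege is typically
              required to change the scheduling policy):

              stress-ng --cpu 1 --sched deadline --sched-runtime 90000 \
                        --sched-period 100000 --sched-deadline 100000 -t 60s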
338
339       --sched-reclaim
340              use  cpu  bandwidth reclaim feature for deadline scheduler (only
341              on Linux).
342
343       --seed N
344              set the random number generator seed with a 64 bit value.  Allows
345              stressors  to  use the same random number generator sequences on
346              each invocation.
347
348       --sequential N
349              sequentially run all the stressors one by one for a  default  of
350              60  seconds.  The  number of instances of each of the individual
351              stressors to be started is N.  If N is less than zero, then  the
352              number of CPUs online is used for the number of instances.  If N
353              is zero, then the number of CPUs in the system is used.  Use the
354              --timeout option to specify the duration to run each stressor.
355
356       --skip-silent
357              silence  messages  that  report that a stressor has been skipped
358              because it requires features not supported by the  system,  such
359              as  unimplemented  system  calls, missing resources or processor
360              specific features.
361
362       --smart
363              scan the block devices for changes in S.M.A.R.T. statistics (Linux
364              only).  This  requires root privileges to read the Self-Monitor‐
365              ing, Analysis and Reporting Technology data from all  block  de‐
366              vices and will report any changes in the statistics. One caveat
367              is that device manufacturers provide different sets of data; the
368              exact meaning of the data can be vague and the data may be inac‐
369              curate.
370
371       --stdout
372              all output goes to stdout. By default all output goes to  stderr
373              (which  is  a  historical  oversight that will cause breakage to
374              users if it is now changed). This option allows the output to be
375              written to stdout.
376
377       --stressors
378              output the names of the available stressors.
379
380       --syslog
381              log output (except for verbose -v messages) to the syslog.
382
383       --taskset list
384              set  CPU  affinity based on the list of CPUs provided; stress-ng
385              is bound to just use these CPUs (Linux only).  The  CPUs  to  be
386              used  are specified by a comma separated list of CPUs (0 to N-1).
387              One can  specify  a  range  of  CPUs  using  '-',  for  example:
388              --taskset 0,2-3,6,7-11
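
              For example, an illustrative run of 3 cpu stressors bound to
              CPUs 0, 2 and 3:

              stress-ng --cpu 3 --taskset 0,2-3 --timeout 60s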
389
390       --temp-path path
391              specify a path for stress-ng temporary directories and temporary
392              files; the default path is the current working directory.   This
393              path  must  have  read and write access for the stress-ng stress
394              processes.
395
396       --thermalstat S
397              every S seconds show CPU and thermal load statistics.  This  op‐
398              tion  shows  average  CPU  frequency  in GHz (average of online-
399              CPUs), load averages (1 minute, 5 minutes and 15 minutes) and
400              available thermal zone temperatures in degrees Centigrade.
401
402       --thrash
403              This can only be used when running on Linux and with root privi‐
404              lege. This option starts  a  background  thrasher  process  that
405              works through all the processes on a system and tries to page as
406              many pages in the processes as possible.  It  also  periodically
407              drops  the  page cache, frees reclaimable slab objects and page‐
408              cache. This will cause a considerable amount of thrashing of swap
409              on an over-committed system.
410
411       -t T, --timeout T
412              run  each stress test for at least T seconds. One can also spec‐
413              ify the units of time in seconds, minutes, hours, days or  years
414              with  the  suffix  s, m, h, d or y. Each stressor will be sent a
415              SIGALRM signal at the timeout time; however, if the stress test
416              is swapped out, in a non-interruptible system call or performing
417              clean up (such as removing hundreds of test files) it may take a
418              while to finally terminate.  A 0 timeout will run stress-ng
419              forever with no timeout.
420
421       --timestamp
422              add a timestamp in hours, minutes, seconds and hundredths  of  a
423              second to the log output.
424
425       --timer-slack N
426              adjust  the  per  process  timer  slack  to N nanoseconds (Linux
427              only). Increasing the timer slack allows the kernel to  coalesce
428              timer  events by adding some fuzziness to timer expiration times
429              and hence reduce  wakeups.   Conversely,  decreasing  the  timer
430              slack  will  increase wakeups.  A value of 0 for the timer-slack
431              will set the system default of 50,000 nanoseconds.
432
433       --times
434              show the cumulative user and system times of all the child  pro‐
435              cesses at the end of the stress run.  The percentage of utilisa‐
436              tion of available CPU time is also calculated from the number of
437              on-line CPUs in the system.
438
439       --tz   collect temperatures from the available thermal zones on the ma‐
440              chine (Linux only).  Some devices may have one or  more  thermal
441              zones, whereas others may have none.
442
443       -v, --verbose
444              show all debug, warning and normal information output.
445
446       --verify
447              verify  results when a test is run. This is not available on all
448              tests. This will sanity check the computations  or  memory  con‐
449              tents  from a test run and report to stderr any unexpected fail‐
450              ures.
451
452       --verifiable
453              print the names of stressors  that  can  be  verified  with  the
454              --verify option.
455
456       -V, --version
457              show  version  of  stress-ng, version of toolchain used to build
458              stress-ng and system information.
459
460       --vmstat S
461              every S seconds show statistics about processes, memory, paging,
462              block I/O, interrupts, context switches, disks and cpu activity.
463              The output is similar to that of the vmstat(8)
464              utility. Currently a Linux only option.
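
              For example, an illustrative run of 2 vm stressors with vmstat
              style statistics reported every 10 seconds:

              stress-ng --vm 2 --vmstat 10 --timeout 5m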
465
466       -x, --exclude list
467              specify  a list of one or more stressors to exclude (that is, do
468              not run them).  This is useful  to  exclude  specific  stressors
469              when one selects many stressors to run using the --class,
470              --sequential, --all and --random options. For example, run the cpu
471              class  stressors  concurrently  and  exclude the numa and search
472              stressors:
473
474              stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch
475
476       -Y, --yaml filename
477              output gathered statistics to a YAML formatted file named 'file‐
478              name'.
479
480
481
482       Stressor specific options:
483
484       --access N
485              start  N workers that work through various settings of file mode
486              bits (read, write, execute) for the file owner and check that the
487              user permissions of the file reported by access(2) and faccessat(2)
488              are sane.
489
490       --access-ops N
491              stop access workers after N bogo access sanity checks.
492
493       --affinity N
494              start N workers that run 16 processes that  rapidly  change  CPU
495              affinity  (only  on  Linux).  Rapidly switching CPU affinity can
496              contribute to poor cache behaviour and high context switch rate.
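
              For example, an illustrative run of 2 affinity stressors that
              switch CPU affinity randomly:

              stress-ng --affinity 2 --affinity-rand --timeout 60s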
497
498       --affinity-ops N
499              stop affinity workers after N bogo affinity operations.
500
501       --affinity-delay N
502              delay for N nanoseconds before changing  affinity  to  the  next
503              CPU.  The delay will spin on CPU scheduling yield operations for
504              N nanoseconds before the process is moved to  another  CPU.  The
505              default is 0 nanoseconds.
506
507       --affinity-pin
508              pin all the 16 per stressor processes to a CPU. All 16 processes
509              follow the CPU chosen by the main parent stressor, forcing heavy
510              per CPU loading.
511
512       --affinity-rand
513              switch  CPU affinity randomly rather than the default of sequen‐
514              tially.
515
516       --affinity-sleep N
517              sleep for N nanoseconds before changing  affinity  to  the  next
518              CPU.
519
520       --af-alg N
521              start  N workers that exercise the AF_ALG socket domain by hash‐
522              ing and encrypting various sized random messages. This exercises
523              the  available  hashes,  ciphers, rng and aead crypto engines in
524              the Linux kernel.
525
526       --af-alg-ops N
527              stop af-alg workers after N AF_ALG messages are hashed.
528
529       --af-alg-dump
530              dump the internal  list  representing  cryptographic  algorithms
531              parsed from the /proc/crypto file to standard output (stdout).
532
533       --aio N
534              start  N  workers  that  issue  multiple  small asynchronous I/O
535              writes and reads on a relatively small temporary file using  the
536              POSIX  aio  interface.  This will just hit the file system cache
537              and soak up a lot of user and kernel time in  issuing  and  han‐
538              dling I/O requests.  By default, each worker process will handle
539              16 concurrent I/O requests.
540
541       --aio-ops N
542              stop POSIX asynchronous I/O workers after  N  bogo  asynchronous
543              I/O requests.
544
545       --aio-requests N
546              specify  the  number  of  POSIX  asynchronous  I/O requests each
547              worker should issue, the default is 16; 1 to 4096 are allowed.
548
549       --aiol N
550              start N workers that issue multiple 4K random  asynchronous  I/O
551              writes  using  the  Linux  aio system calls io_setup(2), io_sub‐
552              mit(2), io_getevents(2) and  io_destroy(2).   By  default,  each
553              worker process will handle 16 concurrent I/O requests.
554
555       --aiol-ops N
556              stop  Linux  asynchronous  I/O workers after N bogo asynchronous
557              I/O requests.
558
559       --aiol-requests N
560              specify the number  of  Linux  asynchronous  I/O  requests  each
561              worker should issue, the default is 16; 1 to 4096 are allowed.
562
563       --alarm N
564              start N workers that exercise alarm(2) with MAXINT, 0 and random
565              alarm and sleep delays that get prematurely interrupted.  Before
566              each  alarm  is  scheduled  any previous pending alarms are can‐
567              celled with zero second alarm calls.
568
569       --alarm-ops N
570              stop after N alarm bogo operations.
571
572       --apparmor N
573              start N workers that exercise various parts of the AppArmor  in‐
574              terface. Currently one needs root permission to run this partic‐
575              ular test. Only available on Linux systems with AppArmor support
576              and requires the CAP_MAC_ADMIN capability.
577
578       --apparmor-ops N
579              stop the AppArmor workers after N bogo operations.
580
581       --atomic N
582              start N workers that exercise various GCC __atomic_*() built-in
583              operations on 8, 16, 32 and 64  bit  integers  that  are  shared
584              among  the N workers. This stressor is only available for builds
585              using GCC 4.7.4 or higher. The stressor forces  many  front  end
586              cache stalls and cache references.
587
588       --atomic-ops N
589              stop the atomic workers after N bogo atomic operations.
590
591       --bad-altstack N
592              start N workers that create broken alternative signal stacks for
593              SIGSEGV and  SIGBUS  handling  that  in  turn  create  secondary
594              SIGSEGV/SIGBUS errors.  A variety of randomly selected nefarious
595              methods are used to create the stacks:
596
597              • Unmapping the alternative signal stack, before triggering  the
598                signal handling.
599              • Changing the alternative signal stack to be just read only,
600                write only or execute only.
601              • Using a NULL alternative signal stack.
602              • Using the signal handler  object  as  the  alternative  signal
603                stack.
604              • Unmapping the alternative signal stack during execution of the
605                signal handler.
606              • Using a read-only text  segment  for  the  alternative  signal
607                stack.
608              • Using an undersized alternative signal stack.
609              • Using the VDSO as an alternative signal stack.
610              • Using an alternative stack mapped onto /dev/zero.
611              • Using  an  alternative  stack mapped to a zero sized temporary
612                file to generate a SIGBUS error.
613
614       --bad-altstack-ops N
615              stop the bad alternative stack stressors after  N  SIGSEGV  bogo
616              operations.
617
618
619       --bad-ioctl N
620              start  N workers that perform a range of illegal bad read ioctls
621              (using _IOR) across the  device  drivers.  This  exercises  page
622              size, 64 bit, 32 bit, 16 bit and 8 bit reads as well as NULL ad‐
623              dresses, non-readable pages and  PROT_NONE  mapped  pages.  Cur‐
624              rently only for Linux and requires the --pathological option.
625
626       --bad-ioctl-ops N
627              stop the bad ioctl stressors after N bogo ioctl operations.
628
629       -B N, --bigheap N
630              start N workers that grow their heaps by reallocating memory. If
631              the out of memory killer (OOM) on Linux kills the worker or  the
632              allocation  fails  then  the  allocating process starts all over
633              again.  Note that the OOM adjustment for the worker  is  set  so
634              that the OOM killer will treat these workers as the first candi‐
635              date processes to kill.
636
637       --bigheap-ops N
638              stop the big heap workers after N bogo allocation operations are
639              completed.
640
641       --bigheap-growth N
642              specify amount of memory to grow heap by per iteration. Size can
643              be from 4K to 64MB. Default is 64K.
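
              For example, an illustrative run of 2 bigheap stressors growing
              their heaps by 1MB per iteration (assuming the usual k, m, g
              size suffixes are accepted):

              stress-ng --bigheap 2 --bigheap-growth 1m --timeout 60s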
644
645       --binderfs N
646              start N workers that mount, exercise and unmount  binderfs.  The
647              binder   control   device   is  exercised  with  256  sequential
648              BINDER_CTL_ADD ioctl calls per loop.
649
650       --binderfs-ops N
651              stop after N binderfs cycles.
652
653       --bind-mount N
654              start N workers that repeatedly bind mount / to / inside a  user
655              namespace.  This  can  consume resources rapidly, forcing out of
656              memory situations. Do not use this stressor unless you  want  to
657              risk hanging your machine.
658
659       --bind-mount-ops N
660              stop after N bind mount bogo operations.
661
662       --branch N
663              start  N  workers that randomly branch to 1024 randomly selected
664              locations and hence exercise the CPU branch prediction logic.
665
666       --branch-ops N
667              stop the branch stressors after N × 1024 branches.
668
669       --brk N
670              start N workers that grow the data segment by one page at a time
671              using  multiple  brk(2)  calls.  Each successfully allocated new
672              page is touched to ensure it is resident in memory.  If  an  out
673              of  memory  condition  occurs  then the test will reset the data
674              segment to the point before it started and repeat the data  seg‐
675              ment resizing over again.  The process adjusts the out of memory
676              setting so that it may be killed by  the  out  of  memory  (OOM)
677              killer  before  other  processes.   If  it  is killed by the OOM
678              killer then it will be automatically re-started by a  monitoring
679              parent process.
680
681       --brk-ops N
682              stop the brk workers after N bogo brk operations.
683
684       --brk-mlock
685              attempt  to mlock future brk pages into memory causing more mem‐
686              ory pressure. If mlock(MCL_FUTURE) is implemented then this will
687              stop new brk pages from being swapped out.
688
689       --brk-notouch
690              do  not  touch each newly allocated data segment page. This dis‐
691              ables the default of touching  each  newly  allocated  page  and
692              hence avoids the kernel necessarily backing the page with
693              physical memory.
694
695       --bsearch N
696              start N workers that binary search a sorted array of 32 bit  in‐
697              tegers using bsearch(3). By default, there are 65536 elements in
698              the array.  This is a useful method to exercise random access of
699              memory and processor cache.
700
701       --bsearch-ops N
702              stop the bsearch worker after N bogo bsearch operations are com‐
703              pleted.
704
705       --bsearch-size N
706              specify the size (number of 32 bit integers)  in  the  array  to
707              bsearch. Size can be from 1K to 4M.
708
709       -C N, --cache N
710              start N workers that perform random widespread memory reads and
711              writes to thrash the CPU cache.  The code does not intelligently
712              determine the CPU cache configuration and so it may be sub-opti‐
713              mal in producing hit-miss read/write activity for  some  proces‐
714              sors.
715
716       --cache-cldemote
717              cache line demote (x86 only). This is a no-op for non-x86 archi‐
718              tectures and older x86 processors that do not support this  fea‐
719              ture.
720
721       --cache-clflushopt
722              use  optimized  cache line flush (x86 only). This is a no-op for
723              non-x86 architectures and older x86 processors that do not  sup‐
724              port this feature.
725
726       --cache-clwb
727              cache line writeback (x86 only). This is a no-op for non-x86 ar‐
728              chitectures and older x86 processors that do  not  support  this
729              feature.
730
731       --cache-enable-all
732              where appropriate exercise the cache using cldemote, clflushopt,
733              fence, flush, sfence and prefetch.
734
735       --cache-fence
736              force write serialization on each store  operation  (x86  only).
737              This is a no-op for non-x86 architectures.
738
739       --cache-flush
740              force  flush cache on each store operation (x86 only). This is a
741              no-op for non-x86 architectures.
742
743       --cache-level N
744              specify level of cache to  exercise  (1=L1  cache,  2=L2  cache,
745              3=L3/LLC cache (the default)).  If the cache hierarchy cannot be
746              determined, built-in defaults will apply.
747
748       --cache-no-affinity
749              do not change processor affinity when --cache is in effect.
750
751       --cache-sfence
752              force write serialization on  each  store  operation  using  the
753              sfence  instruction  (x86 only). This is a no-op for non-x86 ar‐
754              chitectures.
755
756       --cache-ops N
757              stop cache thrash workers after N bogo cache thrash operations.
758
759       --cache-prefetch
760              force read prefetch on next read address on  architectures  that
761              support prefetching.
762
763       --cache-ways N
764              specify the number of cache ways to exercise. This allows a sub‐
765              set of the overall cache size to be exercised.
766
767       --cap N
768              start N workers that read per process capabilities via calls  to
769              capget(2) (Linux only).
770
771       --cap-ops N
772              stop after N cap bogo operations.
773
774       --chattr N
775              start N workers that attempt to exercise file attributes via the
776              EXT2_IOC_SETFLAGS ioctl. This is intentionally
777              racy and exercises a range of chattr attributes by enabling and
778              disabling them on a file shared amongst the  N  chattr  stressor
779              processes. (Linux only).
780
781       --chattr-ops N
782              stop after N chattr bogo operations.
783
784       --chdir N
785              start  N workers that change directory between directories using
786              chdir(2).
787
788       --chdir-ops N
789              stop after N chdir bogo operations.
790
791       --chdir-dirs N
792              exercise chdir on N directories. The default  is  8192  directo‐
793              ries; this option allows 64 to 65536 directories to be used instead.
794
795       --chmod N
796              start  N workers that change the file mode bits via chmod(2) and
797              fchmod(2) on the same file. The greater the value of N, the
798              more contention on the single file.  The stressor will work
799              through all the combinations of mode bits.
800
801       --chmod-ops N
802              stop after N chmod bogo operations.
803
804       --chown N
805              start N workers that exercise chown(2) on  the  same  file.  The
806              greater the value of N, the more contention on the single
807              file.
808
809       --chown-ops N
810              stop the chown workers after N bogo chown(2) operations.
811
812       --chroot N
813              start N workers that exercise chroot(2) on various valid and in‐
814              valid chroot paths. Only available on Linux systems and requires
815              the CAP_SYS_ADMIN capability.
816
817       --chroot-ops N
818              stop the chroot workers after N bogo chroot(2) operations.
819
820       --clock N
821              start N workers exercising clocks  and  POSIX  timers.  For  all
822              known clock types this will exercise clock_getres(2), clock_get‐
823              time(2) and clock_nanosleep(2).  For all known  timers  it  will
824              create  a  50000ns  timer  and  busy poll this until it expires.
825              This stressor will cause frequent context switching.
826
827       --clock-ops N
828              stop clock stress workers after N bogo operations.
829
830       --clone N
831              start N  workers  that  create  clones  (via  the  clone(2)  and
832              clone3()  system  calls).  This will rapidly try to create a de‐
833              fault of 8192 clones that immediately die and wait in  a  zombie
834              state  until they are reaped.  Once the maximum number of clones
835              is reached (or clone fails because one has reached  the  maximum
836              allowed)  the  oldest  clone thread is reaped and a new clone is
837              then created in a first-in first-out manner, and then  repeated.
838              A  random  clone flag is selected for each clone to try to exer‐
839              cise different clone operations.  The clone stressor is a  Linux
840              only option.
841
842       --clone-ops N
843              stop clone stress workers after N bogo clone operations.
844
845       --clone-max N
846              try  to  create  as  many  as  N  clone threads. This may not be
847              reached if the system limit is less than N.
848
849       --close N
850              start N workers that try to force  race  conditions  on  closing
851              opened  file  descriptors.   These  file  descriptors  have been
852              opened in various ways to  try  and  exercise  different  kernel
853              close handlers.
854
855       --close-ops N
856              stop close workers after N bogo close operations.
857
858       --context N
859              start  N  workers that run three threads that use swapcontext(3)
860              to implement the thread-to-thread context switching. This  exer‐
861              cises  rapid  process  context saving and restoring and is band‐
862              width limited by register and memory save and restore rates.
863
864       --context-ops N
865              stop context workers after N bogo  context  switches.   In  this
866              stressor, 1 bogo op is equivalent to 1000 swapcontext calls.
867
868       --copy-file N
869              start   N   stressors   that   copy   a  file  using  the  Linux
870              copy_file_range(2) system call. 2MB chunks of  data  are  copied
871              from random locations in one file to random locations in a
872              destination file.  By default, the files are  256  MB  in  size.
873              Data  is  sync'd to the filesystem after each copy_file_range(2)
874              call.
875
876       --copy-file-ops N
877              stop after N copy_file_range() calls.
878
879       --copy-file-bytes N
880              copy file size, the default is 256 MB. One can specify the  size
881              as  %  of  free  space  on the file system or in units of Bytes,
882              KBytes, MBytes and GBytes using the suffix b, k, m or g.
883
884       -c N, --cpu N
885              start N workers  exercising  the  CPU  by  sequentially  working
886              through  all  the different CPU stress methods. Instead of exer‐
887              cising all the CPU stress methods, one can  specify  a  specific
888              CPU stress method with the --cpu-method option.
889
890       --cpu-ops N
891              stop cpu stress workers after N bogo operations.
892
893       -l P, --cpu-load P
894              load CPU with P percent loading for the CPU stress workers. 0 is
895              effectively a sleep (no load) and  100  is  full  loading.   The
896              loading  loop is broken into compute time (load%) and sleep time
897              (100% - load%). Accuracy depends on the overall load of the pro‐
898              cessor  and  the  responsiveness of the scheduler, so the actual
899              load may be different from the desired load.  Note that the num‐
900              ber  of  bogo CPU operations may not be linearly scaled with the
901              load as some systems employ CPU frequency scaling and so heavier
902              loads produce an increased CPU frequency and a greater CPU bogo
903              operation rate.
904
905              Note: This option only applies to the --cpu stressor option  and
906              not to all of the cpu class of stressors.
907
908       --cpu-load-slice S
909              note  -  this option is only useful when --cpu-load is less than
910              100%. The CPU load is broken into multiple busy and idle cycles.
911              Use this option to specify the duration of a busy time slice.  A
912              negative value for S specifies the number of iterations  to  run
913              before  idling  the CPU (e.g. -30 invokes 30 iterations of a CPU
914              stress loop).  A zero value selects a random busy time between 0
915              and 0.5 seconds.  A positive value for S specifies the number of
916              milliseconds to run before idling the CPU (e.g.  100  keeps  the
917              CPU busy for 0.1 seconds).  Specifying small values for S leads
918              to  small  time  slices  and   smoother   scheduling.    Setting
919              --cpu-load  as a relatively low value and --cpu-load-slice to be
920              large will cycle the CPU between long idle and busy  cycles  and
921              exercise  different  CPU  frequencies.  The thermal range of the
922              CPU is also cycled, so this is a good mechanism to exercise  the
923              scheduler,  frequency scaling and passive/active thermal cooling
924              mechanisms.
925
926              Note: This option only applies to the --cpu stressor option  and
927              not to all of the cpu class of stressors.
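
              For example, an illustrative run that cycles 2 cpu stressors
              between long busy and idle periods using a 20% load with 1
              second (1000 ms) busy slices:

              stress-ng --cpu 2 --cpu-load 20 --cpu-load-slice 1000 -t 5m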
928
929       --cpu-old-metrics
930              as of version V0.14.02 the cpu stressor now normalizes each of
931              the cpu stressor method bogo-op counters to try and ensure a
932              similar bogo-op rate for all the methods, preventing the shorter
933              running (and faster) methods from skewing the bogo-op rates when
934              using the default "all" method.  This is based on a reference
935              Intel i5-8350U processor and hence the bogo-ops normalizing fac‐
936              tors will be skewed somewhat on different CPUs, but not as signif‐
937              icantly as the original bogo-op counter rates.  To disable the
938              normalization and fall back to the original metrics, use this
939              option.
940
941       --cpu-method method
942              specify a cpu stress method. By default, all the stress  methods
943              are  exercised  sequentially,  however  one can specify just one
944              method to be used if required.  Available cpu stress methods are
945              described as follows:
946
947              Method              Description
948              all                 iterate  over  all the below cpu stress
949                                  methods
950              ackermann           Ackermann function:  compute  A(3,  7),
951                                  where:
952                                   A(m, n) = n + 1 if m = 0;
953                                   A(m - 1, 1) if m > 0 and n = 0;
954                                   A(m - 1, A(m, n - 1)) if m > 0 and n >
955                                  0
956              apery               calculate Apery's  constant  ζ(3);  the
957                                  sum  of  1/(n  ↑  3)  to a precision of
958                                  1.0x10↑14
959              bitops              various bit  operations  from  bithack,
960                                  namely: reverse bits, parity check, bit
961                                  count, round to nearest power of 2
962              callfunc            recursively call 8 argument C  function
963                                  to a depth of 1024 calls and unwind
964              cfloat              1000  iterations  of  a mix of floating
965                                  point complex operations
966              cdouble             1000 iterations  of  a  mix  of  double
967                                  floating point complex operations
968              clongdouble         1000 iterations of a mix of long double
969                                  floating point complex operations
970              collatz             compute the 1348 steps in  the  collatz
971                                  sequence starting from the number
972                                  989345275647, where f(n) = n / 2 (for
973                                  even n) and f(n) = 3n + 1 (for odd n).
974              correlate           perform an 8192 × 512 correlation of
975                                  random doubles
976              cpuid               fetch cpu  specific  information  using
977                                  the cpuid instruction (x86 only)
978              crc16               compute  1024  rounds of CCITT CRC16 on
979                                  random data
980              decimal32           1000 iterations of a mix of 32 bit dec‐
981                                  imal  floating  point  operations  (GCC
982                                  only)
983              decimal64           1000 iterations of a mix of 64 bit dec‐
984                                  imal  floating  point  operations  (GCC
985                                  only)
986
987              decimal128          1000 iterations of a  mix  of  128  bit
988                                  decimal  floating point operations (GCC
989                                  only)
990              dither              Floyd-Steinberg dithering of a  1024  ×
991                                  768  random image from 8 bits down to 1
992                                  bit of depth
993              div8                50,000 8 bit unsigned integer divisions
994              div16               50,000 16 bit  unsigned  integer  divi‐
995                                  sions
996              div32               50,000  32  bit  unsigned integer divi‐
997                                  sions
998              div64               50,000 64 bit  unsigned  integer  divi‐
999                                  sions
1000              div128              50,000  128  bit unsigned integer divi‐
1001                                  sions
1002              double              1000 iterations of a mix of double pre‐
1003                                  cision floating point operations
1004              euler               compute e using n = (1 + (1 ÷ n)) ↑ n
1005              explog              iterate on n = exp(log(n) ÷ 1.00002)
1006              factorial           find factorials from 1..150 using Stir‐
1007                                  ling's and Ramanujan's approximations
1008              fibonacci           compute Fibonacci sequence of 0, 1,  1,
1009                                  2, 3, 5, 8...
1010              fft                 4096 sample Fast Fourier Transform
1011              fletcher16          1024  rounds  of a naïve implementation
1012                                  of a 16 bit Fletcher's checksum
1013              float               1000 iterations of a  mix  of  floating
1014                                  point operations
1015              float16             1000  iterations  of  a  mix  of 16 bit
1016                                  floating point operations
1017              float32             1000 iterations of  a  mix  of  32  bit
1018                                  floating point operations
1019              float64             1000  iterations  of  a  mix  of 64 bit
1020                                  floating point operations
1021              float80             1000 iterations of  a  mix  of  80  bit
1022                                  floating point operations
1023              float128            1000  iterations  of  a  mix of 128 bit
1024                                  floating point operations
1025              floatconversion     perform 65536  iterations  of  floating
1026                                  point conversions between float, double
1027                                  and long double  floating  point  vari‐
1028                                  ables.
1029              gamma               calculate the Euler-Mascheroni constant
1030                                  γ using the limiting difference between
1031                                  the  harmonic  series  (1 + 1/2 + 1/3 +
1032                                  1/4 + 1/5 ... + 1/n)  and  the  natural
1033                                  logarithm ln(n), for n = 80000.
1034              gcd                 compute GCD of integers
1035              gray                calculate  binary to gray code and gray
1036                                  code back to binary for integers from 0
1037                                  to 65535
1038              hamming             compute  Hamming H(8,4) codes on 262144
1039                                  lots of 4 bit data. This  turns  4  bit
1040                                  data into 8 bit Hamming code containing
1041                                  4 parity bits. For  data  bits  d1..d4,
1042                                  parity bits are computed as:
1043                                    p1 = d2 + d3 + d4
1044                                    p2 = d1 + d3 + d4
1045                                    p3 = d1 + d2 + d4
1046                                    p4 = d1 + d2 + d3
1047              hanoi               solve  a  21 disc Towers of Hanoi stack
1048                                  using the recursive solution
1049              hyperbolic          compute sinh(θ) × cosh(θ) + sinh(2θ)  +
1050                                  cosh(3θ)  for  float,  double  and long
1051                                  double hyperbolic sine and cosine func‐
1052                                  tions where θ = 0 to 2π in 1500 steps
1053              idct                8  ×  8  IDCT  (Inverse Discrete Cosine
1054                                  Transform).
1055              int8                1000 iterations of a mix of 8 bit inte‐
1056                                  ger operations.
1057              int16               1000  iterations of a mix of 16 bit in‐
1058                                  teger operations.
1059              int32               1000 iterations of a mix of 32 bit  in‐
1060                                  teger operations.
1061
1062              int64               1000  iterations of a mix of 64 bit in‐
1063                                  teger operations.
1064              int128              1000 iterations of a mix of 128 bit in‐
1065                                  teger operations (GCC only).
1066              int32float          1000  iterations of a mix of 32 bit in‐
1067                                  teger and floating point operations.
1068              int32double         1000 iterations of a mix of 32 bit  in‐
1069                                  teger  and  double  precision  floating
1070                                  point operations.
1071              int32longdouble     1000 iterations of a mix of 32 bit  in‐
1072                                  teger  and long double precision float‐
1073                                  ing point operations.
1074              int64float          1000 iterations of a mix of 64 bit  in‐
1075                                  teger and floating point operations.
1076              int64double         1000  iterations of a mix of 64 bit in‐
1077                                  teger  and  double  precision  floating
1078                                  point operations.
1079              int64longdouble     1000  iterations of a mix of 64 bit in‐
1080                                  teger and long double precision  float‐
1081                                  ing point operations.
1082              int128float         1000 iterations of a mix of 128 bit in‐
1083                                  teger  and  floating  point  operations
1084                                  (GCC only).
1085              int128double        1000 iterations of a mix of 128 bit in‐
1086                                  teger  and  double  precision  floating
1087                                  point operations (GCC only).
1088              int128longdouble    1000 iterations of a mix of 128 bit in‐
1089                                  teger and long double precision  float‐
1090                                  ing point operations (GCC only).
1091              int128decimal32     1000 iterations of a mix of 128 bit in‐
1092                                  teger and 32 bit decimal floating point
1093                                  operations (GCC only).
1094              int128decimal64     1000 iterations of a mix of 128 bit in‐
1095                                  teger and 64 bit decimal floating point
1096                                  operations (GCC only).
1097              int128decimal128    1000 iterations of a mix of 128 bit in‐
1098                                  teger  and  128  bit  decimal  floating
1099                                  point operations (GCC only).
1100              intconversion       perform  65536  iterations  of  integer
1101                                  conversions between  int16,  int32  and
1102                                  int64 variables.
1103              ipv4checksum        compute 1024 rounds of the 16 bit ones'
1104                                  complement IPv4 checksum.
1105              jmp                 simple unoptimised compare >, <, == and
1106                                  jmp branching.
1107              lfsr32              16384  iterations  of  a  32 bit Galois
1108                                  linear feedback  shift  register  using
1109                                  the polynomial x↑32 + x↑31 + x↑29 + x +
1110                                  1. This generates a ring of  2↑32  -  1
1111                                  unique values (all 32 bit values except
1112                                  for 0).
1113              ln2                 compute ln(2) based on series:
1114                                   1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
1115              logmap              16384 iterations computing chaotic dou‐
1116                                  ble precision values using the logistic
1117                                  map Χn+1 = r × Χn × (1 - Χn) where r is
1118                                  greater than ≈ 3.56994567
1119              longdouble          1000 iterations of a mix of long double
1120                                  precision floating point operations.
1121              loop                simple empty loop.
1122              matrixprod          matrix product of two 128 × 128  matri‐
1123                                  ces of double floats. Testing on 64 bit
1124                                  x86  hardware  shows that this provides
1125                                  a good mix of memory,  cache  and
1126                                  floating point operations and is proba‐
1127                                  bly  the best CPU method to use to make
1128                                  a CPU run hot.
1129              nsqrt               compute sqrt() of  long  doubles  using
1130                                  Newton-Raphson.
1131              omega               compute  the  omega constant defined by
1132                                  Ωe↑Ω = 1 using efficient  iteration  of
1133                                  Ωn+1 = (1 + Ωn) / (1 + e↑Ωn).
1137              parity              compute  parity  using  various methods
1138                                  from the Stanford Bit Twiddling  Hacks.
1139                                  Methods  employed  are:  the naïve way,
1140                                  the naïve way with the Brian  Kernighan
1141                                  bit counting optimisation, the multiply
1142                                  way, the parallel way, the lookup table
1143                                  ways   (2  variations)  and  using  the
1144                                  __builtin_parity function.
1145              phi                 compute the Golden Ratio  ϕ  using  se‐
1146                                  ries.
1147              pi                  compute π using the Srinivasa Ramanujan
1148                                  fast convergence algorithm.
1149              prime               find the first 10000 prime numbers  us‐
1150                                  ing  a  slightly  optimised brute force
1151                                  naïve trial division search.
1152              psi                 compute  ψ  (the  reciprocal  Fibonacci
1153                                  constant) using the sum of the recipro‐
1154                                  cals of the Fibonacci numbers.
1155              queens              compute all the solutions of the  clas‐
1156                                  sic  8  queens  problem for board sizes
1157                                  1..11.
1158              rand                16384 iterations of rand(), where  rand
1159                                  is the MWC pseudo random number genera‐
1160                                  tor.  The MWC random function  concate‐
1161                                  nates  two  16  bit multiply-with-carry
1162                                  generators:
1163                                   x(n) = 36969 × x(n - 1) + carry,
1164                                   y(n) = 18000 × y(n - 1) + carry mod  2
1165                                  ↑ 16
1166
1167                                  and has period of around 2 ↑ 60.
1168              rand48              16384   iterations  of  drand48(3)  and
1169                                  lrand48(3).
1170              rgb                 convert RGB to  YUV  and  back  to  RGB
1171                                  (CCIR 601).
1172              sieve               find  the first 10000 prime numbers us‐
1173                                  ing the sieve of Eratosthenes.
1174              stats               calculate minimum, maximum,  arithmetic
1175                                  mean,  geometric  mean,  harmonic  mean
1176                                  and standard deviation on 250  randomly
1177                                  generated   positive  double  precision
1178                                  values.
1179              sqrt                compute sqrt(rand()), where rand is the
1180                                  MWC pseudo random number generator.
1181              trig                compute  sin(θ)  ×  cos(θ)  + sin(2θ) +
1182                                  cos(3θ) for float, double and long dou‐
1183                                  ble sine and cosine functions where θ =
1184                                  0 to 2π in 1500 steps.
1185              union               perform integer arithmetic on a mix  of
1186                                  bit  fields  in  a C union.  This exer‐
1187                                  cises how well the compiler and CPU can
1188                                  perform  integer  bit  field  loads and
1189                                  stores.
1190              zeta                compute the Riemann Zeta function  ζ(s)
1191                                  for s = 2.0..10.0
1192
1193              Note  that  some  of  these methods try to exercise the CPU with
1194              computations found in some real world use  cases.  However,  the
1195              code  has not been optimised on a per-architecture basis, so may
1196              be sub-optimal compared to hand-optimised code used in  some
1197              applications.   They do try to represent the typical instruction
1198              mixes found in these use cases.
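
              As an illustrative example (the --cpu stressor and the  general
              --timeout  option are documented elsewhere in this manual), the
              following runs one matrixprod worker per configured CPU for  60
              seconds:

                     stress-ng --cpu 0 --cpu-method matrixprod --timeout 60s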
1199
1200       --cpu-online N
1201              start N workers that put randomly selected CPUs offline and  on‐
1202              line.  This  Linux only stressor requires root privilege to per‐
1203              form this action. By default the first CPU (CPU 0) is never  of‐
1204              flined  as this has been found to be problematic on some systems
1205              and can result in a shutdown.
1206
1207       --cpu-online-all
1208              The default is to never offline the first CPU.  This option will
1209              offline and online all the CPUs including CPU 0.  This  may  cause
1210              some systems to shutdown.
1211
1212       --cpu-online-ops N
1213              stop after N offline/online operations.
1214
1215       --crypt N
1216              start N workers that encrypt a 16 character random password  us‐
1217              ing  crypt(3).  The password is encrypted using MD5, SHA-256 and
1218              SHA-512 encryption methods.
1219
1220       --crypt-ops N
1221              stop after N bogo encryption operations.
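
              For example, an illustrative invocation  that  runs  four  crypt
              workers and stops after 10000 bogo encryption operations:

                     stress-ng --crypt 4 --crypt-ops 10000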
1222
1223       --cyclic N
1224              start N workers that exercise the real time FIFO or Round  Robin
1225              schedulers  with  cyclic  nanosecond  sleeps. Normally one would
1226              just use 1 worker instance with this stressor  to  get  reliable
1227              statistics.  This stressor measures the first 10 thousand laten‐
1228              cies and calculates the mean, mode, minimum,  maximum  latencies
1229              along with various latency  percentiles  for  just  the  first
1230              cyclic stressor instance. One has  to  run  this  stressor  with
1231              CAP_SYS_NICE capability to enable the real time scheduling poli‐
1232              cies. The FIFO scheduling policy is the default.
1233
1234       --cyclic-ops N
1235              stop after N sleeps.
1236
1237       --cyclic-dist N
1238              calculate and print a latency distribution with the interval  of
1239              N  nanoseconds.   This is helpful to see where the latencies are
1240              clustering.
1241
1242       --cyclic-method [ clock_ns | itimer |  poll  |  posix_ns  |  pselect  |
1243       usleep ]
1244              specify  the  cyclic method to be used, the default is clock_ns.
1245              The available cyclic methods are as follows:
1246
1247              Method         Description
1248              clock_ns       sleep for the specified time using  the
1249                             clock_nanosleep(2)    high   resolution
1250                             nanosleep and the  CLOCK_REALTIME  real
1251                             time clock.
1252              itimer         wake up a  paused  process  with  a
1253                             CLOCK_REALTIME itimer signal.
1254              poll           delay for the specified  time  using  a
1255                             poll  delay  loop  that checks for time
1256                             changes using clock_gettime(2)  on  the
1257                             CLOCK_REALTIME clock.
1258              posix_ns       sleep  for the specified time using the
1259                             POSIX  nanosleep(2)   high   resolution
1260                             nanosleep.
1261              pselect        sleep for the specified time using pse‐
1262                             lect(2) with null file descriptors.
1263              usleep         sleep to the nearest microsecond  using
1264                             usleep(2).
1265
1266       --cyclic-policy [ fifo | rr ]
1267              specify the desired real time scheduling policy, fifo (first-in,
1268              first-out) or rr (round robin).
1269
1270       --cyclic-prio P
1271              specify the scheduling priority P. Range from 1 (lowest) to  100
1272              (highest).
1273
1274       --cyclic-sleep N
1275              sleep  for N nanoseconds per test cycle using clock_nanosleep(2)
1276              with the  CLOCK_REALTIME  timer.  Range  from  1  to  1000000000
1277              nanoseconds.
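
              An illustrative example (values chosen arbitrarily; -t  is  the
              general timeout option and appropriate privilege is needed  for
              real time scheduling) that runs a single  cyclic  worker  using
              the round robin policy, 10000 ns sleeps and a latency  distrib‐
              ution in 1000 ns buckets for 60 seconds:

                     stress-ng --cyclic 1 --cyclic-policy rr \
                            --cyclic-sleep 10000 --cyclic-dist 1000 -t 60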
1278
1279       --daemon N
1280              start  N workers that each create a daemon that dies immediately
1281              after creating another daemon and so on. This effectively  works
1282              through the process table with short lived processes that do not
1283              have a parent and are waited for by init.  This puts pressure on
1284              init  to  do  rapid child reaping.  The daemon processes perform
1285              the usual mix of calls to turn into  typical  UNIX  daemons,  so
1286              this artificially mimics very heavy daemon system stress.
1287
1288       --daemon-ops N
1289              stop daemon workers after N daemons have been created.
1290
1291       --dccp N
1292              start  N  workers  that send and receive data using the Datagram
1293              Congestion Control Protocol (DCCP) (RFC4340).  This  involves  a
1294              pair of client/server processes  performing  rapid  connects,
1295              sends, receives and disconnects on the local host.
1296
1297       --dccp-domain D
1298              specify the domain to use, the default is ipv4.  Currently  ipv4
1299              and ipv6 are supported.
1300
1301       --dccp-if NAME
1302              use  network  interface NAME. If the interface NAME does not ex‐
1303              ist, is not up or does not support the domain then the  loopback
1304              (lo) interface is used as the default.
1305
1306       --dccp-port P
1307              start DCCP at port P. For N dccp worker processes, ports P  to
1308              P + N - 1 are used.
1309
1310       --dccp-ops N
1311              stop dccp stress workers after N bogo operations.
1312
1313       --dccp-opts [ send | sendmsg | sendmmsg ]
1314              by default, messages are sent using send(2). This option  allows
1315              one  to  specify the sending method using send(2), sendmsg(2) or
1316              sendmmsg(2).  Note that sendmmsg is  only  available  for  Linux
1317              systems that support this system call.
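
              For example, an illustrative run of two dccp workers over  ipv6
              using sendmsg(2) for 30 seconds (via the general -t option):

                     stress-ng --dccp 2 --dccp-domain ipv6 \
                            --dccp-opts sendmsg -t 30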
1318
1319       --dekker N
1320              start N workers that exercise mutual exclusion between two pro‐
1321              cesses using shared memory with the Dekker Algorithm. Where pos‐
1322              sible  this  uses  memory  fencing  and  falls back to using GCC
1323              __sync_synchronize if it is not available. The stressors  con‐
1324              tain simple mutex and memory coherency sanity checks.
1325
1326       --dekker-ops N
1327              stop dekker workers after N mutex operations.
1328
1329       -D N, --dentry N
1330              start  N workers that create and remove directory entries.  This
1331              should create file system meta data activity. The directory  en‐
1332              try  names  are suffixed by a gray-code encoded number to try to
1333              mix up the hashing of the namespace.
1334
1335       --dentry-ops N
1336              stop dentry thrash workers after N bogo dentry operations.
1337
1338       --dentry-order [ forward | reverse | stride | random ]
1339              specify unlink order of dentries, can be  one  of  forward,  re‐
1340              verse,  stride  or random.  By default, dentries are unlinked in
1341              random order.  The forward order will unlink them from first  to
1342              last,  reverse order will unlink them from last to first, stride
1343              order will unlink them by stepping through them in a quasi-ran‐
1344              dom  pattern  and  random order will randomly select one of for‐
1345              ward, reverse or stride orders.
1346
1347       --dentries N
1348              create N dentries per dentry thrashing loop, default is 2048.
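
              An illustrative example creating 4096 dentries  per  loop  and
              unlinking them in reverse order for 60 seconds (via the general
              -t timeout option):

                     stress-ng --dentry 4 --dentry-order reverse \
                            --dentries 4096 -t 60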
1349
1350       --dev N
1351              start N workers that exercise the /dev devices. Each worker runs
1352              5  concurrent  threads that perform open(2), fstat(2), lseek(2),
1353              poll(2), fcntl(2), mmap(2), munmap(2), fsync(2) and close(2)  on
1354              each device.  Note that watchdog devices are not exercised.
1355
1356       --dev-ops N
1357              stop dev workers after N bogo device exercising operations.
1358
1359       --dev-file filename
1360              specify  the device file to exercise, for example, /dev/null. By
1361              default the stressor will work through all the device  files  it
1362              can find, however, this option allows a single device file to be
1363              exercised.
1364
1365       --dev-shm N
1366              start N workers that fallocate large files in /dev/shm and  then
1367              mmap  these  into memory and touch all the pages. This exercises
1368              pages being moved to/from the buffer cache. Linux only.
1369
1370       --dev-shm-ops N
1371              stop after N bogo allocation and mmap /dev/shm operations.
1372
1373       --dir N
1374              start N workers that create and remove directories  using  mkdir
1375              and rmdir.
1376
1377       --dir-ops N
1378              stop directory thrash workers after N bogo directory operations.
1379
1380       --dir-dirs N
1381              exercise dir on N directories. The default is 8192 directories;
1382              this allows 64 to 65536 directories to be used instead.
1383
1384       --dirdeep N
1385              start N workers that create a depth-first tree of directories to
1386              a maximum depth as limited by PATH_MAX or ENAMETOOLONG (which‐
1387              ever occurs first).  By default, each level of the tree contains
1388              one directory, but this can be increased to a maximum of 36 sub-
1389              trees using the --dirdeep-dirs option. To stress inode creation,
1390              a symlink and a hardlink to a file at the root of the  tree  are
1391              created at each level.
1392
1393       --dirdeep-ops N
1394              stop directory depth workers after N bogo directory operations.
1395
1396       --dirdeep-bytes N
1397              allocated file size, the default is 0. One can specify the  size
1398              as  %  of  free  space  on the file system or in units of Bytes,
1399              KBytes, MBytes and GBytes using the suffix b, k, m or g. Used in
1400              conjunction with the --dirdeep-files option.
1401
1402       --dirdeep-dirs N
1403              create  N  directories at each tree level. The default is just 1
1404              but can be increased to a maximum of 36 per level.
1405
1406       --dirdeep-files N
1407              create N files  at each tree level. The default is  0  with  the
1408              file size specified by the --dirdeep-bytes option.
1409
1410       --dirdeep-inodes N
1411              consume  up  to N inodes per dirdeep stressor while creating di‐
1412              rectories and links. The value N can be the number of inodes  or
1413              a  percentage of the total available free inodes on the filesys‐
1414              tem being used.
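
              For example, an illustrative invocation that creates  4  direc‐
              tories per level and consumes at most 5% of  the  free  inodes,
              running for 60 seconds:

                     stress-ng --dirdeep 1 --dirdeep-dirs 4 \
                            --dirdeep-inodes 5% -t 60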
1415
1416       --dirmany N
1417              start N stressors that create as many files in  a  directory  as
1418              possible  and  then  remove  them. The file creation phase stops
1419              when an error occurs (for  example,  out  of  inodes,  too  many
1420              files, quota reached, etc.) and then the files are removed. This
1421              cycles until the run time is reached  or  the  file  creation
1422              count  bogo-ops  metric  is  reached.  This is a much faster and
1423              lightweight directory exercising stressor compared to the  den‐
1424              try stressor.
1425
1426       --dirmany-ops N
1427              stop dirmany stressors after N empty files have been created.
1428
1429       --dirmany-bytes N
1430              allocated  file size, the default is 0. One can specify the size
1431              as % of free space on the file system  or  in  units  of  Bytes,
1432              KBytes, MBytes and GBytes using the suffix b, k, m or g.
1433
1434       --dnotify N
1435              start  N  workers performing file system activities such as mak‐
1436              ing/deleting files/directories, renaming files, etc.  to  stress
1437              exercise the various dnotify events (Linux only).
1438
1439       --dnotify-ops N
1440              stop dnotify stress workers after N dnotify bogo operations.
1441
1442       --dup N
1443              start N workers that perform dup(2) and then close(2) operations
1444              on /dev/zero.  The maximum opens at one time is system  defined,
1445              so  the test will run up to this maximum, or 65536 open file de‐
1446              scriptors, whichever comes first.
1447
1448       --dup-ops N
1449              stop the dup stress workers after N bogo open operations.
1450
1451       --dynlib N
1452              start N workers that dynamically load and unload various  shared
1453              libraries.  This exercises memory mapping and dynamic code load‐
1454              ing and symbol lookups. See dlopen(3) for more details  of  this
1455              mechanism.
1456
1457       --dynlib-ops N
1458              stop workers after N bogo load/unload cycles.
1459
1460       --efivar N
1461              start N workers that exercise the Linux /sys/firmware/efi/vars in‐
1462              terface by reading the EFI  variables.  This  is  a  Linux  only
1463              stress  test  for  platforms that support the EFI vars interface
1464              and requires the CAP_SYS_ADMIN capability.
1465
1466       --efivar-ops N
1467              stop the efivar stressors after N EFI variable read operations.
1468
1469       --enosys N
1470              start N workers that exercise non-functional  system  call  num‐
1471              bers.  This  calls a wide range of system call numbers to see if
1472              it can break a system where these are not  wired  up  correctly.
1473              It  also keeps track of system calls that exist (ones that don't
1474              return ENOSYS) so that it can focus on purely finding and  exer‐
1475              cising non-functional system calls. This stressor exercises sys‐
1476              tem calls from 0 to __NR_syscalls + 1024,  random  system  calls
1477              constrained within the ranges of 0 to 2^8, 2^16,  2^24,  2^32,
1478              2^40, 2^48, 2^56 and 2^64 bits, high  system  call  numbers  and
1479              various  other bit patterns to try to get wide coverage. To keep
1480              the environment clean, each system call being tested runs  in  a
1481              child process with reduced capabilities.
1482
1483       --enosys-ops N
1484              stop after N bogo enosys system call attempts
1485
1486       --env N
1487              start N workers that create numerous  large  environment  vari‐
1488              ables  to  try  to  trigger  out  of  memory  conditions   using
1489              setenv(3).  If ENOMEM occurs then the environment is emptied and
1490              another memory filling retry occurs.  The process  is  restarted
1491              if it is killed by the Out Of Memory (OOM) killer.
1492
1493       --env-ops N
1494              stop after N bogo setenv/unsetenv attempts.
1495
1496       --epoll N
1497              start  N  workers that perform various related socket stress ac‐
1498              tivity using epoll_wait(2) to monitor  and  handle  new  connec‐
1499              tions.  This  involves  client/server processes performing rapid
1500              connect, send/receives and disconnects on the local host.  Using
1501              epoll  allows  a  large  number of connections to be efficiently
1502              handled, however, this can lead to the connection table  filling
1503              up  and  blocking further socket connections, hence impacting on
1504              the epoll bogo op stats.  For ipv4 and  ipv6  domains,  multiple
1505              servers are spawned on multiple ports. The epoll stressor is for
1506              Linux only.
1507
1508       --epoll-domain D
1509              specify the domain to use, the default is unix (aka local). Cur‐
1510              rently ipv4, ipv6 and unix are supported.
1511
1512       --epoll-port P
1513              start at socket port P. For N epoll worker processes, ports  P
1514              to P + (N * 4) - 1 are used for ipv4 and ipv6 domains and ports
1515              P to P + N - 1 are used for the unix domain.
1516
1517       --epoll-ops N
1518              stop epoll workers after N bogo operations.
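
              An illustrative example running  two  epoll  workers  over  ipv4
              starting at port 6000 for 30 seconds (via the general -t option):

                     stress-ng --epoll 2 --epoll-domain ipv4 \
                            --epoll-port 6000 -t 30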
1519
1520       --eventfd N
1521              start  N parent and child worker processes that read and write 8
1522              byte event messages  between  them  via  the  eventfd  mechanism
1523              (Linux only).
1524
1525       --eventfd-ops N
1526              stop eventfd workers after N bogo operations.
1527
1528       --eventfd-nonblock N
1529              enable  EFD_NONBLOCK to allow non-blocking on the event file de‐
1530              scriptor. This will cause reads and writes to return with EAGAIN
1531              rather than blocking and hence cause a  high  rate  of  polling
1532              I/O.
1533
1534       --exec N
1535              start N workers continually forking children that exec stress-ng
1536              and  then  exit almost immediately. If a system has pthread sup‐
1537              port then 1 in 4 of the exec's will be from inside a pthread  to
1538              exercise exec'ing from inside a pthread context.
1539
1540       --exec-ops N
1541              stop exec stress workers after N bogo operations.
1542
1543       --exec-max P
1544              create  P  child processes that exec stress-ng and then wait for
1545              them to exit per iteration. The default is just 1; higher values
1546              will  create many temporary zombie processes that are waiting to
1547              be reaped. One can potentially fill up the process  table  using
1548              high values for --exec-max and --exec.
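
              For example, an illustrative run with four  exec  workers,  each
              spawning 8 children per iteration, for 30 seconds (via the  gen‐
              eral -t timeout option):

                     stress-ng --exec 4 --exec-max 8 -t 30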
1549
1550       --exit-group N
1551              start  N  workers  that  create  16  pthreads  and terminate the
1552              pthreads and the controlling child process using  exit_group(2).
1553              (Linux only stressor).
1554
1555       --exit-group-ops N
1556              stop after N iterations of pthread creation and deletion loops.
1557
1558       -F N, --fallocate N
1559              start  N  workers  continually  fallocating  (preallocating file
1560              space) and ftruncating (file truncating)  temporary  files.   If
1561              the  file  is larger than the free space, fallocate will produce
1562              an ENOSPC error which is ignored by this stressor.
1563
1564       --fallocate-bytes N
1565              allocated file size, the default is 1 GB. One  can  specify  the
1566              size as % of free space on the file system or in units of Bytes,
1567              KBytes, MBytes and GBytes using the suffix b, k, m or g.
1568
1569       --fallocate-ops N
1570              stop fallocate stress workers after N bogo fallocate operations.
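
              An illustrative example with two fallocate workers,  each  pre‐
              allocating up to 10% of the free file system space, for 60 sec‐
              onds:

                     stress-ng --fallocate 2 --fallocate-bytes 10% -t 60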
1571
1572       --fanotify N
1573              start N workers performing file system activities such as creat‐
1574              ing,  opening,  writing, reading and unlinking files to exercise
1575              the fanotify  event  monitoring  interface  (Linux  only).  Each
1576              stressor runs a child process to generate file events and a par‐
1577              ent process to read file events using fanotify. Has  to  be  run
1578              with CAP_SYS_ADMIN capability.
1579
1580       --fanotify-ops N
1581              stop fanotify stress workers after N bogo fanotify events.
1582
1583       --fault N
1584              start N workers that generate minor and major page faults.
1585
1586       --fault-ops N
1587              stop the page fault workers after N bogo page fault operations.
1588
1589       --fcntl N
1590              start  N  workers  that perform fcntl(2) calls with various com‐
1591              mands.  The exercised  commands  (if  available)  are:  F_DUPFD,
1592              F_DUPFD_CLOEXEC,  F_GETFD,  F_SETFD, F_GETFL, F_SETFL, F_GETOWN,
1593              F_SETOWN, F_GETOWN_EX, F_SETOWN_EX, F_GETSIG, F_SETSIG, F_GETLK,
1594              F_SETLK, F_SETLKW, F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW.
1595
1596       --fcntl-ops N
1597              stop the fcntl workers after N bogo fcntl operations.
1598
1599       --fiemap N
1600              start  N  workers  that  each  create  a file with many randomly
1601              changing extents and have 4  child  processes  per  worker  that
1602              gather the extent information using the FS_IOC_FIEMAP ioctl(2).
1603
1604       --fiemap-ops N
1605              stop after N fiemap bogo operations.
1606
1607       --fiemap-bytes N
1608              specify the size of the fiemap'd file in bytes.  One can specify
1609              the size as % of free space on the file system or  in  units  of
1610              Bytes,  KBytes, MBytes and GBytes using the suffix b, k, m or g.
1611              Larger files will contain more extents, causing more stress when
1612              gathering extent information.
1613
1614       --fifo N
1615              start  N  workers  that exercise a named pipe by transmitting 64
1616              bit integers.
1617
1618       --fifo-ops N
1619              stop fifo workers after N bogo pipe write operations.
1620
1621       --fifo-readers N
1622              for each worker, create N fifo  reader  workers  that  read  the
1623              named pipe using simple blocking reads.
1624
1625       --file-ioctl N
1626              start  N  workers  that  exercise various file specific ioctl(2)
1627              calls. This will attempt to use the FIONBIO, FIOQSIZE, FIGETBSZ,
1628              FIOCLEX,  FIONCLEX,  FIOASYNC,  FIFREEZE,  FITHAW,   FICLONE,
1629              FICLONERANGE,  FIONREAD,  FIONWRITE  and  FS_IOC_RESVSP ioctls
1630              if these are defined.
1631
1632       --file-ioctl-ops N
1633              stop file-ioctl workers after N file ioctl bogo operations.
1634
1635       --filename N
1636              start N workers that exercise file creation using various length
1637              filenames containing a range  of  allowed  filename  characters.
1638              This  will  try  to see if it can exceed the file system allowed
1639              filename length as well as test various  filename  lengths  be‐
1640              tween 1 and the maximum allowed by the file system.
1641
1642       --filename-ops N
1643              stop filename workers after N bogo filename tests.
1644
1645       --filename-opts opt
1646              use  characters in the filename based on option 'opt'. Valid op‐
1647              tions are:
1648
1649              Option    Description
1650              probe     default option, probe the file system for valid
1651                        allowed characters in a file name and use these
1652              posix     use  characters  as specified by The Open Group
1653                        Base  Specifications  Issue  7,   POSIX.1-2008,
1654                        3.278 Portable Filename Character Set
1655              ext       use  characters allowed by the ext2, ext3, ext4
1656                        file systems, namely any 8 bit character  apart
1657                        from NUL and /
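
              For example, an illustrative run exercising filenames built only
              from the POSIX portable filename character set for  30  seconds
              (via the general -t timeout option):

                     stress-ng --filename 1 --filename-opts posix -t 30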
1658
1659       --flock N
1660              start N workers locking on a single file.
1661
1662       --flock-ops N
1663              stop flock stress workers after N bogo flock operations.
1664
1665       -f N, --fork N
1666              start  N  workers  continually forking children that immediately
1667              exit.
1668
1669       --fork-ops N
1670              stop fork stress workers after N bogo operations.
1671
1672       --fork-max P
1673              create P child processes and then wait for them to exit per  it‐
1674              eration.  The  default is just 1; higher values will create many
1675              temporary zombie processes that are waiting to  be  reaped.  One
1676              can  potentially fill up the process table using high values for
1677              --fork-max and --fork.
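
              An illustrative example with four fork workers,  each  creating
              16 children per iteration, stopping after  1000000  bogo  fork
              operations:

                     stress-ng --fork 4 --fork-max 16 --fork-ops 1000000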
1678
1679       --fork-vm
1680              enable detrimental performance virtual memory advice using  mad‐
1681              vise  on  all  pages  of the forked process. Where possible this
1682              will try to set every page in the new process using the madvise
1683              MADV_MERGEABLE,  MADV_WILLNEED,  MADV_HUGEPAGE  and  MADV_RANDOM
1684              flags. Linux only.
1685
1686       --fp-error N
1687              start N workers that generate floating point exceptions.  Compu‐
1688              tations  are  performed to force and check for the FE_DIVBYZERO,
1689              FE_INEXACT, FE_INVALID, FE_OVERFLOW and FE_UNDERFLOW exceptions.
1690              EDOM and ERANGE errors are also checked.
1691
1692       --fp-error-ops N
1693              stop after N bogo floating point exceptions.
1694
1695       --fpunch N
1696              start  N workers that punch and fill holes in a 16 MB file using
1697              five concurrent processes per stressor exercising  on  the  same
1698              file.    Where    available,   this   uses   fallocate(2)   FAL‐
1699              LOC_FL_KEEP_SIZE,  FALLOC_FL_PUNCH_HOLE,   FALLOC_FL_ZERO_RANGE,
1700              FALLOC_FL_COLLAPSE_RANGE  and FALLOC_FL_INSERT_RANGE to make and
1701              fill holes across the file and break it into multiple extents.
1702
1703       --fpunch-ops N
1704              stop fpunch workers after N punch and fill bogo operations.
1705
1706       --fstat N
1707              start N workers fstat'ing  files  in  a  directory  (default  is
1708              /dev).
1709
1710       --fstat-ops N
1711              stop fstat stress workers after N bogo fstat operations.
1712
1713       --fstat-dir directory
1714              specify  the directory to fstat to override the default of /dev.
1715              All the files in the directory will be fstat'd repeatedly.
1716
1717       --full N
1718              start N workers that exercise /dev/full.  This attempts to write
1719              to  the  device  (which should always get error ENOSPC), to read
1720              from the device (which should always return a buffer  of  zeros)
1721              and  to  seek  randomly  on the device (which should always suc‐
1722              ceed).  (Linux only).
1723
1724       --full-ops N
1725              stop the stress full workers after N bogo I/O operations.
1726
1727       --funccall N
1728              start N workers that call functions of 1 through to 9 arguments.
1729              By  default  functions  with uint64_t arguments are called, how‐
1730              ever, this can be changed using the --funccall-method option.
1731
1732       --funccall-ops N
1733              stop the funccall workers after N bogo function call operations.
1734              Each bogo operation is 1000 calls of functions of 1 through to 9
1735              arguments of the chosen argument type.
1736
1737       --funccall-method method
1738              specify the method of funccall argument type to be used. The de‐
1739              fault is uint64_t but can be one of bool, uint8, uint16, uint32,
1740              uint64, uint128,  float,  double,  longdouble,  cfloat  (complex
1741              float), cdouble (complex double), clongdouble (complex long dou‐
1742              ble), float16, float32, float64, float80,  float128,  decimal32,
1743              decimal64  and  decimal128.   Note  that some of these types are
1744              only available with specific  architectures  and  compiler  ver‐
1745              sions.
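
              For example, an illustrative run of two funccall workers  using
              float arguments for 30 seconds (via the general -t option):

                     stress-ng --funccall 2 --funccall-method float -t 30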
1746
1747       --funcret N
1748              start  N  workers that pass and return by value various small to
1749              large data types.
1750
1751       --funcret-ops N
1752              stop the funcret workers after N bogo function call operations.
1753
1754       --funcret-method method
1755              specify the method of funcret argument type to be used. The  de‐
1756              fault is uint64_t but can be one of uint8,  uint16,  uint32,
1757              uint64, uint128, float, double, longdouble, float80, float128,
1758              decimal32, decimal64, decimal128, uint8x32, uint8x128 and uint64x128.
1759
1760       --futex N
1761              start  N  workers  that  rapidly exercise the futex system call.
1762              Each worker has two processes, a futex waiter and a futex waker.
1763              The waiter waits with a very small timeout to stress the timeout
1764              and rapid polled futex waiting. This is a Linux specific  stress
1765              option.
1766
1767       --futex-ops N
1768              stop  futex  workers  after  N bogo successful futex wait opera‐
1769              tions.
1770
1771       --get N
1772              start N workers that call system calls that fetch data from  the
1773              kernel,  currently  these  are: getpid, getppid, getcwd, getgid,
1774              getegid, getuid, getgroups, getpgrp, getpgid,  getpriority,  ge‐
1775              tresgid,  getresuid, getrlimit, prlimit, getrusage, getsid, get‐
1776              tid, getcpu, gettimeofday,  uname,  adjtimex,  sysfs.   Some  of
1777              these system calls are OS specific.
1778
1779       --get-ops N
1780              stop get workers after N bogo get operations.
1781
1782       --getdent N
1783              start  N workers that recursively read directories /proc, /dev/,
1784              /tmp, /sys and /run using getdents and getdents64 (Linux only).
1785
1786       --getdent-ops N
1787              stop getdent workers after N bogo getdent bogo operations.
1788
1789       --getrandom N
1790              start N workers that get 8192 random bytes from the /dev/urandom
1791              pool using the getrandom(2) system call (Linux) or getentropy(2)
1792              (OpenBSD).
1793
1794       --getrandom-ops N
1795              stop getrandom workers after N bogo get operations.
1796
1797       --goto N
1798              start N workers that perform 1024 forward branches (to next  in‐
1799              struction)  or  backward  branches (to previous instruction) for
1800              each bogo operation loop.  By default, every 1024  branches  the
1801              direction  is  randomly  chosen to be forward or backward.  This
1802              stressor exercises suboptimal  pipelined  execution  and  branch
1803              prediction logic.
1804
1805       --goto-ops N
1806              stop  goto  workers  after  N bogo loops of 1024 branch instruc‐
1807              tions.
1808
1809       --goto-direction [ forward | backward | random ]
1810              select the branching direction in the stressor loop, forward for
1811              forward only branching, backward for backward  only  branching,
1812              random for a random choice of forward or backward branching every
1813              1024 branches.
1814
1815       --gpu N
1816              start N workers that exercise the GPU. This specifies a 2d tex‐
1817              ture image that allows the elements of an image array to be read
1818              by shaders, and renders primitives using an OpenGL context.
1819
1820       --gpu-ops N
1821              stop gpu workers after N render loop operations.
1822
1823       --gpu-frag N
1824              specify  shader  core  usage per pixel, this sets N loops in the
1825              fragment shader.
1826
1827       --gpu-tex-size N
1828              specify upload texture NxN, by default this value is 4096x4096.
1829
1830       --gpu-xsize X
1831              use a framebuffer size of X pixels. The default is 256 pixels.
1832
1833       --gpu-ysize Y
1834              use a framebuffer size of Y pixels. The default is 256 pixels.
1835
1836       --gpu-upload N
1837              specify upload texture N times per frame, the default  value  is
1838              1.
1839
1840       --handle N
1841              start  N  workers  that  exercise  the  name_to_handle_at(2) and
1842              open_by_handle_at(2) system calls. (Linux only).
1843
1844       --handle-ops N
1845              stop after N handle bogo operations.
1846
1847       --hash N
1848              start N workers that exercise various hashing functions.  Random
1849              strings  from 1 to 128 bytes are hashed and the hashing rate and
1850              chi squared is calculated from the number  of  hashes  performed
1851              over a period of time. The chi squared value is the goodness-of-
1852              fit measure; it is the actual  distribution  of  items  in  hash
1853              buckets  versus  the expected distribution of items. Typically a
1854              chi squared value of 0.95..1.05 indicates a good hash  distribu‐
1855              tion.
1856
1857       --hash-ops N
1858              stop after N hashing rounds
1859
1860       --hash-method M
1861              specify  the  hashing  method to use, by default all the hashing
1862              methods are cycled through. Methods available are:
1863
1864              Method          Description
1865              all             cycle through all the hashing methods
1866              adler32         Mark Adler checksum, a modification  of
1867                              the Fletcher checksum
1868              coffin          xor and 5 bit rotate left hash
1869              coffin32        xor  and 5 bit rotate left hash with 32
1870                              bit fetch optimization
1871              crc32c          compute CRC32C (Castagnoli CRC32) inte‐
1872                              ger hash
1873              djb2a           Dan  Bernstein hash using the xor vari‐
1874                              ant
1875              fnv1a           FNV-1a Fowler-Noll-Vo  hash  using  the
1876                              xor then multiply variant
1877              jenkin          Jenkin's integer hash
1878              kandr           Kernighan and Ritchie's multiply by  31
1879                              and add hash from  "The  C  Programming
1880                              Language", 2nd Edition
1881              knuth           Donald E. Knuth's hash from "The Art Of
1882                              Computer Programming", Volume 3,  chap‐
1883                              ter 6.4
1884              loselose        Kernighan and Ritchie's simple hash from
1885                              "The C Programming Language", 1st  Edi‐
1886                              tion
1887              mid5            xor  shift hash of the middle 5 charac‐
1888                              ters of the string. Designed  by  Colin
1889                              Ian King
1890              muladd32        simple  multiply  and add hash using 32
1891                              bit math and xor folding of overflow
1892              muladd64        simple multiply and add hash  using  64
1893                              bit math and xor folding of overflow
1894              mulxror64       64  bit multiply, xor and rotate right.
1895                              Mangles 64  bits  where  possible.  De‐
1896                              signed by Colin Ian King
1897              murmur3_32      murmur3_32  hash, Austin Appleby's Mur‐
1898                              mur3 hash, 32 bit variant
1899              nhash           exim's nhash.
1900              pjw             a non-cryptographic hash function  cre‐
1901                              ated  by  Peter  J.  Weinberger of AT&T
1902                              Bell Labs,  used  in  UNIX  ELF  object
1903                              files
1905              sdbm            sdbm  hash as used in the SDBM database
1906                              and GNU awk
1907              x17             multiply by 17 and add. The multiplica‐
1908                              tion  can  be  optimized down to a fast
1909                              right shift by 4 and add on some archi‐
1910                              tectures
1911              xor             simple rotate shift and xor of values
1912              xxhash          the   "Extremely  fast"  hash  in  non-
1913                              streaming mode
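
              An illustrative example running one hash worker  restricted  to
              the crc32c method and stopping after 100000 hashing rounds:

                     stress-ng --hash 1 --hash-method crc32c \
                            --hash-ops 100000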
1914
1915       -d N, --hdd N
1916              start N workers continually writing, reading and removing tempo‐
1917              rary files. The default mode is to stress test sequential writes
1918              and reads.  With the --aggressive  option  enabled  without  any
1919              --hdd-opts  options  the  hdd stressor will work through all the
1920              --hdd-opts options one by one to cover a range of I/O options.
1921
1922       --hdd-bytes N
1923              write N bytes for each hdd process, the default is 1 GB. One can
1924              specify  the  size  as  % of free space on the file system or in
1925              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1926              m or g.
1927
1928       --hdd-opts list
1929              specify  various  stress test options as a comma separated list.
1930              Options are as follows:
1931
1932              Option        Description
1933              direct        try to minimize cache effects of the I/O.  File
1934                            I/O  writes  are  performed  directly from user
1935                            space buffers and synchronous transfer is  also
1936                            attempted.   To guarantee synchronous I/O, also
1937                            use the sync option.
1938              dsync         ensure output has been transferred to  underly‐
1939                            ing hardware and file metadata has been updated
1940                            (using the O_DSYNC open flag). This is  equiva‐
1941                            lent  to each write(2) being followed by a call
1942                            to fdatasync(2). See also the fdatasync option.
1943              fadv-dontneed advise kernel to expect the data  will  not  be
1944                            accessed in the near future.
1945              fadv-noreuse  advise kernel to expect the data to be accessed
1946                            only once.
1947              fadv-normal   advise kernel there is no explicit  access  pat‐
1948                            tern  for  the data. This is the default advice
1949                            assumption.
1950              fadv-rnd      advise kernel to expect random access  patterns
1951                            for the data.
1952              fadv-seq      advise  kernel to expect sequential access pat‐
1953                            terns for the data.
1954              fadv-willneed advise kernel to expect the data to be accessed
1955                            in the near future.
1956              fsync         flush  all  modified  in-core  data  after each
1957                            write to the output device  using  an  explicit
1958                            fsync(2) call.
1959              fdatasync     similar to fsync, but do not flush the modified
1960                            metadata unless metadata is required for  later
1961                            data  reads  to be handled correctly. This uses
1962                            an explicit fdatasync(2) call.
1963              iovec         use readv/writev multiple  buffer  I/Os  rather
1964                            than read/write. Instead of 1 read/write opera‐
1965                            tion, the buffer is broken into an iovec of  16
1966                            buffers.
1967              noatime       do  not  update the file last access timestamp,
1968                            this can reduce metadata writes.
1969              sync          ensure output has been transferred to  underly‐
1970                            ing hardware (using the O_SYNC open flag). This
1971                            is equivalent to each write(2)  being  followed
1972                            by  a  call to fsync(2). See also the fsync op‐
1973                            tion.
1974              rd-rnd        read data randomly. By default, written data is
1975                            not  read back, however, this option will force
1976                            it to be read back randomly.
1977              rd-seq        read data  sequentially.  By  default,  written
1978                            data  is  not  read  back, however, this option
1979                            will force it to be read back sequentially.
1980              syncfs        write all buffered modifications of file  meta‐
1981                            data  and  data on the filesystem that contains
1982                            the hdd worker files.
1984              utimes        force update of file timestamp  which  may  in‐
1985                            crease metadata writes.
1986              wr-rnd        write  data  randomly. The wr-seq option cannot
1987                            be used at the same time.
1988              wr-seq        write data sequentially. This is the default if
1989                            no write modes are specified.
1990
1991       Note  that  some  of these options are mutually exclusive, for example,
1992       there can be only one method of  writing  or  reading.   Also,  fadvise
1993       flags  may  be  mutually exclusive, for example fadv-willneed cannot be
1994       used with fadv-dontneed.
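
       For example, an illustrative invocation running two hdd  workers  with
       O_DSYNC writes and random re-reads, writing 512 MB per worker  for  60
       seconds (via the general -t timeout option):

              stress-ng --hdd 2 --hdd-opts dsync,rd-rnd --hdd-bytes 512m -t 60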
1995
1996       --hdd-ops N
1997              stop hdd stress workers after N bogo operations.
1998
1999       --hdd-write-size N
2000              specify size of each write in bytes. Size can be from 1 byte  to
2001              4MB.
2002
2003       --heapsort N
2004              start  N  workers  that sort 32 bit integers using the BSD heap‐
2005              sort.
2006
2007       --heapsort-ops N
2008              stop heapsort stress workers after N bogo heapsorts.
2009
2010       --heapsort-size N
2011              specify number of 32 bit integers to  sort,  default  is  262144
2012              (256 × 1024).
2013
2014       --hrtimers N
2015              start N workers that exercise high resolution timers at a  high
2016              frequency. Each stressor starts 32 processes that run with  ran‐
2017              dom  timer  intervals  of  0..499999  nanoseconds.  Running this
2018              stressor with appropriate privilege  will  run  these  with  the
2019              SCHED_RR policy.
2020
2021       --hrtimers-ops N
2022              stop hrtimers stressors after N timer event bogo operations
2023
2024       --hrtimers-adjust
2025              enable  automatic  timer  rate adjustment to try to maximize the
2026              hrtimer frequency.  The signal rate is measured every  0.1  sec‐
2027              onds and the hrtimer delay is adjusted  towards  the  optimal
2028              delay that generates the highest hrtimer rates.
2029
2030       --hsearch N
2031              start N workers that search an  80%  full  hash  table  using
2032              hsearch(3).  By  default,  there are 8192 elements inserted into
2033              the hash table.  This is a useful method to exercise  access  of
2034              memory and processor cache.
2035
2036       --hsearch-ops N
2037              stop  the  hsearch  workers  after N bogo hsearch operations are
2038              completed.
2039
2040       --hsearch-size N
2041              specify the number of hash entries to be inserted into the  hash
2042              table. Size can be from 1K to 4M.
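
              An illustrative example running two hsearch  workers  with  a
              larger table of 65536 entries for 30 seconds (via the  general
              -t timeout option):

                     stress-ng --hsearch 2 --hsearch-size 65536 -t 30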
2043
2044       --icache N
2045              start N workers that stress the instruction cache by forcing in‐
2046              struction cache reloads.  This is achieved by modifying  an  in‐
2047              struction  cache  line,  causing the processor to reload it when
2048              we call a function inside it. Currently only verified  and  en‐
2049              abled for Intel x86 CPUs.
2050
2051       --icache-ops N
2052              stop  the icache workers after N bogo icache operations are com‐
2053              pleted.
2054
2055       --icmp-flood N
2056              start N workers that flood localhost with  randomly  sized  ICMP
2057              ping packets.  This stressor requires the CAP_NET_RAW capability.
2058
2059       --icmp-flood-ops N
2060              stop  icmp  flood  workers  after  N ICMP ping packets have been
2061              sent.
2062
2063       --idle-scan N
2064              start N workers that scan the idle page bitmap across a range of
2065              physical pages. This sets and checks for idle pages via the idle
2066              page tracking interface  /sys/kernel/mm/page_idle/bitmap.   This
2067              is for Linux only.
2068
2069       --idle-scan-ops N
2070              stop  after N bogo page scan operations. Currently one bogo page
2071              scan operation is equivalent to setting and checking 64 physical
2072              pages.
2073
2074       --idle-page N
2075              start N workers that walk  through  every  page  exercising  the
2076              Linux   /sys/kernel/mm/page_idle/bitmap   interface.    Requires
2077              CAP_SYS_RESOURCE capability.
2078
2079       --idle-page-ops N
2080              stop after N bogo idle page operations.
2081
2082       --inode-flags N
2083              start  N workers that exercise inode flags using the FS_IOC_GET‐
2084              FLAGS and FS_IOC_SETFLAGS ioctl(2). This attempts to  apply  all
2085              the  available inode flags onto a directory and file even if the
2086              underlying file system may not support these flags  (errors  are
2087              just  ignored).   Each  worker  runs 4 threads that exercise the
2088              flags on the same directory and file to try to force races. This
2089              is a Linux only stressor, see ioctl_iflags(2) for more details.
2090
2091       --inode-flags-ops N
2092              stop  the  inode-flags  workers  after  N ioctl flag setting at‐
2093              tempts.
2094
2095       --inotify N
2096              start N workers performing file system activities such  as  mak‐
2097              ing/deleting files/directories, moving files, etc. to stress ex‐
2098              ercise the various inotify events (Linux only).
2099
2100       --inotify-ops N
2101              stop inotify stress workers after N inotify bogo operations.
2102
2103       -i N, --io N
2104              start N workers continuously calling sync(2)  to  commit  buffer
2105              cache  to  disk.  This can be used in conjunction with the --hdd
2106              options.
2107
2108       --io-ops N
2109              stop io stress workers after N bogo operations.
2110
2111       --iomix N
2112              start N workers that perform a mix  of  sequential,  random  and
2113              memory  mapped read/write operations as well as random copy file
2114              read/writes, forced sync'ing and (if run as  root)  cache  drop‐
2115              ping.   Multiple child processes are spawned to all share a sin‐
2116              gle file and perform different I/O operations on the same file.
2117
2118       --iomix-bytes N
2119              write N bytes for each iomix worker process, the  default  is  1
2120              GB. One can specify the size as % of free space on the file sys‐
2121              tem or in units of Bytes, KBytes, MBytes and  GBytes  using  the
2122              suffix b, k, m or g.
2123
2124       --iomix-ops N
2125              stop iomix stress workers after N bogo iomix I/O operations.
2126
2127       --ioport N
              start N workers that perform bursts of 16 reads and 16 writes of
2129              ioport 0x80 (x86 Linux systems  only).   I/O  performed  on  x86
2130              platforms  on  port 0x80 will cause delays on the CPU performing
2131              the I/O.
2132
2133       --ioport-ops N
              stop the ioport stressors after N bogo I/O operations.
2135
       --ioport-opts [ in | out | inout ]
              specify whether port reads (in), port writes (out), or both
              reads and writes (inout) are to be performed. The default is
              both in and out.
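
       A minimal sketch of the underlying port I/O, assuming an x86 Linux
       system, root privilege and the glibc <sys/io.h> wrappers; the burst
       of 16 reads and writes mirrors the description above:

          #include <stdio.h>
          #include <sys/io.h>

          int main(void)
          {
              if (ioperm(0x80, 1, 1) < 0) {      /* gain access to port 0x80 */
                  perror("ioperm");
                  return 1;
              }
              for (int i = 0; i < 16; i++) {
                  unsigned char v = inb(0x80);   /* read diagnostic port */
                  outb(v, 0x80);       /* write back, causes an I/O delay */
              }
              return 0;
          }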
2139
2140       --ioprio N
2141              start   N  workers  that  exercise  the  ioprio_get(2)  and  io‐
2142              prio_set(2) system calls (Linux only).
2143
2144       --ioprio-ops N
2145              stop after N io priority bogo operations.
2146
2147       --io-uring N
2148              start N workers that perform iovec write and read I/O operations
              using the Linux io-uring interface. On each bogo-loop 1024 ×
              512 byte writes and 1024 × 512 byte reads are performed on a
              temporary file.
2151
       --io-uring-ops N
              stop after N rounds of writes and reads.
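
       stress-ng drives the raw io_uring system calls directly; for brevity
       the following sketch uses the liburing helper library instead (an
       assumption: liburing installed, link with -luring) and an
       illustrative temporary file name to queue a single 512 byte write
       and reap its completion:

          #include <fcntl.h>
          #include <liburing.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          int main(void)
          {
              static char block[512];
              struct io_uring ring;
              struct io_uring_sqe *sqe;
              struct io_uring_cqe *cqe;
              int fd = open("/tmp/uring-example", O_CREAT | O_RDWR, 0600);

              if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
                  return 1;
              memset(block, 0xa5, sizeof(block));
              sqe = io_uring_get_sqe(&ring);             /* queue one write */
              io_uring_prep_write(sqe, fd, block, sizeof(block), 0);
              io_uring_submit(&ring);
              if (io_uring_wait_cqe(&ring, &cqe) == 0) { /* reap completion */
                  printf("write returned %d\n", cqe->res);
                  io_uring_cqe_seen(&ring, cqe);
              }
              io_uring_queue_exit(&ring);
              (void)close(fd);
              (void)unlink("/tmp/uring-example");
              return 0;
          }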
2154
2155       --ipsec-mb N
2156              start N workers that perform cryptographic processing using  the
2157              highly  optimized  Intel  Multi-Buffer Crypto for IPsec library.
              Depending on the features available, SSE, AVX, AVX2 and AVX512
2159              CPU  features  will be used on data encrypted by SHA, DES, CMAC,
2160              CTR, HMAC MD5, HMAC SHA1 and HMAC SHA512 cryptographic routines.
              This is only available for modern x86-64 Intel CPUs.
2162
2163       --ipsec-mb-ops N
2164              stop  after  N  rounds  of  processing of data using the crypto‐
2165              graphic routines.
2166
2167       --ipsec-mb-feature [ sse | avx | avx2 | avx512 ]
              use only the specified CPU feature. By default, all the
              available features for the CPU are exercised.
2170
2171       --itimer N
2172              start  N  workers that exercise the system interval timers. This
2173              sets up an ITIMER_PROF itimer that generates a  SIGPROF  signal.
2174              The  default  frequency  for  the  itimer is 1 MHz, however, the
              Linux kernel will set this to be no more than the jiffy setting,
2176              hence  high frequency SIGPROF signals are not normally possible.
2177              A busy loop spins on getitimer(2) calls to consume CPU and hence
              decrement the itimer based on the amount of time spent in user
              and system time.
2180
2181       --itimer-ops N
2182              stop itimer stress workers after N bogo itimer SIGPROF signals.
2183
2184       --itimer-freq F
2185              run itimer at F Hz; range from 1 to  1000000  Hz.  Normally  the
2186              highest  frequency  is  limited by the number of jiffy ticks per
2187              second, so running above 1000 Hz is difficult to attain in prac‐
2188              tice.
2189
2190       --itimer-rand
2191              select  an  interval  timer  frequency based around the interval
2192              timer frequency +/- 12.5% random jitter.  This  tries  to  force
2193              more  variability  in  the timer interval to make the scheduling
2194              less predictable.
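
       A minimal sketch of the interval timer mechanism being exercised
       (the 100 Hz rate and tick count are arbitrary illustrative choices):
       arm an ITIMER_PROF timer, busy loop on getitimer(2) so that CPU time
       is consumed, and count the SIGPROF deliveries:

          #include <signal.h>
          #include <stdio.h>
          #include <sys/time.h>

          static volatile sig_atomic_t ticks;

          static void handler(int sig)
          {
              (void)sig;
              ticks++;
          }

          int main(void)
          {
              struct itimerval it = {
                  .it_interval = { .tv_sec = 0, .tv_usec = 10000 },  /* 100 Hz */
                  .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
              };
              struct itimerval cur;

              signal(SIGPROF, handler);
              setitimer(ITIMER_PROF, &it, NULL);
              while (ticks < 100)                 /* busy loop burns CPU time */
                  getitimer(ITIMER_PROF, &cur);   /* so ITIMER_PROF decrements */
              printf("received %d SIGPROF signals\n", (int)ticks);
              return 0;
          }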
2195
2196       --jpeg N
2197              start N workers that use jpeg compression on a machine generated
2198              plasma field image. The default image is a plasma field, however
2199              different image types may be selected. The starting raster  line
2200              is  changed  on  each  compression iteration to cycle around the
2201              data.
2202
2203       --jpeg-ops N
2204              stop after N jpeg compression operations.
2205
2206       --jpeg-height H
              use an RGB sample image height of H pixels. The default is 512
2208              pixels.
2209
2210       --jpeg-image [ brown | flat | gradient | noise | plasma | xstripes ]
2211              select  the  source image type to be compressed. Available image
2212              types are:
2213
2214              Type           Description
2215              brown          brown noise, red and green values  vary
2216                             by a 3 bit value, blue values vary by a
2217                             2 bit value.
2218              flat           a single random colour for  the  entire
2219                             image.
2223              gradient       linear  gradient  of the red, green and
2224                             blue components across  the  width  and
2225                             height of the image.
2226              noise          random white noise for red, green, blue
2227                             values.
2228              plasma         plasma field with smooth colour transi‐
2229                             tions and hard boundary edges.
2230              xstripes       a  random  colour  for  each horizontal
2231                             line.
2232
       --jpeg-width W
              use an RGB sample image width of W pixels. The default is 512
              pixels.
2236
2237       --jpeg-quality Q
2238              use  the  compression  quality Q. The range is 1..100 (1 lowest,
              100 highest), with a default of 95.
2240
2241       --judy N
2242              start N workers that insert, search and delete 32  bit  integers
2243              in  a  Judy array using a predictable yet sparse array index. By
2244              default, there are 131072 integers used in the Judy array.  This
2245              is  a useful method to exercise random access of memory and pro‐
2246              cessor cache.
2247
2248       --judy-ops N
2249              stop the judy workers after N  bogo  judy  operations  are  com‐
2250              pleted.
2251
2252       --judy-size N
2253              specify  the  size (number of 32 bit integers) in the Judy array
2254              to exercise.  Size can be from 1K to 4M 32 bit integers.
2255
2256       --kcmp N
2257              start N workers that use kcmp(2) to  compare  parent  and  child
2258              processes to determine if they share kernel resources. Supported
2259              only for Linux and requires CAP_SYS_PTRACE capability.
2260
2261       --kcmp-ops N
2262              stop kcmp workers after N bogo kcmp operations.
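
       A minimal sketch of a kcmp(2) call (Linux only; there is no glibc
       wrapper so syscall(2) is used, and ptrace access to the target is
       required): check whether a parent and its child share the open file
       description behind file descriptor 0:

          #define _GNU_SOURCE
          #include <linux/kcmp.h>
          #include <signal.h>
          #include <stdio.h>
          #include <sys/syscall.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              pid_t child = fork();

              if (child == 0) {
                  pause();            /* child just waits to be killed */
                  _exit(0);
              }
              /* 0 means both processes share the same struct file for fd 0 */
              printf("kcmp KCMP_FILE on fd 0: %ld\n",
                     syscall(SYS_kcmp, getpid(), child, KCMP_FILE, 0, 0));
              kill(child, SIGKILL);
              (void)waitpid(child, NULL, 0);
              return 0;
          }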
2263
2264       --key N
2265              start N workers that create and manipulate keys using add_key(2)
              and keyctl(2). As many keys are created as the per user limit
2267              allows and then the following keyctl commands are  exercised  on
2268              each  key:  KEYCTL_SET_TIMEOUT,  KEYCTL_DESCRIBE, KEYCTL_UPDATE,
2269              KEYCTL_READ, KEYCTL_CLEAR and KEYCTL_INVALIDATE.
2270
2271       --key-ops N
2272              stop key workers after N bogo key operations.
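
       A minimal sketch of the add_key(2)/keyctl(2) pattern being stressed,
       assuming the libkeyutils library is available (link with
       -lkeyutils); the key description and payload are illustrative:

          #include <keyutils.h>
          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
              char buf[32];
              key_serial_t key = add_key("user", "stress-ng-example",
                                         "payload", strlen("payload"),
                                         KEY_SPEC_PROCESS_KEYRING);

              if (key < 0) {
                  perror("add_key");
                  return 1;
              }
              (void)keyctl(KEYCTL_SET_TIMEOUT, key, 60);    /* expire in 60s */
              printf("KEYCTL_READ returned %ld payload bytes\n",
                     keyctl(KEYCTL_READ, key, buf, sizeof(buf)));
              (void)keyctl(KEYCTL_INVALIDATE, key);
              return 0;
          }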
2273
2274       --kill N
2275              start N workers sending SIGUSR1 kill signals to a SIG_IGN signal
2276              handler  in  the  stressor  and  SIGUSR1  kill signal to a child
2277              stressor with a SIGUSR1 handler. Most of the process  time  will
2278              end up in kernel space.
2279
2280       --kill-ops N
2281              stop kill workers after N bogo kill operations.
2282
2283       --klog N
2284              start  N  workers  exercising  the kernel syslog(2) system call.
2285              This will attempt to read the kernel log with various sized read
2286              buffers. Linux only.
2287
2288       --klog-ops N
2289              stop klog workers after N syslog operations.
2290
2291       --kvm N
2292              start  N  workers that create, run and destroy a minimal virtual
2293              machine. The virtual machine reads,  increments  and  writes  to
2294              port 0x80 in a spin loop and the stressor handles the I/O trans‐
2295              actions. Currently for x86 and Linux only.
2296
2297       --kvm-ops N
2298              stop kvm stressors after N virtual machines have  been  created,
2299              run and destroyed.
2300
2301       --l1cache N
2302              start  N  workers that exercise the CPU level 1 cache with reads
2303              and writes. A cache aligned buffer that is  twice  the  level  1
2304              cache  size  is read and then written in level 1 cache set sized
2305              steps over each level 1 cache set. This is designed to  exercise
2306              cache  block evictions. The bogo-op count measures the number of
2307              million cache lines touched.  Where possible, the level 1  cache
2308              geometry  is  determined  from  the kernel, however, this is not
2309              possible on some architectures or kernels, so one  may  need  to
2310              specify  these  manually.  One  can specify 3 out of the 4 cache
              geometric parameters; these are as follows:
2312
2313       --l1cache-line-size N
2314              specify the level 1 cache line size (in bytes)
2315
2316       --l1cache-sets N
2317              specify the number of level 1 cache sets
2318
2319       --l1cache-size N
2320              specify the level 1 cache size (in bytes)
2321
2322       --l1cache-ways N
2323              specify the number of level 1 cache ways
2324
2325       --landlock N
2326              start N workers that exercise Linux 5.13 landlocking. A range of
2327              landlock_create_ruleset  flags  are  exercised  with a read only
2328              file rule to see if a directory can be accessed and a read-write
2329              file create can be blocked. Each ruleset attempt is exercised in
2330              a new child context and this is the limiting factor on the speed
2331              of the stressor.
2332
2333       --landlock-ops N
2334              stop the landlock stressors after N landlock ruleset bogo opera‐
2335              tions.
2336
2337       --lease N
2338              start N workers locking, unlocking and breaking leases  via  the
2339              fcntl(2)  F_SETLEASE operation. The parent processes continually
2340              lock and unlock a lease on a file while a user selectable number
2341              of  child  processes  open  the file with a non-blocking open to
2342              generate SIGIO lease breaking notifications to the parent.  This
2343              stressor  is  only  available if F_SETLEASE, F_WRLCK and F_UNLCK
2344              support is provided by fcntl(2).
2345
2346       --lease-ops N
2347              stop lease workers after N bogo operations.
2348
2349       --lease-breakers N
2350              start N lease breaker child processes per  lease  worker.   Nor‐
2351              mally one child is plenty to force many SIGIO lease breaking no‐
2352              tification signals to the parent, however,  this  option  allows
2353              one to specify more child processes if required.
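
       A minimal sketch of taking and releasing a write lease with fcntl(2)
       F_SETLEASE (Linux only; the process must own the file and the
       temporary file name is illustrative). An open(2) of the file by
       another process would break the lease and deliver SIGIO here:

          #define _GNU_SOURCE
          #include <fcntl.h>
          #include <signal.h>
          #include <stdio.h>
          #include <unistd.h>

          static volatile sig_atomic_t lease_broken;

          static void on_sigio(int sig)
          {
              (void)sig;
              lease_broken = 1;       /* we must now release the lease */
          }

          int main(void)
          {
              int fd = open("/tmp/lease-example", O_CREAT | O_RDWR, 0600);

              if (fd < 0) {
                  perror("open");
                  return 1;
              }
              signal(SIGIO, on_sigio);
              if (fcntl(fd, F_SETLEASE, F_WRLCK) < 0)  /* take a write lease */
                  perror("F_SETLEASE F_WRLCK");
              /* ... an open(2) by another process now raises SIGIO ... */
              (void)fcntl(fd, F_SETLEASE, F_UNLCK);    /* release the lease */
              (void)close(fd);
              (void)unlink("/tmp/lease-example");
              return 0;
          }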
2354
2355       --link N
2356              start N workers creating and removing hardlinks.
2357
2358       --link-ops N
2359              stop link stress workers after N bogo operations.
2360
2361       --list N
2362              start  N workers that exercise list data structures. The default
2363              is to add, find and remove 5,000 64 bit  integers  into  circleq
2364              (doubly  linked  circle queue), list (doubly linked list), slist
2365              (singly linked list), slistt (singly linked  list  using  tail),
2366              stailq  (singly linked tail queue) and tailq (doubly linked tail
2367              queue) lists. The intention of this stressor is to exercise mem‐
2368              ory and cache with the various list operations.
2369
2370       --list-ops N
              stop list stressors after N bogo ops. A bogo op covers adding,
              finding and removing all the items in the list(s).
2373
2374       --list-size N
2375              specify the size of the list, where N is the number  of  64  bit
2376              integers to be added into the list.
2377
2378       --list-method [ all | circleq | list | slist | stailq | tailq ]
2379              specify  the  list  to be used. By default, all the list methods
2380              are used (the 'all' option).
2381
2382       --loadavg N
2383              start N workers that attempt to  create  thousands  of  pthreads
2384              that run at the lowest nice priority to force very high load av‐
2385              erages. Linux systems will also perform some I/O writes as pend‐
2386              ing I/O is also factored into system load accounting.
2387
2388       --loadavg-ops N
2389              stop  loadavg  workers  after  N  bogo  scheduling yields by the
2390              pthreads have been reached.
2391
2392       --lockbus N
2393              start N workers that rapidly lock and increment 64 bytes of ran‐
2394              domly chosen memory from a 16MB mmap'd region (Intel x86 and ARM
2395              CPUs only).  This will cause cacheline misses  and  stalling  of
2396              CPUs.
2397
2398       --lockbus-ops N
2399              stop lockbus workers after N bogo operations.
2400
2401       --locka N
2402              start  N workers that randomly lock and unlock regions of a file
2403              using  the  POSIX  advisory  locking  mechanism  (see  fcntl(2),
2404              F_SETLK,  F_GETLK).  Each  worker creates a 1024 KB file and at‐
2405              tempts to hold a maximum of 1024 concurrent locks with  a  child
2406              process that also tries to hold 1024 concurrent locks. Old locks
              are unlocked on a first-in, first-out basis.
2408
2409       --locka-ops N
2410              stop locka workers after N bogo locka operations.
2411
2412       --lockf N
2413              start N workers that randomly lock and unlock regions of a  file
2414              using  the POSIX lockf(3) locking mechanism. Each worker creates
2415              a 64 KB file and attempts to hold a maximum of  1024  concurrent
2416              locks  with a child process that also tries to hold 1024 concur‐
              rent locks. Old locks are unlocked on a first-in, first-out
              basis.
2419
2420       --lockf-ops N
2421              stop lockf workers after N bogo lockf operations.
2422
2423       --lockf-nonblock
2424              instead  of  using  blocking  F_LOCK lockf(3) commands, use non-
2425              blocking F_TLOCK commands and re-try if the lock  failed.   This
2426              creates  extra  system  call overhead and CPU utilisation as the
2427              number of lockf workers increases and  should  increase  locking
2428              contention.
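
       A minimal sketch of the lockf(3) locking pattern, using the
       non-blocking F_TLOCK command described above; the temporary file
       name and the 64 byte region are illustrative:

          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              int fd = open("/tmp/lockf-example", O_CREAT | O_RDWR, 0600);

              if (fd < 0) {
                  perror("open");
                  return 1;
              }
              if (lockf(fd, F_TLOCK, 64) == 0) {   /* try-lock bytes 0..63 */
                  /* region is locked: do some work here ... */
                  (void)lockf(fd, F_ULOCK, 64);    /* unlock the region */
              } else {
                  perror("lockf F_TLOCK");
              }
              (void)close(fd);
              (void)unlink("/tmp/lockf-example");
              return 0;
          }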
2429
2430       --lockofd N
2431              start  N workers that randomly lock and unlock regions of a file
2432              using the Linux  open  file  description  locks  (see  fcntl(2),
2433              F_OFD_SETLK,  F_OFD_GETLK).   Each worker creates a 1024 KB file
2434              and attempts to hold a maximum of 1024 concurrent locks  with  a
2435              child process that also tries to hold 1024 concurrent locks. Old
              locks are unlocked on a first-in, first-out basis.
2437
2438       --lockofd-ops N
2439              stop lockofd workers after N bogo lockofd operations.
2440
2441       --longjmp N
2442              start N workers  that  exercise  setjmp(3)/longjmp(3)  by  rapid
2443              looping on longjmp calls.
2444
2445       --longjmp-ops N
2446              stop  longjmp  stress workers after N bogo longjmp operations (1
2447              bogo op is 1000 longjmp calls).
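
       A minimal sketch of the rapid setjmp(3)/longjmp(3) looping described
       above; the 1000 iteration count mirrors the bogo op definition:

          #include <setjmp.h>
          #include <stdio.h>

          static jmp_buf env;

          int main(void)
          {
              volatile int count = 0;     /* modified across longjmp calls */

              (void)setjmp(env);          /* execution resumes here */
              if (++count < 1000)
                  longjmp(env, 1);        /* jump back to the setjmp point */
              printf("looped %d times via longjmp\n", count);
              return 0;
          }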
2448
2449       --loop N
2450              start N workers that exercise the loopback control device.  This
2451              creates 2MB loopback devices, expands them to 4MB, performs some
2452              loopback status information get  and  set  operations  and  then
              destroys them. Linux only and requires CAP_SYS_ADMIN capability.
2454
2455       --loop-ops N
2456              stop after N bogo loopback creation/deletion operations.
2457
2458       --lsearch N
              start N workers that linearly search an unsorted array of 32 bit
2460              integers using lsearch(3). By default, there are  8192  elements
2461              in  the  array.   This is a useful method to exercise sequential
2462              access of memory and processor cache.
2463
2464       --lsearch-ops N
2465              stop the lsearch workers after N  bogo  lsearch  operations  are
2466              completed.
2467
2468       --lsearch-size N
2469              specify  the  size  (number  of 32 bit integers) in the array to
2470              lsearch. Size can be from 1K to 4M.
2471
2472       --madvise N
              start N workers that apply random madvise(2) advice settings on
2474              pages of a 4MB file backed shared memory mapping.
2475
2476       --madvise-ops N
2477              stop madvise stressors after N bogo madvise operations.
2478
2479       --malloc N
2480              start N workers continuously calling malloc(3), calloc(3), real‐
2481              loc(3) and free(3). By default, up to 65536 allocations  can  be
2482              active  at  any  point,  but this can be altered with the --mal‐
2483              loc-max option.  Allocation, reallocation and freeing are chosen
              at random; 50% of the time memory is allocated (via malloc,
              calloc or realloc) and 50% of the time allocations are freed.
2486              Allocation  sizes  are  also random, with the maximum allocation
2487              size controlled by the --malloc-bytes option, the  default  size
2488              being  64K.  The worker is re-started if it is killed by the out
2489              of memory (OOM) killer.
2490
2491       --malloc-bytes N
2492              maximum per allocation/reallocation size. Allocations  are  ran‐
2493              domly  selected from 1 to N bytes. One can specify the size as %
2494              of total available memory or in units of Bytes,  KBytes,  MBytes
2495              and  GBytes  using  the  suffix  b, k, m or g.  Large allocation
2496              sizes cause the memory allocator to use mmap(2) rather than  ex‐
2497              panding the heap using brk(2).
2498
2499       --malloc-max N
2500              maximum  number  of  active allocations allowed. Allocations are
              chosen at random and placed in an allocation slot. Because
              there is roughly a 50%/50% split between allocation and freeing,
              typically half of the allocation slots are in use at any one
              time.
2504
2505       --malloc-ops N
              stop after N malloc bogo operations. One bogo operation relates
2507              to a successful malloc(3), calloc(3) or realloc(3).
2508
2509       --malloc-pthreads N
2510              specify  number  of malloc stressing concurrent pthreads to run.
2511              The default is 0 (just one main process, no pthreads). This  op‐
2512              tion will do nothing if pthreads are not supported.
2513
2514       --malloc-thresh N
2515              specify  the  threshold  where  malloc  uses  mmap(2) instead of
2516              sbrk(2) to allocate more memory. This is only available on  sys‐
2517              tems that provide the GNU C mallopt(3) tuning function.
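
       A minimal sketch of the underlying mallopt(3) tuning (GNU C library
       only; the 64K threshold and allocation sizes are illustrative):
       requests at or above the threshold are serviced by mmap(2), smaller
       ones by the heap:

          #include <malloc.h>
          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              /* requests of 64K or more will now be serviced by mmap(2) */
              if (mallopt(M_MMAP_THRESHOLD, 64 * 1024) == 0)
                  fprintf(stderr, "mallopt failed\n");
              void *big = malloc(1024 * 1024);   /* 1MB: mmap backed */
              void *small = malloc(1024);        /* 1K: heap (brk) backed */
              free(big);
              free(small);
              return 0;
          }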
2518
2519       --malloc-touch
2520              touch  every  allocated  page  to force pages to be populated in
2521              memory. This will increase the memory pressure and exercise  the
2522              virtual  memory harder. By default the malloc stressor will mad‐
2523              vise pages into memory or use mincore to check for  non-resident
2524              memory  pages and try to force them into memory; this option ag‐
2525              gressively forces pages to be memory resident.
2526
2527       --matrix N
2528              start N workers that perform various matrix operations on float‐
2529              ing point values. Testing on 64 bit x86 hardware shows that this
2530              provides a good mix of memory, cache and floating  point  opera‐
2531              tions and is an excellent way to make a CPU run hot.
2532
2533              By default, this will exercise all the matrix stress methods one
2534              by one.  One can specify a specific matrix  stress  method  with
2535              the --matrix-method option.
2536
2537       --matrix-ops N
2538              stop matrix stress workers after N bogo operations.
2539
2540       --matrix-method method
2541              specify  a matrix stress method. Available matrix stress methods
2542              are described as follows:
2543
2544              Method     Description
2545              all        iterate over all the below matrix stress  meth‐
2546                         ods
2547              add        add two N × N matrices
2549              copy       copy one N × N matrix to another
2550              div        divide an N × N matrix by a scalar
2551              frobenius  Frobenius product of two N × N matrices
2552              hadamard   Hadamard product of two N × N matrices
2553              identity   create an N × N identity matrix
2554              mean       arithmetic mean of two N × N matrices
2555              mult       multiply an N × N matrix by a scalar
2556              negate     negate an N × N matrix
2557              prod       product of two N × N matrices
2558              sub        subtract  one  N  × N matrix from another N × N
2559                         matrix
2560              square     multiply an N × N matrix by itself
2561              trans      transpose an N × N matrix
2562              zero       zero an N × N matrix
2563
2564       --matrix-size N
2565              specify the N × N size of the matrices.  Smaller  values  result
              in a floating point compute throughput bound stressor, whereas
2567              large values result in a cache  and/or  memory  bandwidth  bound
2568              stressor.
2569
2570       --matrix-yx
2571              perform  matrix  operations  in order y by x rather than the de‐
2572              fault x by y. This is suboptimal ordering compared  to  the  de‐
2573              fault and will perform more data cache stalls.
2574
2575       --matrix-3d N
2576              start  N  workers  that  perform various 3D matrix operations on
2577              floating point values. Testing on 64 bit x86 hardware shows that
2578              this provides a good mix of memory, cache and floating point op‐
2579              erations and is an excellent way to make a CPU run hot.
2580
2581              By default, this will exercise all the 3D matrix stress  methods
2582              one  by one.  One can specify a specific 3D matrix stress method
2583              with the --matrix-3d-method option.
2584
2585       --matrix-3d-ops N
2586              stop the 3D matrix stress workers after N bogo operations.
2587
2588       --matrix-3d-method method
2589              specify a 3D matrix stress method. Available  3D  matrix  stress
2590              methods are described as follows:
2591
2592              Method     Description
2593              all        iterate  over all the below matrix stress meth‐
2594                         ods
2595              add        add two N × N × N matrices
2596              copy       copy one N × N × N matrix to another
2597              div        divide an N × N × N matrix by a scalar
2598              frobenius  Frobenius product of two N × N × N matrices
2599              hadamard   Hadamard product of two N × N × N matrices
2600              identity   create an N × N × N identity matrix
2601              mean       arithmetic mean of two N × N × N matrices
2602              mult       multiply an N × N × N matrix by a scalar
2603              negate     negate an N × N × N matrix
2604              sub        subtract one N × N × N matrix from another N  ×
2605                         N × N matrix
2606              trans      transpose an N × N × N matrix
2607              zero       zero an N × N × N matrix
2608
2609       --matrix-3d-size N
2610              specify  the N × N × N size of the matrices.  Smaller values re‐
2611              sult in a floating  point  compute  throughput  bound  stressor,
              whereas large values result in a cache and/or memory bandwidth
2613              bound stressor.
2614
2615       --matrix-3d-zyx
2616              perform matrix operations in order z by y by x rather  than  the
2617              default x by y by z. This is suboptimal ordering compared to the
2618              default and will perform more data cache stalls.
2619
2620       --mcontend N
2621              start N workers that produce memory contention  read/write  pat‐
2622              terns.  Each stressor runs with 5 threads that read and write to
2623              two different mappings of the  same  underlying  physical  page.
2624              Various caching operations are also exercised to cause sub-opti‐
2625              mal memory access patterns.  The threads  also  randomly  change
2626              CPU affinity to exercise CPU and memory migration stress.
2627
2628       --mcontend-ops N
2629              stop mcontend stressors after N bogo read/write operations.
2630
2631       --membarrier N
2632              start  N workers that exercise the membarrier system call (Linux
2633              only).
2634
2635       --membarrier-ops N
2636              stop membarrier stress workers after N  bogo  membarrier  opera‐
2637              tions.
2638
2639       --memcpy N
2640              start  N workers that copy 2MB of data from a shared region to a
2641              buffer using memcpy(3) and then move the data in the buffer with
2642              memmove(3)  with 3 different alignments. This will exercise pro‐
2643              cessor cache and system memory.
2644
2645       --memcpy-ops N
2646              stop memcpy stress workers after N bogo memcpy operations.
2647
2648       --memcpy-method [ all | libc | builtin | naive ]
2649              specify a memcpy copying method. Available  memcpy  methods  are
2650              described as follows:
2651
2652              Method    Description
2653              all       use libc, builtin and naïve methods
2654              libc      use  libc memcpy and memmove functions, this is
2655                        the default
2656              builtin   use the compiler built in optimized memcpy  and
2657                        memmove functions
2658              naive     use  naïve byte by byte copying and memory mov‐
                        ing built with default compiler optimization
2660                        flags
2661              naive_o0  use  unoptimized naïve byte by byte copying and
2662                        memory moving
2663              naive_o3  use optimized naïve byte by  byte  copying  and
                        memory moving built with -O3 optimization and
2665                        where possible use CPU specific optimizations
2666
2667       --memfd N
2668              start N workers that create  allocations  of  1024  pages  using
2669              memfd_create(2)  and  ftruncate(2) for allocation and mmap(2) to
2670              map the allocation  into  the  process  address  space.   (Linux
2671              only).
2672
2673       --memfd-bytes N
2674              allocate  N bytes per memfd stress worker, the default is 256MB.
              One can specify the size as % of total available memory or in
2676              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2677              m or g.
2678
2679       --memfd-fds N
2680              create N memfd file descriptors, the default is 256. One can se‐
              lect 8 to 4096 memfd file descriptors with this option.
2682
2683       --memfd-ops N
              stop after N memfd_create(2) bogo operations.
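
       A minimal sketch of the memfd_create(2), ftruncate(2) and mmap(2)
       sequence being stressed (Linux only; glibc 2.27 or later provides
       the memfd_create wrapper; the 1024 page size mirrors the description
       above):

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              size_t size = 1024 * (size_t)sysconf(_SC_PAGESIZE);
              int fd = memfd_create("memfd-example", MFD_CLOEXEC);

              if (fd < 0) {
                  perror("memfd_create");
                  return 1;
              }
              if (ftruncate(fd, (off_t)size) < 0) {
                  perror("ftruncate");
                  return 1;
              }
              void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              memset(p, 0xff, size);      /* touch all 1024 pages */
              (void)munmap(p, size);
              (void)close(fd);
              return 0;
          }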
2685
2686       --memhotplug N
2687              start  N workers that offline and online memory hotplug regions.
2688              Linux only and requires CAP_SYS_ADMIN capabilities.
2689
2690       --memhotplug-ops N
2691              stop memhotplug stressors after N memory offline and online bogo
2692              operations.
2693
2694       --memrate N
2695              start N workers that exercise a buffer with 1024, 512, 256, 128,
2696              64, 32, 16 and 8 bit reads and writes. 1024, 512 and  256  reads
2697              and  writes  are  available  with compilers that support integer
2698              vectors.  x86-64 cpus that support uncached (non-temporal  "nt")
              writes also exercise 128, 64 and 32 bit writes providing higher
              write rates than the normal cached writes. CPUs that support
              prefetching reads also exercise 64 bit prefetched "pf" reads.  This
2702              memory stressor allows one to also specify the maximum read  and
2703              write  rates. The stressors will run at maximum speed if no read
2704              or write rates are specified.
2705
2706       --memrate-ops N
2707              stop after N bogo memrate operations.
2708
2709       --memrate-bytes N
2710              specify the size of the memory buffer being exercised.  The  de‐
2711              fault size is 256MB. One can specify the size in units of Bytes,
2712              KBytes, MBytes and GBytes using the suffix b, k, m or g.
2713
2714       --memrate-rd-mbs N
2715              specify the maximum allowed read rate in MB/sec. The actual read
2716              rate  is dependent on scheduling jitter and memory accesses from
2717              other running processes.
2718
2719       --memrate-wr-mbs N
              specify the maximum allowed write rate in MB/sec. The actual
2721              write rate is dependent on scheduling jitter and memory accesses
2722              from other running processes.
2723
2724       --memthrash N
2725              start N workers that thrash and exercise a 16MB buffer in  vari‐
2726              ous  ways  to  try and trip thermal overrun.  Each stressor will
2727              start 1 or more threads.  The number of  threads  is  chosen  so
2728              that  there will be at least 1 thread per CPU. Note that the op‐
2729              timal choice for N is a value that divides into  the  number  of
2730              CPUs.
2731
2732       --memthrash-ops N
2733              stop after N memthrash bogo operations.
2734
2735       --memthrash-method method
2736              specify  a  memthrash  stress method. Available memthrash stress
2737              methods are described as follows:
2738
2739              Method     Description
2740              all        iterate over all the below memthrash methods
2741              chunk1     memset 1 byte chunks of random data into random
2742                         locations
2743              chunk8     memset 8 byte chunks of random data into random
2744                         locations
2745              chunk64    memset 64 byte chunks of random data into  ran‐
2746                         dom locations
2747              chunk256   memset 256 byte chunks of random data into ran‐
2748                         dom locations
2749              chunkpage  memset page size chunks  of  random  data  into
2750                         random locations
2751              flip       flip (invert) all bits in random locations
2752              flush      flush cache line in random locations
              lock       lock randomly chosen locations (Intel x86 and
2754                         ARM CPUs only)
2755              matrix     treat memory as a 2 × 2 matrix and swap  random
2756                         elements
2757              memmove    copy  all the data in buffer to the next memory
2758                         location
2759              memset     memset the memory with random data
2760              memset64   memset the memory with a random 64 bit value in
2761                         64  byte  chunks  using  non-temporal stores if
2762                         possible or normal stores as a fallback
2763              mfence     stores with write serialization
2764              prefetch   prefetch data at random memory locations
2765              random     randomly run any of the memthrash  methods  ex‐
2766                         cept for 'random' and 'all'
2767              spinread   spin  loop  read  the same random location 2^19
2768                         times
2769              spinwrite  spin loop write the same random  location  2^19
2770                         times
2771              swap       step  through memory swapping bytes in steps of
2772                         65 and 129 byte strides
2773
2774       --mergesort N
2775              start N workers that sort 32 bit integers using the  BSD  merge‐
2776              sort.
2777
2778       --mergesort-ops N
2779              stop mergesort stress workers after N bogo mergesorts.
2780
2781       --mergesort-size N
2782              specify  number  of  32  bit integers to sort, default is 262144
2783              (256 × 1024).
2784
2785       --mincore N
2786              start N workers that walk through all of memory 1 page at a time
              checking if the page is mapped and resident in memory using
2788              mincore(2). It also maps and unmaps a page to check if the  page
2789              is mapped or not using mincore(2).
2790
2791       --mincore-ops N
              stop after N mincore bogo operations. One mincore bogo op is
              equivalent to 300 mincore(2) calls.

       --mincore-random
              instead of walking through pages sequentially, select pages at
              random. The chosen address is iterated over by shifting it
              right one place and checked by mincore until the address is
              less than or equal to the page size.
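
       A minimal sketch of the mincore(2) residency check being stressed
       (the 4 page anonymous mapping is an illustrative size); only the
       page that has been touched should be reported as resident:

          #include <stdio.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              size_t page = (size_t)sysconf(_SC_PAGESIZE);
              unsigned char vec[4];
              char *p = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              p[0] = 1;       /* fault in just the first page */
              if (mincore(p, 4 * page, vec) == 0)
                  for (int i = 0; i < 4; i++)
                      printf("page %d is%s resident\n", i,
                             (vec[i] & 1) ? "" : " not");
              (void)munmap(p, 4 * page);
              return 0;
          }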
2798
2799       --misaligned N
2800              start N workers that perform misaligned read and writes. By  de‐
2801              fault,  this will exercise 128 bit misaligned read and writes in
2802              8 x 16 bits, 4 x 32 bits, 2 x 64 bits and 1 x 128  bits  at  the
2803              start of a page boundary, at the end of a page boundary and over
2804              a cache boundary. Misaligned read and writes operate at  1  byte
2805              offset  from the natural alignment of the data type. On some ar‐
2806              chitectures this can cause SIGBUS, SIGILL or SIGSEGV, these  are
2807              handled  and the misaligned stressor method causing the error is
2808              disabled.
2809
2810       --misaligned-ops N
              stop after N misaligned bogo operations. A misaligned bogo op is
2812              equivalent to 65536 x 128 bit reads or writes.
2813
2814       --misaligned-method M
2815              Available misaligned stress methods are described as follows:
2816
2817              Method       Description
2818              all          iterate over all the following misaligned methods
2819              int16rd      8 x 16 bit integer reads
2820              int16wr      8 x 16 bit integer writes
2821              int16inc     8 x 16 bit integer increments
2822              int16atomic  8 x 16 bit atomic integer increments
2823              int32rd      4 x 32 bit integer reads
2824              int32wr      4 x 32 bit integer writes
              int32wtnt    4 x 32 bit non-temporal stores (x86 only)
2826              int32inc     4 x 32 bit integer increments
2827              int32atomic  4 x 32 bit atomic integer increments
2828              int64rd      2 x 64 bit integer reads
2829              int64wr      2 x 64 bit integer writes
              int64wtnt    2 x 64 bit non-temporal stores (x86 only)
2831              int64inc     2 x 64 bit integer increments
2832              int64atomic  2 x 64 bit atomic integer increments
2833              int128rd     1 x 128 bit integer reads
2834              int128wr     1 x 128 bit integer writes
2835              int128inc    1 x 128 bit integer increments
2836              int128atomic 1 x 128 bit atomic integer increments
2837
2838       Note  that  some of these options (128 bit integer and/or atomic opera‐
2839       tions) may not be available on some systems.
2840
2841       --mknod N
2842              start N workers that create and remove fifos,  empty  files  and
2843              named sockets using mknod and unlink.
2844
2845       --mknod-ops N
2846              stop directory thrash workers after N bogo mknod operations.
2847
2848       --mlock N
2849              start  N  workers that lock and unlock memory mapped pages using
2850              mlock(2), munlock(2), mlockall(2)  and  munlockall(2).  This  is
              achieved by mapping three contiguous pages and then locking the
              second page, hence ensuring non-contiguous pages are locked.
              This is then repeated until the maximum allowed mlocks
2854              or a maximum of 262144 mappings are made.  Next, all future map‐
2855              pings  are  mlocked and the worker attempts to map 262144 pages,
2856              then all pages are munlocked and the pages are unmapped.
2857
2858       --mlock-ops N
2859              stop after N mlock bogo operations.
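
       A minimal sketch of the non-contiguous locking pattern described
       above: map three pages and mlock(2) only the middle one (the mapping
       size is illustrative):

          #include <stdio.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              size_t page = (size_t)sysconf(_SC_PAGESIZE);
              char *p = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              if (mlock(p + page, page) < 0)   /* lock the middle page only */
                  perror("mlock");
              else
                  (void)munlock(p + page, page);
              (void)munmap(p, 3 * page);
              return 0;
          }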
2860
2861       --mlockmany N
2862              start N workers that fork off a default of 1024 child  processes
2863              in  total; each child will attempt to anonymously mmap and mlock
2864              the maximum allowed mlockable memory size.  The stress test  at‐
2865              tempts to avoid swapping by tracking low memory and swap alloca‐
2866              tions (but some swapping may occur).  Once  either  the  maximum
              number of child processes is reached or all mlockable in-core mem‐
2868              ory is locked then child processes are  killed  and  the  stress
2869              test is repeated.
2870
2871       --mlockmany-ops N
2872              stop after N mlockmany (mmap and mlock) operations.
2873
2874       --mlockmany-procs N
2875              set  the  number  of child processes to create per stressor. The
2876              default is to start a maximum of 1024 child processes  in  total
2877              across  all  the  stressors. This option allows the setting of N
2878              child processes per stressor.
2879
2880       --mmap N
2881              start N workers  continuously  calling  mmap(2)/munmap(2).   The
2882              initial   mapping   is   a   large   chunk  (size  specified  by
2883              --mmap-bytes) followed  by  pseudo-random  4K  unmappings,  then
2884              pseudo-random  4K mappings, and then linear 4K unmappings.  Note
2885              that this can cause systems to trip the  kernel  OOM  killer  on
              Linux systems if not enough physical memory and swap is
2887              available.  The MAP_POPULATE option is used  to  populate  pages
2888              into memory on systems that support this.  By default, anonymous
2889              mappings are used, however, the --mmap-file and --mmap-async op‐
2890              tions allow one to perform file based mappings if desired.
2891
2892       --mmap-ops N
2893              stop mmap stress workers after N bogo operations.
2894
2895       --mmap-async
2896              enable  file based memory mapping and use asynchronous msync'ing
2897              on each page, see --mmap-file.
2898
2899       --mmap-bytes N
2900              allocate N bytes per mmap stress worker, the default  is  256MB.
2901              One  can  specify  the size as % of total available memory or in
2902              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2903              m or g.
2904
2905       --mmap-file
2906              enable  file based memory mapping and by default use synchronous
2907              msync'ing on each page.
2908
2909       --mmap-mmap2
2910              use mmap2 for 4K page aligned offsets  if  mmap2  is  available,
2911              otherwise fall back to mmap.
2912
2913       --mmap-mprotect
2914              change  protection settings on each page of memory.  Each time a
2915              page or a group of pages are mapped or remapped then this option
2916              will  make the pages read-only, write-only, exec-only, and read-
2917              write.
2918
2919       --mmap-odirect
2920              enable file based memory mapping and use O_DIRECT direct I/O.
2921
2922       --mmap-osync
              enable file based memory mapping and use O_SYNC synchronous I/O
2924              integrity completion.
2925
2926       --mmapaddr N
2927              start  N  workers that memory map pages at a random memory loca‐
2928              tion that is not already mapped.  On 64 bit machines the  random
              address is a randomly chosen 32 bit or 64 bit address. If the map‐
2930              ping works a second page is memory mapped from the first  mapped
2931              address.  The  stressor  exercises mmap/munmap, mincore and seg‐
2932              fault handling.
2933
2934       --mmapaddr-ops N
2935              stop after N random address mmap bogo operations.
2936
2937       --mmapfork N
2938              start N workers that each fork off 32 child processes,  each  of
2939              which tries to allocate some of the free memory left in the sys‐
              tem (while trying to avoid any swapping). The child processes
2941              then hint that the allocation will be needed with madvise(2) and
2942              then memset it to zero and hint that it is no longer needed with
2943              madvise before exiting.  This produces significant amounts of VM
              activity and a lot of cache misses with minimal swapping.
2945
2946       --mmapfork-ops N
2947              stop after N mmapfork bogo operations.
2948
2949       --mmapfixed N
2950              start N workers that perform fixed address allocations from  the
2951              top  virtual address down to 128K.  The allocated sizes are from
              1 page to 8 pages and various random mmap flags are used
              (MAP_SHARED/MAP_PRIVATE, MAP_LOCKED, MAP_NORESERVE,
              MAP_POPULATE).
2954              If successfully map'd then the allocation is remap'd first to  a
2955              large  range of addresses based on a random start and finally an
2956              address that is several pages higher  in  memory.  Mappings  and
2957              remappings  are  madvised with random madvise options to further
2958              exercise the mappings.
2959
2960       --mmapfixed-ops N
2961              stop after N mmapfixed memory mapping bogo operations.
2962
2963       --mmaphuge N
2964              start N workers that attempt to mmap a set  of  huge  pages  and
2965              large huge page sized mappings. Successful mappings are madvised
2966              with MADV_NOHUGEPAGE and MADV_HUGEPAGE settings and then  1/64th
2967              of the normal small page size pages are touched. Finally, an at‐
2968              tempt to unmap a small page size page at the end of the  mapping
2969              is  made  (these may fail on huge pages) before the set of pages
2970              are unmapped. By default 8192 mappings are attempted  per  round
2971              of mappings or until swapping is detected.
2972
2973       --mmaphuge-ops N
              stop after N mmaphuge bogo operations.
2975
2976       --mmaphuge-mmaps N
2977              set the number of huge page mappings to attempt in each round of
2978              mappings. The default is 8192 mappings.
2979
2980       --mmapmany N
2981              start N workers that attempt to create the maximum allowed  per-
2982              process  memory mappings. This is achieved by mapping 3 contigu‐
2983              ous pages and then unmapping the middle page hence splitting the
2984              mapping  into  two.  This is then repeated until the maximum al‐
2985              lowed mappings or a maximum of 262144 mappings are made.
2986
2987       --mmapmany-ops N
              stop after N mmapmany bogo operations.
2989
2990       --mprotect N
2991              start N workers that exercise changing page protection  settings
2992              and access memory after each change. 8 processes per worker con‐
              tend with each other changing page protection settings on a
2994              shared memory region of just a few pages to cause TLB flushes. A
2995              read and write to the pages can cause  segmentation  faults  and
2996              these are handled by the stressor. All combinations of page pro‐
2997              tection settings are exercised including invalid combinations.
2998
2999       --mprotect-ops N
3000              stop after N mprotect calls.
3001
3002       --mq N start N sender and receiver processes that continually send  and
3003              receive messages using POSIX message queues. (Linux only).
3004
3005       --mq-ops N
3006              stop after N bogo POSIX message send operations completed.
3007
3008       --mq-size N
3009              specify size of POSIX message queue. The default size is 10 mes‐
              sages and on most Linux systems this is the maximum allowed size
3011              for  normal users. If the given size is greater than the allowed
3012              message queue size then a warning is issued and the maximum  al‐
3013              lowed size is used instead.
3014
3015       --mremap N
3016              start N workers continuously calling mmap(2), mremap(2) and mun‐
3017              map(2).  The initial anonymous mapping is a  large  chunk  (size
3018              specified by --mremap-bytes) and then iteratively halved in size
3019              by remapping all the way down to a page size and then back up to
3020              the original size.  This worker is only available for Linux.
3021
3022       --mremap-ops N
3023              stop mremap stress workers after N bogo operations.
3024
3025       --mremap-bytes N
3026              initially  allocate N bytes per remap stress worker, the default
3027              is 256MB. One can specify the size in units  of  Bytes,  KBytes,
3028              MBytes and GBytes using the suffix b, k, m or g.
3029
3030       --mremap-mlock
3031              attempt  to  mlock  remapped  pages into memory prohibiting them
3032              from being paged out.  This is a no-op if mlock(2) is not avail‐
3033              able.
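
       A minimal sketch of the shrink-and-grow mremap(2) pattern described
       above (Linux only; the 1MB starting size is illustrative):

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <sys/mman.h>

          int main(void)
          {
              size_t big = 1024 * 1024, small = big / 2;
              void *p = mmap(NULL, big, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              p = mremap(p, big, small, MREMAP_MAYMOVE);   /* halve it */
              if (p == MAP_FAILED) {
                  perror("mremap shrink");
                  return 1;
              }
              p = mremap(p, small, big, MREMAP_MAYMOVE);   /* grow it back */
              if (p == MAP_FAILED) {
                  perror("mremap grow");
                  return 1;
              }
              (void)munmap(p, big);
              return 0;
          }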
3034
3035       --msg N
3036              start  N sender and receiver processes that continually send and
3037              receive messages using System V message IPC.
3038
3039       --msg-ops N
3040              stop after N bogo message send operations completed.
3041
3042       --msg-types N
              select the number of message types (mtype) to use. By default,
3044              msgsnd  sends messages with a mtype of 1, this option allows one
3045              to send messages types in the range 1..N to exercise the message
3046              queue receive ordering. This will also impact throughput perfor‐
3047              mance.
3048
3049       --msync N
3050              start N stressors that msync data from a file backed memory map‐
3051              ping  from  memory back to the file and msync modified data from
3052              the file back to the mapped memory. This exercises the  msync(2)
3053              MS_SYNC and MS_INVALIDATE sync operations.
3054
3055       --msync-ops N
3056              stop after N msync bogo operations completed.
3057
3058       --msync-bytes N
3059              allocate  N  bytes  for  the  memory mapped file, the default is
3060              256MB. One can specify the size as % of total  available  memory
3061              or in units of Bytes, KBytes, MBytes and GBytes using the suffix
3062              b, k, m or g.
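
       A minimal sketch of the msync(2) MS_SYNC flush being stressed (the
       temporary file name and single page mapping are illustrative): dirty
       a file backed mapping and push the change back to the file:

          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              size_t page = (size_t)sysconf(_SC_PAGESIZE);
              int fd = open("/tmp/msync-example", O_CREAT | O_RDWR, 0600);

              if (fd < 0 || ftruncate(fd, (off_t)page) < 0) {
                  perror("open/ftruncate");
                  return 1;
              }
              char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              memset(p, 'x', page);               /* dirty the mapped page */
              if (msync(p, page, MS_SYNC) < 0)    /* flush back to the file */
                  perror("msync");
              (void)munmap(p, page);
              (void)close(fd);
              (void)unlink("/tmp/msync-example");
              return 0;
          }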
3063
3064       --msyncmany N
3065              start N stressors that memory map up to 32768 pages on the  same
3066              page of a temporary file, change the first 32 bits in a page and
3067              msync the data back to the file.  The other 32767 pages are  ex‐
3068              amined to see if the 32 bit check value is msync'd back to these
3069              pages.
3070
3071       --msyncmany-ops N
3072              stop after N msync calls in the  msyncmany  stressors  are  com‐
3073              pleted.
3074
3075       --munmap N
3076              start  N  stressors  that  exercise unmapping of shared non-exe‐
3077              cutable mapped regions of child processes (Linux only). The  un‐
              mappings unmap shared memory regions page by page with a prime
              sized stride that creates many temporary mapping holes. Once the
3080              unmappings  are  complete  the  child will exit and a new one is
3081              started.  Note that this may trigger segmentation faults in  the
3082              child  process,  these are handled where possible by forcing the
3083              child process to call _exit(2).
3084
3085       --munmap-ops N
3086              stop after N page unmappings.
3087
3088       --mutex N
3089              start N stressors that exercise pthread mutex  locking  and  un‐
3090              locking. If run with enough privilege then the FIFO scheduler is
3091              used and a random priority between 0 and 80% of the maximum FIFO
3092              priority level is selected for the locking operation.  The mini‐
3093              mum FIFO priority level is selected for the critical mutex  sec‐
3094              tion  and unlocking operation to exercise random inverted prior‐
3095              ity scheduling.
3096
3097       --mutex-ops N
3098              stop after N bogo mutex lock/unlock operations.
3099
3100       --mutex-affinity
3101              enable random CPU affinity changing between mutex lock  and  un‐
3102              lock.
3103
3104       --mutex-procs N
3105              By  default 2 threads are used for locking/unlocking on a single
              mutex. This option allows the number of concurrent threads to
              be set in the range 2 to 64.
3108
3109       --nanosleep N
3110              start  N  workers that each run 256 pthreads that call nanosleep
3111              with random delays from 1 to 2^18 nanoseconds. This should exer‐
3112              cise the high resolution timers and scheduler.
3113
3114       --nanosleep-ops N
3115              stop the nanosleep stressor after N bogo nanosleep operations.
3116
3117       --netdev N
3118              start  N  workers that exercise various netdevice ioctl commands
3119              across all the available network devices. The  ioctls  exercised
3120              by  this  stressor  are  as  follows: SIOCGIFCONF, SIOCGIFINDEX,
3121              SIOCGIFNAME, SIOCGIFFLAGS, SIOCGIFADDR, SIOCGIFNETMASK, SIOCGIF‐
3122              METRIC, SIOCGIFMTU, SIOCGIFHWADDR, SIOCGIFMAP and SIOCGIFTXQLEN.
3123              See netdevice(7) for more details of these ioctl commands.
3124
3125       --netdev-ops N
3126              stop after N netdev bogo operations completed.
3127
3128       --netlink-proc N
3129              start  N  workers  that  spawn  child  processes   and   monitor
3130              fork/exec/exit  process  events  via the proc netlink connector.
3131              Each event received is counted as a bogo op. This  stressor  can
3132              only be run on Linux and requires CAP_NET_ADMIN capability.
3133
3134       --netlink-proc-ops N
3135              stop the proc netlink connector stressors after N bogo ops.
3136
3137       --netlink-task N
3138              start  N  workers  that  collect task statistics via the netlink
3139              taskstats interface.  This stressor can only be run on Linux and
3140              requires CAP_NET_ADMIN capability.
3141
3142       --netlink-task-ops N
3143              stop the taskstats netlink connector stressors after N bogo ops.
3144
3145       --nice N
3146              start  N  cpu consuming workers that exercise the available nice
3147              levels. Each iteration forks  off  a  child  process  that  runs
              through all the nice levels running a busy loop for 0.1 sec‐
3149              onds per level and then exits.
3150
3151       --nice-ops N
              stop after N bogo nice loops.
3153
3154       --nop N
3155              start N workers that consume cpu cycles issuing  no-op  instruc‐
3156              tions.  This stressor is available if the assembler supports the
3157              "nop" instruction.
3158
3159       --nop-ops N
3160              stop nop workers after N no-op bogo operations. Each bogo-opera‐
3161              tion is equivalent to 256 loops of 256 no-op instructions.
3162
3163       --nop-instr INSTR
3164              use alternative nop instruction INSTR. For x86 CPUs INSTR can be
3165              one of nop, pause, nop2 (2 byte nop) through to nop11  (11  byte
3166              nop).  For ARM CPUs, INSTR can be one of nop or yield. For PPC64
3167              CPUs, INSTR can be one of nop, mdoio, mdoom or yield.  For  S390
3168              CPUs, INSTR can be one of nop or nopr. For other processors, IN‐
              STR is only nop. The random INSTR option selects a random mix
              of the available nop instructions. If the chosen INSTR generates
              a SIGILL signal, then the stressor falls back to the vanilla nop
3172              instruction.
3173
3174       --null N
3175              start N workers writing to /dev/null.
3176
3177       --null-ops N
3178              stop  null  stress  workers  after N /dev/null bogo write opera‐
3179              tions.
3180
3181       --numa N
3182              start N workers that migrate stressors and a 4MB  memory  mapped
3183              buffer  around  all  the  available  NUMA  nodes.  This uses mi‐
3184              grate_pages(2)  to  move  the   stressors   and   mbind(2)   and
3185              move_pages(2) to move the pages of the mapped buffer. After each
3186              move, the buffer is written to force activity over the bus which
              results in cache misses. This test will only run on hardware with
3188              NUMA enabled and more than 1 NUMA node.
3189
3190       --numa-ops N
3191              stop NUMA stress workers after N bogo NUMA operations.
3192
3193       --oom-pipe N
3194              start N workers that create as many pipes as allowed  and  exer‐
3195              cise  expanding  and  shrinking  the pipes from the largest pipe
3196              size down to a page size. Data is written  into  the  pipes  and
3197              read  out  again to fill the pipe buffers. With the --aggressive
3198              mode enabled the data is not read out when the pipes are shrunk,
3199              causing  the kernel to OOM processes aggressively.  Running many
3200              instances of this stressor will force kernel  to  OOM  processes
3201              due to the many large pipe buffer allocations.
3202
3203       --oom-pipe-ops N
3204              stop after N bogo pipe expand/shrink operations.
3205
3206       --opcode N
3207              start  N  workers  that  fork off children that execute randomly
3208              generated executable code.  This will generate  issues  such  as
3209              illegal  instructions,  bus  errors, segmentation faults, traps,
3210              floating point errors that are handled gracefully by the  stres‐
3211              sor.
3212
3213       --opcode-ops N
3214              stop after N attempts to execute illegal code.
3215
3216       --opcode-method [ inc | mixed | random | text ]
3217              select  the  opcode generation method.  By default, random bytes
3218              are used to generate the executable code. This option allows one
              to select one of the following four methods:
3220
3221              Method          Description
3222              inc             use incrementing 32 bit opcode patterns
                              from 0x00000000 to 0xffffffff inclusive.
3224              mixed           use a mix of incrementing 32 bit opcode
3225                              patterns  and random 32 bit opcode pat‐
3226                              terns that are also  inverted,  encoded
3227                              with gray encoding and bit reversed.
3228              random          generate  opcodes  using  random  bytes
3229                              from a mwc random generator.
3230              text            copies random chunks of code  from  the
3231                              stress-ng  text  segment  and  randomly
3232                              flips single bits in a random choice of
3233                              1/8th of the code.
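
              For example, a single opcode worker using the mixed opcode
              generation method (an illustrative choice of method) could
              be run with:

                     stress-ng --opcode 1 --opcode-method mixed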
3234
3235       -o N, --open N
3236              start  N  workers  that perform open(2) and then close(2) opera‐
3237              tions on /dev/zero. The maximum opens at one time is system  de‐
3238              fined,  so  the  test will run up to this maximum, or 65536 open
              file descriptors, whichever comes first.
3240
3241       --open-ops N
3242              stop the open stress workers after N bogo open operations.
3243
3244       --open-fd
3245              run a child process that scans  /proc/$PID/fd  and  attempts  to
3246              open the files that the stressor has opened. This exercises rac‐
3247              ing open/close operations on the proc interface.
3248
3249       --pageswap N
3250              start N workers that exercise page swap in and swap  out.  Pages
              are allocated and paged out using madvise MADV_PAGEOUT. Once the
              maximum per-process number of mmaps is reached or 65536 pages
              are allocated, the pages are read to page them back in and un‐
3254              mapped in reverse mapping order.
3255
3256       --pageswap-ops N
3257              stop after N page allocation bogo operations.
3258
3259       --pci N
3260              exercise PCI sysfs by running N  workers  that  read  data  (and
3261              mmap/unmap  PCI  config or PCI resource files). Linux only. Run‐
3262              ning as root will allow config and resource mmappings to be read
3263              and exercises PCI I/O mapping.
3264
3265       --pci-ops N
3266              stop  pci stress workers after N PCI subdirectory exercising op‐
3267              erations.
3268
3269       --personality N
3270              start N workers that attempt to set personality and get all  the
3271              available personality types (process execution domain types) via
3272              the personality(2) system call. (Linux only).
3273
3274       --personality-ops N
3275              stop personality stress workers after N bogo personality  opera‐
3276              tions.
3277
3278       --peterson N
              start N workers that exercise mutual exclusion between two pro‐
              cesses using shared memory with the Peterson algorithm.  Where
              possible this uses memory fencing, falling back to GCC
              __sync_synchronize if memory fences are not available. The
              stressors contain simple mutex and memory coherency sanity
              checks.
3284
3285       --peterson-ops N
3286              stop peterson workers after N mutex operations.
3287
3288       --physpage N
3289              start N workers that use /proc/self/pagemap and /proc/kpagecount
3290              to determine the physical page  and  page  count  of  a  virtual
3291              mapped  page  and a page that is shared among all the stressors.
              Linux only and requires the CAP_SYS_ADMIN capability.
3293
3294       --physpage-ops N
3295              stop physpage stress  workers  after  N  bogo  physical  address
3296              lookups.
3297
3298       --pidfd N
3299              start   N   workers   that   exercise  signal  sending  via  the
3300              pidfd_send_signal system call.  This stressor creates child pro‐
3301              cesses  and  checks  if they exist and can be stopped, restarted
3302              and killed using the pidfd_send_signal system call.
3303
3304       --pidfd-ops N
3305              stop pidfd stress workers after N child processes have been cre‐
3306              ated, tested and killed with pidfd_send_signal.
3307
3308       --ping-sock N
3309              start  N workers that send small randomized ICMP messages to the
3310              localhost across a range of ports (1024..65535) using  a  "ping"
3311              socket  with  an AF_INET domain, a SOCK_DGRAM socket type and an
3312              IPPROTO_ICMP protocol.
3313
3314       --ping-sock-ops N
3315              stop the ping-sock stress workers  after  N  ICMP  messages  are
3316              sent.
3317
3318       -p N, --pipe N
3319              start  N workers that perform large pipe writes and reads to ex‐
              ercise pipe I/O.  This exercises memory writes and reads as well
3321              as  context  switching.  Each worker has two processes, a reader
3322              and a writer.
3323
3324       --pipe-ops N
3325              stop pipe stress workers after N bogo pipe write operations.
3326
3327       --pipe-data-size N
3328              specifies the size in bytes of each write  to  the  pipe  (range
3329              from  4  bytes  to  4096  bytes). Setting a small data size will
3330              cause more writes to be buffered in the pipe, hence reducing the
3331              context switch rate between the pipe writer and pipe reader pro‐
3332              cesses. Default size is the page size.
3333
3334       --pipe-size N
3335              specifies the size of the pipe in bytes (for systems  that  sup‐
3336              port  the  F_SETPIPE_SZ  fcntl()  command). Setting a small pipe
3337              size will cause the pipe to  fill  and  block  more  frequently,
3338              hence increasing the context switch rate between the pipe writer
3339              and the pipe reader processes. Default size is 512 bytes.
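
              For example, to drive a high context switch rate one might
              combine a small pipe with small writes (the worker count
              and sizes below are illustrative, not recommendations):

                     stress-ng --pipe 2 --pipe-size 4096 --pipe-data-size 64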
3340
3341       --pipeherd N
3342              start N workers that pass a 64 bit  token  counter  to/from  100
3343              child  processes  over a shared pipe. This forces a high context
3344              switch rate and can trigger a "thundering herd"  of  wakeups  on
3345              processes that are blocked on pipe waits.
3346
3347       --pipeherd-ops N
              stop pipeherd stress workers after N bogo pipe write operations.
3349
3350       --pipeherd-yield
3351              force  a  scheduling  yield after each write, this increases the
3352              context switch rate.
3353
3354       --pkey N
3355              start N workers that change memory protection using a protection
3356              key  (pkey)  and  the pkey_mprotect call (Linux only). This will
3357              try to allocate a pkey and use this  for  the  page  protection,
3358              however,  if  this  fails  then the special pkey -1 will be used
3359              (and the kernel will use the normal mprotect mechanism instead).
3360              Various  page  protection  mixes of read/write/exec/none will be
3361              cycled through on randomly chosen pre-allocated pages.
3362
3363       --pkey-ops N
3364              stop after N pkey_mprotect page protection cycles.
3365
3366       -P N, --poll N
3367              start N workers  that  perform  zero  timeout  polling  via  the
3368              poll(2),  ppoll(2),  select(2),  pselect(2)  and sleep(3) calls.
3369              This wastes system and user time doing nothing.
3370
3371       --poll-ops N
3372              stop poll stress workers after N bogo poll operations.
3373
3374       --poll-fds N
3375              specify the number of file descriptors to poll/ppoll/select/pse‐
3376              lect  on.   The  maximum number for select/pselect is limited by
3377              FD_SETSIZE and the upper maximum is also limited by the  maximum
              number of open pipe file descriptors allowed.
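
              For example, four zero timeout polling workers, each polling
              16 file descriptors (values chosen purely for illustration),
              could be run with:

                     stress-ng --poll 4 --poll-fds 16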
3379
3380       --prctl N
3381              start  N workers that exercise the majority of the prctl(2) sys‐
3382              tem call options. Each batch of prctl calls is performed  inside
3383              a  new  child  process to ensure the limit of prctl is contained
3384              inside a new process every time.  Some prctl options are  archi‐
3385              tecture  specific,  however,  this  stressor will exercise these
3386              even if they are not implemented.
3387
3388       --prctl-ops N
              stop prctl workers after N batches of prctl calls.
3390
3391       --prefetch N
3392              start N workers that benchmark prefetch and  non-prefetch  reads
3393              of a L3 cache sized buffer. The buffer is read with loops of 8 ×
3394              64 bit reads per iteration.  In  the  prefetch  cases,  data  is
3395              prefetched  ahead  of the current read position by various sized
3396              offsets, from 64 bytes to  8K  to  find  the  best  memory  read
3397              throughput.  The stressor reports the non-prefetch read rate and
3398              the best prefetched read rate. It also reports the prefetch off‐
3399              set  and  an estimate of the amount of time between the prefetch
3400              issue and the actual memory  read  operation.  These  statistics
3401              will  vary from run-to-run due to system noise and CPU frequency
3402              scaling.
3403
3404       --prefetch-ops N
              stop prefetch stressors after N benchmark operations.
3406
3407       --prefetch-l3-size N
              specify the size of the L3 cache.
3409
3410       --procfs N
3411              start N workers that read files from /proc and recursively  read
3412              files from /proc/self (Linux only).
3413
3414       --procfs-ops N
3415              stop  procfs  reading  after N bogo read operations. Note, since
3416              the number of entries may vary between kernels,  this  bogo  ops
3417              metric is probably very misleading.
3418
3419       --pthread N
              start N workers that iteratively create and terminate multiple
3421              pthreads (the default is 1024 pthreads per worker). In each  it‐
3422              eration,  each  newly created pthread waits until the worker has
3423              created all the pthreads and then they all terminate together.
3424
3425       --pthread-ops N
3426              stop pthread workers after N bogo pthread create operations.
3427
3428       --pthread-max N
3429              create N pthreads per worker. If the product of  the  number  of
3430              pthreads by the number of workers is greater than the soft limit
3431              of allowed pthreads then the maximum is re-adjusted down to  the
3432              maximum allowed.
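
              For example, two pthread workers, each cycling through 256
              pthreads (figures that are merely illustrative), could be
              run with:

                     stress-ng --pthread 2 --pthread-max 256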
3433
3434       --ptrace N
3435              start  N  workers  that  fork  and trace system calls of a child
3436              process using ptrace(2).
3437
3438       --ptrace-ops N
3439              stop ptracer workers after N bogo system calls are traced.
3440
3441       --pty N
3442              start N workers that repeatedly attempt to open  pseudoterminals
3443              and  perform  various  pty  ioctls  upon the ptys before closing
3444              them.
3445
3446       --pty-ops N
3447              stop pty workers after N pty bogo operations.
3448
3449       --pty-max N
3450              try to open a maximum  of  N  pseudoterminals,  the  default  is
3451              65536. The allowed range of this setting is 8..65536.
3452
       -Q N, --qsort N
3454              start N workers that sort 32 bit integers using qsort.
3455
3456       --qsort-ops N
3457              stop qsort stress workers after N bogo qsorts.
3458
3459       --qsort-size N
3460              specify  number  of  32  bit integers to sort, default is 262144
3461              (256 × 1024).
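
              For example, a single qsort worker sorting a smaller array
              of 65536 integers (an arbitrary size chosen for
              illustration) could be run with:

                     stress-ng --qsort 1 --qsort-size 65536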
3462
3463       --quota N
3464              start N workers that exercise the Q_GETQUOTA,  Q_GETFMT,  Q_GET‐
3465              INFO,  Q_GETSTATS  and  Q_SYNC  quotactl(2)  commands on all the
3466              available mounted block based file systems. Requires CAP_SYS_AD‐
3467              MIN capability to run.
3468
3469       --quota-ops N
3470              stop quota stress workers after N bogo quotactl operations.
3471
3472       --radixsort N
3473              start N workers that sort random 8 byte strings using radixsort.
3474
3475       --radixsort-ops N
3476              stop radixsort stress workers after N bogo radixsorts.
3477
3478       --radixsort-size N
3479              specify  number  of  strings  to  sort, default is 262144 (256 ×
3480              1024).
3481
3482       --ramfs N
3483              start N workers mounting a memory based file system using  ramfs
3484              and  tmpfs  (Linux  only).  This alternates between mounting and
3485              umounting a ramfs or tmpfs file  system  using  the  traditional
3486              mount(2)  and  umount(2)  system call as well as the newer Linux
3487              5.2 fsopen(2), fsmount(2), fsconfig(2) and move_mount(2)  system
3488              calls if they are available. The default ram file system size is
3489              2MB.
3490
3491       --ramfs-ops N
3492              stop after N ramfs mount operations.
3493
3494       --ramfs-size N
3495              set the ramfs size (must be multiples of the page size).
3496
3497       --rawdev N
3498              start N workers that read the underlying raw drive device  using
3499              direct  IO  reads.  The device (with minor number 0) that stores
3500              the current working directory is the raw device to  be  read  by
3501              the stressor.  The read size is exactly the size of the underly‐
3502              ing device block size.  By default, this stressor will  exercise
              all of the rawdev methods (see the --rawdev-method option).
3504              This is a Linux only stressor and requires root privilege to  be
3505              able to read the raw device.
3506
3507       --rawdev-ops N
3508              stop  the rawdev stress workers after N raw device read bogo op‐
3509              erations.
3510
3511       --rawdev-method M
3512              Available rawdev stress methods are described as follows:
3513
3514              Method   Description
3515              all      iterate over all the rawdev stress  methods  as
3516                       listed below:
3517              sweep    repeatedly  read across the raw device from the
3518                       0th block to the end block in steps of the num‐
3519                       ber  of  blocks on the device / 128 and back to
3520                       the start again.
3521              wiggle   repeatedly read across the raw  device  in  128
                       evenly spaced steps, with each step reading 1024
                       blocks backwards from the step position.
3524              ends     repeatedly read the first and  last  128  start
3525                       and  end  blocks  of the raw device alternating
3526                       from start of the device to the end of the  de‐
3527                       vice.
              random   repeatedly read 256 random blocks.
3529              burst    repeatedly  read 256 sequential blocks starting
3530                       from a random block on the raw device.
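
              As a hedged illustration (this requires root privilege and a
              suitable raw block device backing the current directory), a
              single raw device reader using the sweep method could be run
              with:

                     stress-ng --rawdev 1 --rawdev-method sweep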
3531
3532       --randlist N
              start N workers that create a list of objects in randomized
              memory order and traverse the list setting and reading the ob‐
              jects. This is designed to exercise memory and cache thrashing.
3536              Normally  the objects are allocated on the heap, however for ob‐
3537              jects of page size or larger there is a 1 in 16  chance  of  ob‐
3538              jects  being  allocated using shared anonymous memory mapping to
3539              mix up the address spaces of the allocations to create more  TLB
3540              thrashing.
3541
3542       --randlist-ops N
              stop randlist workers after N list traversals.
3544
       --randlist-compact
3546              Allocate  all  the  list objects using one large heap allocation
3547              and divide this up for all the list objects.  This  removes  the
3548              overhead  of  the  heap keeping track of each list object, hence
3549              uses less memory.
3550
3551       --randlist-items N
3552              Allocate N items on the list. By default, 100,000 items are  al‐
3553              located.
3554
3555       --randlist-size N
3556              Allocate  each  item to be N bytes in size. By default, the size
3557              is 64 bytes of data payload plus the list handling pointer over‐
3558              head.
3559
3560       --rawsock N
3561              start  N  workers  that  send  and receive packet data using raw
3562              sockets on the localhost. Requires CAP_NET_RAW to run.
3563
3564       --rawsock-ops N
3565              stop rawsock workers after N packets are received.
3566
3567       --rawpkt N
              start N workers that send and receive ethernet packets using
3569              raw  packets  on the localhost via the loopback device. Requires
3570              CAP_NET_RAW to run.
3571
3572       --rawpkt-ops N
3573              stop rawpkt workers after N packets from the sender process  are
3574              received.
3575
3576       --rawpkt-port N
3577              start  at port P. For N rawpkt worker processes, ports P to (P *
3578              4) - 1 are used. The default starting port is port 14000.
3579
3580       --rawudp N
3581              start N workers that send and  receive  UDP  packets  using  raw
3582              sockets on the localhost. Requires CAP_NET_RAW to run.
3583
3584       --rawudp-if NAME
3585              use  network  interface NAME. If the interface NAME does not ex‐
3586              ist, is not up or does not support the domain then the  loopback
3587              (lo) interface is used as the default.
3588
3589       --rawudp-ops N
3590              stop rawudp workers after N packets are received.
3591
3592       --rawudp-port N
3593              start  at port P. For N rawudp worker processes, ports P to (P *
3594              4) - 1 are used. The default starting port is port 13000.
3595
3596       --rdrand N
3597              start N workers that read a random number from an on-chip random
              number generator.  This uses the rdrand instruction on Intel x86
3599              processors or the darn instruction on Power9 processors.
3600
3601       --rdrand-ops N
3602              stop rdrand stress workers after N  bogo  rdrand  operations  (1
3603              bogo op = 2048 random bits successfully read).
3604
3605       --rdrand-seed
3606              use rdseed instead of rdrand (x86 only).
3607
3608       --readahead N
3609              start  N  workers  that  randomly  seek  and  perform  4096 byte
3610              read/write I/O operations on a file with readahead. The  default
3611              file  size  is  64 MB.  Readaheads and reads are batched into 16
3612              readaheads and then 16 reads.
3613
3614       --readahead-bytes N
              set the size of the readahead file, the default is 1 GB. One can
3616              specify  the  size  as  % of free space on the file system or in
3617              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3618              m or g.
3619
3620       --readahead-ops N
3621              stop readahead stress workers after N bogo read operations.
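
              For example, a single readahead worker operating on a 256 MB
              file (an arbitrary size chosen for illustration) could be
              run with:

                     stress-ng --readahead 1 --readahead-bytes 256m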
3622
3623       --reboot N
3624              start  N  workers  that exercise the reboot(2) system call. When
3625              possible, it will create a process in a PID namespace  and  per‐
              form a reboot power off command that should shut down the
              process.  The stressor also exercises invalid reboot magic val‐
              ues and reboots attempted with insufficient privileges; these
              will not actually reboot the system.
3630
3631       --reboot-ops N
3632              stop the reboot stress workers after N bogo reboot cycles.
3633
3634       --remap N
3635              start N workers that map 512 pages and re-order these pages  us‐
3636              ing the deprecated system call remap_file_pages(2). Several page
3637              re-orderings are exercised: forward, reverse,  random  and  many
3638              pages to 1 page.
3639
3640       --remap-ops N
3641              stop after N remapping bogo operations.
3642
3643       -R N, --rename N
3644              start  N workers that each create a file and then repeatedly re‐
3645              name it.
3646
3647       --rename-ops N
3648              stop rename stress workers after N bogo rename operations.
3649
3650       --resched N
3651              start N workers that exercise process rescheduling. Each  stres‐
3652              sor  spawns a child process for each of the positive nice levels
3653              and iterates over the nice levels from 0 to the lowest  priority
3654              level (highest nice value). For each of the nice levels 1024 it‐
              erations over 3 non-real time scheduling policies SCHED_OTHER,
3656              SCHED_BATCH  and  SCHED_IDLE are set and a sched_yield occurs to
3657              force heavy rescheduling activity.  When the -v  verbose  option
3658              is used the distribution of the number of yields across the nice
3659              levels is printed for the first stressor out of the N stressors.
3660
3661       --resched-ops N
3662              stop after N rescheduling sched_yield calls.
3663
3664       --resources N
3665              start N workers that  consume  various  system  resources.  Each
3666              worker  will  spawn 1024 child processes that iterate 1024 times
3667              consuming shared memory, heap, stack, temporary files and  vari‐
3668              ous  file  descriptors (eventfds, memoryfds, userfaultfds, pipes
3669              and sockets).
3670
3671       --resources-ops N
3672              stop after N resource child forks.
3673
3674       --revio N
3675              start N workers continually writing in reverse position order to
3676              temporary  files. The default mode is to stress test reverse po‐
3677              sition ordered writes with randomly sized sparse  holes  between
3678              each  write.   With  the --aggressive option enabled without any
3679              --revio-opts options the revio stressor will  work  through  all
              the --revio-opts options one by one to cover a range of I/O op‐
3681              tions.
3682
3683       --revio-bytes N
3684              write N bytes for each revio process, the default is 1  GB.  One
3685              can specify the size as % of free space on the file system or in
3686              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3687              m or g.
3688
3689       --revio-opts list
3690              specify  various  stress test options as a comma separated list.
3691              Options are the same as --hdd-opts but without the iovec option.
3692
3693       --revio-ops N
3694              stop revio stress workers after N bogo operations.
3695
3696       --revio-write-size N
3697              specify size of each write in bytes. Size can be from 1 byte  to
3698              4MB.
3699
3700       --rlimit N
              start N workers that exceed CPU and file size resource limits,
3702              generating SIGXCPU and SIGXFSZ signals.
3703
3704       --rlimit-ops N
3705              stop after N bogo resource limited SIGXCPU and  SIGXFSZ  signals
3706              have been caught.
3707
3708       --rmap N
3709              start  N workers that exercise the VM reverse-mapping. This cre‐
3710              ates 16 processes per  worker  that  write/read  multiple  file-
3711              backed  memory  mappings.  There  are 64 lots of 4 page mappings
3712              made onto the file, with each mapping overlapping  the  previous
3713              by 3 pages and at least 1 page of non-mapped memory between each
3714              of the mappings. Data is synchronously msync'd to the file 1  in
3715              every 256 iterations in a random manner.
3716
3717       --rmap-ops N
3718              stop after N bogo rmap memory writes/reads.
3719
3720       --rseq N
3721              start  N  workers  that  exercise  restartable sequences via the
3722              rseq(2) system call.  This loops over a long  duration  critical
3723              section  that is likely to be interrupted.  A rseq abort handler
              keeps count of the number of interruptions and a SIGSEGV handler
              also tracks any failed rseq aborts that can occur if there is a
              mismatch in a rseq check signature. Linux only.
3727
3728       --rseq-ops N
3729              stop after N bogo rseq operations. Each bogo rseq  operation  is
3730              equivalent to 10000 iterations over a long duration rseq handled
3731              critical section.
3732
3733       --rtc N
3734              start N workers that exercise the real time clock  (RTC)  inter‐
3735              faces  via  /dev/rtc  and  /sys/class/rtc/rtc0.  No  destructive
3736              writes (modifications) are performed on the RTC. This is a Linux
3737              only stressor.
3738
3739       --rtc-ops N
3740              stop after N bogo RTC interface accesses.
3741
3742       --schedpolicy N
              start N workers that set the worker to various available
3744              scheduling policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE,
3745              SCHED_FIFO,  SCHED_RR  and  SCHED_DEADLINE.   For  the real time
3746              scheduling policies a random sched priority is selected  between
3747              the minimum and maximum scheduling priority settings.
3748
3749       --schedpolicy-ops N
3750              stop after N bogo scheduling policy changes.
3751
3752       --sctp N
3753              start  N workers that perform network sctp stress activity using
3754              the Stream Control Transmission Protocol (SCTP).  This  involves
3755              client/server  processes performing rapid connect, send/receives
3756              and disconnects on the local host.
3757
3758       --sctp-domain D
3759              specify the domain to use, the default is ipv4.  Currently  ipv4
3760              and ipv6 are supported.
3761
3762       --sctp-if NAME
3763              use  network  interface NAME. If the interface NAME does not ex‐
3764              ist, is not up or does not support the domain then the  loopback
3765              (lo) interface is used as the default.
3766
3767       --sctp-ops N
3768              stop sctp workers after N bogo operations.
3769
3770       --sctp-port P
3771              start at sctp port P. For N sctp worker processes, ports P to (P
3772              * 4) - 1 are used for ipv4, ipv6 domains and ports P to  P  -  1
3773              are used for the unix domain.
3774
3775       --seal N
3776              start  N  workers  that exercise the fcntl(2) SEAL commands on a
3777              small anonymous file created using memfd_create(2).  After  each
3778              SEAL  command  is  issued the stressor also sanity checks if the
3779              seal operation has sealed the file correctly.  (Linux only).
3780
3781       --seal-ops N
3782              stop after N bogo seal operations.
3783
3784       --seccomp N
3785              start N workers that exercise Secure Computing system call  fil‐
3786              tering.  Each  worker creates child processes that write a short
3787              message to /dev/null and then exits. 2% of the  child  processes
3788              have  a  seccomp filter that disallows the write system call and
3789              hence it is killed by seccomp with a  SIGSYS.   Note  that  this
3790              stressor  can  generate  many  audit  log messages each time the
3791              child is killed.  Requires CAP_SYS_ADMIN to run.
3792
3793       --seccomp-ops N
3794              stop seccomp stress workers after N seccomp filter tests.
3795
3796       --secretmem N
3797              start N workers  that  mmap  pages  using  file  mapping  off  a
3798              memfd_secret  file  descriptor.  Each stress loop iteration will
3799              expand the mappable region by 3 pages using ftruncate  and  mmap
3800              and  touches  the pages. The pages are then fragmented by unmap‐
              ping the middle page and then unmapping the first and last pages.
3802              This  tries  to force page fragmentation and also trigger out of
3803              memory (OOM) kills of the stressor when the secret memory is ex‐
3804              hausted.   Note this is a Linux 5.11+ only stressor and the ker‐
3805              nel needs to be booted with "secretmem=" option  to  allocate  a
3806              secret memory reservation.
3807
3808       --secretmem-ops N
3809              stop secretmem stress workers after N stress loop iterations.
3810
3811       --seek N
              start N workers that randomly seek and perform 512 byte
3813              read/write I/O operations on a file. The default file size is 16
3814              GB.
3815
3816       --seek-ops N
3817              stop seek stress workers after N bogo seek operations.
3818
3819       --seek-punch
3820              punch  randomly located 8K holes into the file to cause more ex‐
              tents to force a more demanding seek stressor (Linux only).
3822
3823       --seek-size N
3824              specify the size of the file in bytes. Small  file  sizes  allow
3825              the  I/O  to occur in the cache, causing greater CPU load. Large
              file sizes force more I/O operations to the drive, causing more
              wait time and more I/O on the drive. One can specify the size in
3828              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3829              m or g.
3830
3831       --sem N
3832              start N workers that perform POSIX semaphore wait and post oper‐
3833              ations. By default, a parent and  4  children  are  started  per
3834              worker  to  provide  some  contention  on  the  semaphore.  This
3835              stresses fast semaphore operations and  produces  rapid  context
3836              switching.
3837
3838       --sem-ops N
3839              stop semaphore stress workers after N bogo semaphore operations.
3840
3841       --sem-procs N
3842              start  N  child  workers per worker to provide contention on the
3843              semaphore, the default is 4 and a maximum of 64 are allowed.
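
              For example, two semaphore workers, each with 8 contending
              child processes (figures chosen only for illustration),
              could be run with:

                     stress-ng --sem 2 --sem-procs 8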
3844
3845       --sem-sysv N
3846              start N workers that perform System V semaphore  wait  and  post
3847              operations.  By default, a parent and 4 children are started per
3848              worker  to  provide  some  contention  on  the  semaphore.  This
3849              stresses  fast  semaphore  operations and produces rapid context
3850              switching.
3851
3852       --sem-sysv-ops N
3853              stop semaphore stress workers after N bogo  System  V  semaphore
3854              operations.
3855
3856       --sem-sysv-procs N
3857              start  N child processes per worker to provide contention on the
3858              System V semaphore, the default is 4 and a maximum of 64 are al‐
3859              lowed.
3860
3861       --sendfile N
3862              start N workers that send an empty file to /dev/null. This oper‐
3863              ation spends nearly all the time in  the  kernel.   The  default
3864              sendfile size is 4MB.  The sendfile options are for Linux only.
3865
3866       --sendfile-ops N
3867              stop sendfile workers after N sendfile bogo operations.
3868
3869       --sendfile-size S
3870              specify  the  size to be copied with each sendfile call. The de‐
3871              fault size is 4MB. One can specify the size in units  of  Bytes,
3872              KBytes, MBytes and GBytes using the suffix b, k, m or g.
3873
3874       --session N
3875              start  N workers that create child and grandchild processes that
3876              set and get their session ids. 25% of the  grandchild  processes
3877              are not waited for by the child to create orphaned sessions that
3878              need to be reaped by init.
3879
3880       --session-ops N
3881              stop session workers after N child  processes  are  spawned  and
3882              reaped.
3883
3884       --set N
3885              start  N  workers that call system calls that try to set data in
3886              the kernel, currently these are: setgid,  sethostname,  setpgid,
3887              setpgrp,  setuid,  setgroups, setreuid, setregid, setresuid, se‐
3888              tresgid and setrlimit.  Some of these system calls are  OS  spe‐
3889              cific.
3890
3891       --set-ops N
3892              stop set workers after N bogo set operations.
3893
3894       --shellsort N
3895              start N workers that sort 32 bit integers using shellsort.
3896
3897       --shellsort-ops N
3898              stop shellsort stress workers after N bogo shellsorts.
3899
3900       --shellsort-size N
3901              specify  number  of  32  bit integers to sort, default is 262144
3902              (256 × 1024).
3903
3904       --shm N
3905              start N workers that open and allocate shared memory objects us‐
3906              ing  the  POSIX  shared memory interfaces.  By default, the test
3907              will repeatedly create and destroy  32  shared  memory  objects,
3908              each of which is 8MB in size.
3909
3910       --shm-ops N
3911              stop  after N POSIX shared memory create and destroy bogo opera‐
3912              tions are complete.
3913
3914       --shm-bytes N
3915              specify the size of the POSIX shared memory objects to  be  cre‐
3916              ated. One can specify the size as % of total available memory or
3917              in units of Bytes, KBytes, MBytes and GBytes using the suffix b,
3918              k, m or g.
3919
3920       --shm-objs N
3921              specify the number of shared memory objects to be created.
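
              For example, one POSIX shared memory worker cycling through
              16 objects of 4 MB each (illustrative values, not tuned
              recommendations) could be run with:

                     stress-ng --shm 1 --shm-bytes 4m --shm-objs 16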
3922
3923       --shm-sysv N
3924              start  N  workers that allocate shared memory using the System V
3925              shared memory interface.  By default, the test  will  repeatedly
3926              create  and  destroy  8 shared memory segments, each of which is
3927              8MB in size.
3928
3929       --shm-sysv-ops N
3930              stop after N shared memory create and  destroy  bogo  operations
3931              are complete.
3932
3933       --shm-sysv-bytes N
3934              specify the size of the shared memory segment to be created. One
3935              can specify the size as % of total available memory or in  units
3936              of  Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
3937              g.
3938
3939       --shm-sysv-segs N
3940              specify the number of shared memory segments to be created.  The
3941              default is 8 segments.
3942
3943       --sigabrt N
3944              start  N workers that create children that are killed by SIGABRT
3945              signals or by calling abort(3).
3946
3947       --sigabrt-ops N
3948              stop the sigabrt workers after N SIGABRT  signals  are  success‐
3949              fully handled.
3950
3951       --sigchld N
3952              start  N  workers  that create children to generate SIGCHLD sig‐
3953              nals. This exercises children that exit (CLD_EXITED), get killed
3954              (CLD_KILLED),  get  stopped (CLD_STOPPED) or continued (CLD_CON‐
3955              TINUED).
3956
3957       --sigchld-ops N
3958              stop the sigchld workers after N SIGCHLD  signals  are  success‐
3959              fully handled.
3960
3961       --sigfd N
              start N workers that generate SIGRT signals that are handled by
              reads from a child process using a file descriptor set up with
3964              signalfd(2).   (Linux  only). This will generate a heavy context
3965              switch load when all CPUs are fully loaded.
3966
       --sigfd-ops N
3968              stop sigfd workers after N bogo SIGUSR1 signals are sent.
3969
3970       --sigfpe N
3971              start N workers that  rapidly  cause  division  by  zero  SIGFPE
3972              faults.
3973
3974       --sigfpe-ops N
3975              stop sigfpe stress workers after N bogo SIGFPE faults.
3976
3977       --sigio N
3978              start  N  workers that read data from a child process via a pipe
3979              and generate SIGIO signals. This exercises asynchronous I/O  via
3980              SIGIO.
3981
3982       --sigio-ops N
3983              stop sigio stress workers after handling N SIGIO signals.
3984
3985       --signal N
              start N workers that exercise the signal system call with three
              different signal handlers, SIG_IGN (ignore), a SIGCHLD handler and
3988              SIG_DFL (default action).  For the SIGCHLD handler, the stressor
3989              sends itself a SIGCHLD signal and checks if it has been handled.
3990              For other handlers, the stressor checks that the SIGCHLD handler
3991              has not been called.  This stress test calls the  signal  system
3992              call  directly when possible and will try to avoid the C library
3993              attempt to replace signal with the more modern sigaction  system
3994              call.
3995
3996       --signal-ops N
3997              stop signal stress workers after N rounds of signal handler set‐
3998              ting.
3999
4000       --signest N
4001              start N workers that exercise nested signal handling.  A  signal
4002              is  raised  and  inside the signal handler a different signal is
4003              raised, working through a list of signals to exercise. An alter‐
4004              native  signal  stack is used that is large enough to handle all
4005              the nested signal calls.  The -v option will log the approximate
4006              size of the stack required and the average stack size per nested
4007              call.
4008
4009       --signest-ops N
4010              stop after handling N nested signals.
4011
4012       --sigpending N
4013              start N workers that check if SIGUSR1 signals are pending.  This
4014              stressor masks SIGUSR1, generates a SIGUSR1 signal and uses sig‐
4015              pending(2) to see if the signal is pending. Then it unmasks  the
4016              signal and checks if the signal is no longer pending.
4017
4018       --sigpending-ops N
4019              stop  sigpending  stress  workers  after N bogo sigpending pend‐
4020              ing/unpending checks.
4021
4022       --sigpipe N
              start N workers that repeatedly spawn off a child process that
              exits before the parent can complete a pipe write, causing a
              SIGPIPE signal.  The child process is spawned using clone(2) if
              it is available, otherwise the slower fork(2) is used instead.
4027
4028       --sigpipe-ops N
4029              stop N workers after N SIGPIPE signals have been caught and han‐
4030              dled.
4031
4032       --sigq N
4033              start  N  workers  that  rapidly  send  SIGUSR1  signals   using
4034              sigqueue(3) to child processes that wait for the signal via sig‐
4035              waitinfo(2).
4036
4037       --sigq-ops N
4038              stop sigq stress workers after N bogo signal send operations.
4039
4040       --sigrt N
4041              start N workers that  each  create  child  processes  to  handle
              SIGRTMIN to SIGRTMAX real time signals. The parent sends each
              child process an RT signal via sigqueue(2) and the child process
4044              waits  for this via sigwaitinfo(2).  When the child receives the
4045              signal it then sends a RT signal to one of the other child  pro‐
4046              cesses also via sigqueue(2).
4047
4048       --sigrt-ops N
4049              stop  sigrt stress workers after N bogo sigqueue signal send op‐
4050              erations.
4051
4052       --sigsegv N
4053              start N workers  that  rapidly  create  and  catch  segmentation
4054              faults.
4055
4056       --sigsegv-ops N
4057              stop sigsegv stress workers after N bogo segmentation faults.
4058
4059       --sigsuspend N
4060              start  N workers that each spawn off 4 child processes that wait
4061              for a SIGUSR1 signal from the parent  using  sigsuspend(2).  The
4062              parent  sends SIGUSR1 signals to each child in rapid succession.
4063              Each sigsuspend wakeup is counted as one bogo operation.
4064
4065       --sigsuspend-ops N
4066              stop sigsuspend stress workers after N bogo sigsuspend wakeups.
4067
4068       --sigtrap N
4069              start N workers that exercise the SIGTRAP  signal.  For  systems
4070              that  support  SIGTRAP, the signal is generated using raise(SIG‐
              TRAP). On x86 Linux systems the SIGTRAP is also generated by
4072              an int 3 instruction.
4073
4074       --sigtrap-ops N
4075              stop sigtrap stress workers after N SIGTRAPs have been handled.
4076
4077       --skiplist N
4078              start  N workers that store and then search for integers using a
4079              skiplist.  By default, 65536 integers are  added  and  searched.
4080              This  is a useful method to exercise random access of memory and
4081              processor cache.
4082
4083       --skiplist-ops N
4084              stop the skiplist worker after N skiplist store and  search  cy‐
4085              cles are completed.
4086
4087       --skiplist-size N
4088              specify the size (number of integers) to store and search in the
4089              skiplist. Size can be from 1K to 4M.
4090
4091       --sleep N
4092              start N workers that spawn off multiple threads that  each  per‐
              form multiple sleeps in the range 1us to 0.1s. This creates multi‐
4094              ple context switches and timer interrupts.
4095
4096       --sleep-ops N
4097              stop after N sleep bogo operations.
4098
4099       --sleep-max P
4100              start P threads per worker. The default is 1024, the maximum al‐
4101              lowed is 30000.
4102
4103       --smi N
4104              start  N  workers that attempt to generate system management in‐
4105              terrupts (SMIs) into the x86  ring  -2  system  management  mode
4106              (SMM)  by  exercising  the  advanced power management (APM) port
4107              0xb2. This requires the --pathological option and root privilege
4108              and  is  only  implemented on x86 Linux platforms. This probably
4109              does not work in a virtualized environment.  The  stressor  will
4110              attempt  to  determine  the  time stolen by SMIs with some naïve
4111              benchmarking.
4112
4113       --smi-ops N
4114              stop after N attempts to trigger the SMI.
4115
4116       -S N, --sock N
4117              start N workers that perform  various  socket  stress  activity.
4118              This involves a pair of client/server processes performing rapid
4119              connect, send and receives and disconnects on the local host.
4120
4121       --sock-domain D
4122              specify the domain to use, the default is ipv4. Currently  ipv4,
4123              ipv6 and unix are supported.
4124
4125       --sock-if NAME
4126              use  network  interface NAME. If the interface NAME does not ex‐
4127              ist, is not up or does not support the domain then the  loopback
4128              (lo) interface is used as the default.
4129
4130       --sock-nodelay
4131              This  disables the TCP Nagle algorithm, so data segments are al‐
4132              ways sent as soon as  possible.   This  stops  data  from  being
4133              buffered  before  being  transmitted,  hence resulting in poorer
4134              network utilisation and more context switches between the sender
4135              and receiver.
4136
4137       --sock-port P
4138              start  at  socket port P. For N socket worker processes, ports P
4139              to P - 1 are used.
4140
4141       --sock-protocol P
4142              Use the specified protocol P, default is tcp.  Options  are  tcp
4143              and mptcp (if supported by the operating system).
4144
4145       --sock-ops N
4146              stop socket stress workers after N bogo operations.
4147
4148       --sock-opts [ random | send | sendmsg | sendmmsg ]
4149              by  default, messages are sent using send(2). This option allows
4150              one to specify the sending  method  using  send(2),  sendmsg(2),
              sendmmsg(2) or a random selection of one of these 3 on each iter‐
4152              ation.  Note that sendmmsg is only available for  Linux  systems
4153              that support this system call.
4154
4155       --sock-type [ stream | seqpacket ]
4156              specify the socket type to use. The default type is stream. seq‐
4157              packet currently only works for the unix socket domain.
4158
4159       --sock-zerocopy
              enable zerocopy for send and recv calls if MSG_ZEROCOPY is
              supported.
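
              For example, two socket workers using the ipv6 domain and
              sendmsg(2) as the send method (a purely illustrative
              combination of options) could be run with:

                     stress-ng --sock 2 --sock-domain ipv6 --sock-opts sendmsg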
4162
4163       --sockabuse N
              start N workers that abuse a socket file descriptor with various
              file based system calls that don't normally act on sockets. The
              kernel should handle these illegal and unexpected calls grace‐
              fully.
4167
4168       --sockabuse-ops N
4169              stop after N iterations of the socket abusing stressor loop.
4170
4171       --sockdiag N
4172              start N workers that exercise the Linux sock_diag netlink socket
4173              diagnostics (Linux only).  This currently  requests  diagnostics
4174              using    UDIAG_SHOW_NAME,    UDIAG_SHOW_VFS,    UDIAG_SHOW_PEER,
4175              UDIAG_SHOW_ICONS, UDIAG_SHOW_RQLEN  and  UDIAG_SHOW_MEMINFO  for
4176              the AF_UNIX family of socket connections.
4177
4178       --sockdiag-ops N
4179              stop after receiving N sock_diag diagnostic messages.
4180
4181       --sockfd N
4182              start  N  workers  that pass file descriptors over a UNIX domain
4183              socket using the CMSG(3)  ancillary  data  mechanism.  For  each
              worker, a pair of client/server processes is created; the server
              opens as many file descriptors on /dev/null as possible and
              passes these over the socket to a client that reads them from
4187              the CMSG data and immediately closes the files.
4188
4189       --sockfd-ops N
4190              stop sockfd stress workers after N bogo operations.
4191
4192       --sockfd-port P
4193              start at socket port P. For N socket worker processes,  ports  P
4194              to P - 1 are used.
4195
4196       --sockmany N
4197              start  N workers that use a client process to attempt to open as
4198              many as 100000 TCP/IP socket connections to  a  server  on  port
4199              10000.
4200
4201       --sockmany-ops N
4202              stop after N connections.
4203
4204       --sockmany-if NAME
4205              use  network  interface NAME. If the interface NAME does not ex‐
4206              ist, is not up or does not support the domain then the  loopback
4207              (lo) interface is used as the default.
4208
4209       --sockpair N
4210              start  N  workers that perform socket pair I/O read/writes. This
4211              involves a pair of client/server processes  performing  randomly
4212              sized socket I/O operations.
4213
4214       --sockpair-ops N
4215              stop socket pair stress workers after N bogo operations.
4216
4217       --softlockup N
              start N workers that flip between the "real-time" SCHED_FIFO
4219              and SCHED_RR scheduling policies  at  the  highest  priority  to
4220              force  softlockups. This can only be run with CAP_SYS_NICE capa‐
4221              bility and for best results the number of stressors should be at
4222              least  the  number of online CPUs. Once running, this is practi‐
4223              cally impossible to stop and it will force softlockup issues and
4224              may trigger watchdog timeout reboots.
4225
4226       --softlockup-ops N
4227              stop  softlockup  stress  workers  after N bogo scheduler policy
4228              changes.
4229
4230       --sparsematrix N
              start N workers that exercise several different sparse matrix
              implementations based on hashing, Judy array (for 64 bit sys‐
              tems), 2-d circular linked-lists, memory mapped 2-d matrix
              (non-sparse), quick hashing (on preallocated nodes) and red-
              black tree.  The sparse matrix is populated with values, random
              (potentially non-existing) values are read, known existing
              values are read and known existing values are marked as zero.
              By default a 500 x 500 sparse matrix is used and 5000 items are
              put into the sparse matrix making it 2% utilized.
4240
4241       --sparsematrix-ops N
4242              stop after N sparsematrix test iterations.
4243
4244       --sparsematrix-items N
4245              populate the sparse matrix with N items. If N  is  greater  than
              the number of elements in the sparse matrix then N will be
              capped to create a 100% full sparse matrix.
4248
4249       --sparsematrix-size N
              use an N × N sized sparse matrix.
4251
4252       --sparsematrix-method [ all | hash | judy | list | mmap | qhash | rb ]
4253              specify the type of sparse matrix  implementation  to  use.  The
4254              'all' method uses all the methods and is the default.
4255
4256              Method        Description
4257              all           exercise   with  all  the  sparsematrix
4258                            stressor methods (see below):
4259              hash          use a hash table and allocate nodes  on
4260                            the heap for each unique value at a (x,
4261                            y) matrix position.
4262              judy          use a Judy array with a  unique  1-to-1
4263                            mapping  of (x, y) matrix position into
4264                            the array.
4265              list          use a circular linked-list for sparse y
4266                            positions  each  with  circular linked-
4267                            lists for sparse x  positions  for  the
4268                            (x, y) matrix coordinates.
              mmap          use a non-sparse mmap of the entire 2-d
4270                            matrix space. Only (x, y) matrix  posi‐
4271                            tions  that  are  referenced  will  get
4272                            physically  mapped.  Note  that   large
4273                            sparse matrices cannot be mmap'd due to
                            virtual address space limitations,
4275                            and too many referenced pages can trig‐
4276                            ger the out of memory killer on Linux.
4277              qhash         use a  hash  table  with  pre-allocated
4278                            nodes  for each unique value. This is a
4279                            quick hash table implementation,  nodes
4280                            are not allocated each time with calloc
4281                            and are allocated from a  pre-allocated
4282                            pool leading to quicker hash table per‐
                            formance than the hash method.
              rb            use a red-black tree to store the unique
                            values at (x, y) matrix positions.
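
              For example, a single sparsematrix worker exercising only
              the qhash method (an illustrative choice) could be run with:

                     stress-ng --sparsematrix 1 --sparsematrix-method qhash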
4284
4285       --spawn N
              start N workers that continually spawn children using
              posix_spawn(3) that exec stress-ng and then exit almost
              immediately. Currently
4288              Linux only.
4289
4290       --spawn-ops N
4291              stop spawn stress workers after N bogo spawns.
4292
4293       --splice N
4294              move data from /dev/zero to /dev/null through a pipe without any
4295              copying  between kernel address space and user address space us‐
4296              ing splice(2). This is only available for Linux.
4297
4298       --splice-ops N
4299              stop after N bogo splice operations.
4300
4301       --splice-bytes N
4302              transfer N bytes per splice call, the default is  64K.  One  can
4303              specify  the  size as % of total available memory or in units of
4304              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4305
4306       --stack N
4307              start N workers that rapidly cause and catch stack overflows  by
4308              use  of  large  recursive  stack allocations.  Much like the brk
4309              stressor, this can eat up pages rapidly and may trigger the ker‐
4310              nel  OOM  killer on the process, however, the killed stressor is
4311              respawned again by a monitoring parent process.
4312
4313       --stack-fill
4314              the default action is to touch the lowest page on each stack al‐
4315              location.  This  option touches all the pages by filling the new
4316              stack allocation with zeros which forces physical  pages  to  be
4317              allocated and hence is more aggressive.
4318
4319       --stack-mlock
4320              attempt  to  mlock stack pages into memory prohibiting them from
4321              being paged out.  This is a no-op if mlock(2) is not available.
4322
4323       --stack-ops N
4324              stop stack stress workers after N bogo stack overflows.
4325
4326       --stackmmap N
4327              start N workers that use a 2MB stack that is memory mapped  onto
4328              a  temporary file. A recursive function works down the stack and
4329              flushes dirty stack pages back to the memory mapped  file  using
4330              msync(2) until the end of the stack is reached (stack overflow).
4331              This exercises dirty page and stack exception handling.
4332
4333       --stackmmap-ops N
4334              stop workers after N stack overflows have occurred.
4335
4336       --str N
4337              start N workers that exercise various libc string  functions  on
4338              random strings.
4339
4340       --str-method strfunc
4341              select  a  specific  libc  string  function to stress. Available
4342              string functions to stress are: all, index, rindex,  strcasecmp,
4343              strcat,  strchr,  strcoll,  strcmp, strcpy, strlen, strncasecmp,
4344              strncat, strncmp, strrchr and strxfrm.  See string(3)  for  more
4345              information  on these string functions.  The 'all' method is the
4346              default and will exercise all the string methods.
4347
4348       --str-ops N
4349              stop after N bogo string operations.
4350
4351       --stream N
4352              start N workers exercising a memory bandwidth  stressor  loosely
4353              based  on  the STREAM "Sustainable Memory Bandwidth in High Per‐
4354              formance Computers" benchmarking tool by John D. McCalpin, Ph.D.
4355              This  stressor  allocates  buffers that are at least 4 times the
4356              size of the CPU L2 cache and continually performs rounds of fol‐
4357              lowing computations on large arrays of double precision floating
4358              point numbers:
4359
4360              Operation          Description
4361              copy               c[i] = a[i]
4362              scale              b[i] = scalar * c[i]
4363              add                c[i] = a[i] + b[i]
4364              triad              a[i] = b[i] + (c[i] * scalar)
4365
              Since this is loosely based on a variant of the STREAM benchmark
              code, DO NOT submit results based on this stressor; it is
              intended in stress-ng only to stress memory and compute and is
              NOT intended for accurate tuned or non-tuned STREAM benchmarking
              whatsoever. Use the official STREAM benchmarking tool if you
              desire accurate and standardised STREAM benchmarks.
4372
4373       --stream-ops N
4374              stop  after  N stream bogo operations, where a bogo operation is
4375              one round of copy, scale, add and triad operations.
4376
4377       --stream-index N
4378              specify number of stream indices used to index into the data ar‐
4379              rays  a, b and c.  This adds indirection into the data lookup by
4380              using randomly shuffled indexing into  the  three  data  arrays.
4381              Level  0  (no indexing) is the default, and 3 is where all 3 ar‐
4382              rays are indexed via 3 different randomly shuffled indexes.  The
4383              higher  the index setting the more impact this has on L1, L2 and
4384              L3 caching and hence forces higher memory read/write latencies.
4385
4386       --stream-l3-size N
4387              Specify the CPU Level 3 cache size in bytes.   One  can  specify
4388              the  size in units of Bytes, KBytes, MBytes and GBytes using the
4389              suffix b, k, m or g.  If the L3 cache size is not provided, then
4390              stress-ng  will attempt to determine the cache size, and failing
4391              this, will default the size to 4MB.
4392
4393       --stream-madvise [ hugepage | nohugepage | normal ]
4394              Specify the madvise options used on  the  memory  mapped  buffer
4395              used  in  the  stream stressor. Non-linux systems will only have
4396              the 'normal' madvise advice. The default is 'normal'.
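
              For example, a possible invocation that adds indexed
              array accesses (the worker count and duration are
              arbitrary illustrations):

                  stress-ng --stream 2 --stream-index 3 -t 60s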
4397
4398       --swap N
              start N workers that add and remove small randomly sized swap
4400              partitions  (Linux only).  Note that if too many swap partitions
4401              are added then the stressors may exit  with  exit  code  3  (not
4402              enough resources).  Requires CAP_SYS_ADMIN to run.
4403
4404       --swap-ops N
4405              stop the swap workers after N swapon/swapoff iterations.
4406
4407       -s N, --switch N
4408              start  N  workers that force context switching between two mutu‐
4409              ally blocking/unblocking  tied  processes.  By  default  message
4410              passing  over  a  pipe is used, but different methods are avail‐
4411              able.
4412
4413       --switch-ops N
4414              stop context switching workers after N bogo operations.
4415
4416       --switch-freq F
4417              run the context switching at the frequency of F context switches
4418              per  second.  Note  that  the  specified  switch rate may not be
4419              achieved because of CPU speed and memory bandwidth limitations.
4420
4421       --switch-method [ mq | pipe | sem-sysv ]
4422              select the preferred context  switch  block/run  synchronization
4423              method, these are as follows:
4424
4425              Method    Description
              mq        use a POSIX message queue with a size of one
                        item. Messages are passed between a sender and
                        receiver process.
4429              pipe      single  character  messages  are  passed down a
4430                        single character sized pipe  between  a  sender
4431                        and receiver process.
4432              sem-sysv  a  SYSV semaphore is used to block/run two pro‐
4433                        cesses.
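
              For example, to select the SYSV semaphore method (an
              illustrative invocation, values are arbitrary):

                  stress-ng --switch 2 --switch-method sem-sysv -t 30s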
4434
4435       --symlink N
4436              start N workers creating and removing symbolic links.
4437
4438       --symlink-ops N
4439              stop symlink stress workers after N bogo operations.
4440
4441       --sync-file N
4442              start N workers that perform a range of data syncs across a file
4443              using  sync_file_range(2).   Three mixes of syncs are performed,
4444              from start to the end of the file,  from end of the file to  the
              start, and a random mix. A random selection of valid sync types
              is used, covering the SYNC_FILE_RANGE_WAIT_BEFORE,
4447              SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
4448
4449       --sync-file-ops N
4450              stop sync-file workers after N bogo sync operations.
4451
4452       --sync-file-bytes N
4453              specify  the  size of the file to be sync'd. One can specify the
4454              size as % of free space on the file system in  units  of  Bytes,
4455              KBytes, MBytes and GBytes using the suffix b, k, m or g.
4456
4457       --syncload N
4458              start N workers that produce sporadic short lived loads synchro‐
4459              nized across N stressor processes. By default repeated cycles of
4460              125ms  busy  load  followed by 62.5ms sleep occur across all the
4461              workers in step to create bursts of load  to  exercise  C  state
4462              transitions  and CPU frequency scaling. The busy load and sleeps
              have +/-10% jitter added to exercise varying scheduling patterns.
4464
4465       --syncload-ops N
4466              stop syncload workers after N load/sleep cycles.
4467
4468       --syncload-msbusy M
4469              specify the busy load duration in milliseconds.
4470
4471       --syncload-mssleep M
4472              specify the sleep duration in milliseconds.
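
              For example, a possible invocation with a longer busy
              phase (the durations shown are arbitrary illustrations):

                  stress-ng --syncload 2 --syncload-msbusy 250 -t 60s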
4473
4474       --sysbadaddr N
4475              start N workers that pass bad addresses to system calls to exer‐
4476              cise bad address and fault handling. The addresses used are null
4477              pointers, read only pages, write only pages, unmapped addresses,
4478              text  only  pages,  unaligned  addresses  and  top of memory ad‐
4479              dresses.
4480
4481       --sysbadaddr-ops N
4482              stop the sysbadaddr stressors after N bogo system calls.
4483
4484       --sysinfo N
4485              start N workers that continually read system  and  process  spe‐
4486              cific information.  This reads the process user and system times
4487              using the times(2) system call.   For  Linux  systems,  it  also
4488              reads overall system statistics using the sysinfo(2) system call
4489              and also the file system statistics for all mounted file systems
4490              using statfs(2).
4491
4492       --sysinfo-ops N
4493              stop the sysinfo workers after N bogo operations.
4494
4495       --sysinval N
4496              start  N workers that exercise system calls in random order with
4497              permutations of invalid arguments to force kernel error handling
4498              checks. The stress test autodetects system calls that cause pro‐
4499              cesses to crash or exit prematurely and will blocklist these af‐
              ter several repeated breakages. System call arguments that cause
              system calls to work successfully are also detected and
              blocklisted.  Linux only.
4503
4504       --sysinval-ops N
4505              stop sysinval workers after N system call attempts.
4506
4507       --sysfs N
4508              start  N  workers  that  recursively read files from /sys (Linux
4509              only).  This may cause specific kernel drivers to emit  messages
4510              into the kernel log.
4511
4512       --sys-ops N
4513              stop sysfs reading after N bogo read operations. Note, since the
4514              number of entries may vary between kernels, this bogo ops metric
4515              is probably very misleading.
4516
4517       --tee N
4518              move  data  from  a  writer  process to a reader process through
4519              pipes and to /dev/null without any copying  between  kernel  ad‐
4520              dress  space  and  user address space using tee(2). This is only
4521              available for Linux.
4522
4523       --tee-ops N
4524              stop after N bogo tee operations.
4525
4526       -T N, --timer N
4527              start N workers creating timer events at a default rate of 1 MHz
              (Linux only); this can create many thousands of timer clock
4529              interrupts. Each timer event is caught by a signal  handler  and
4530              counted as a bogo timer op.
4531
4532       --timer-ops N
4533              stop  timer  stress  workers  after  N  bogo timer events (Linux
4534              only).
4535
4536       --timer-freq F
4537              run timers at F Hz; range from 1 to 1000000000 Hz (Linux  only).
4538              By  selecting  an  appropriate  frequency stress-ng can generate
4539              hundreds of thousands of interrupts per  second.   Note:  it  is
4540              also  worth  using  --timer-slack 0 for high frequencies to stop
4541              the kernel from coalescing timer events.
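
              For example, a possible high frequency invocation with
              zero timer slack (the values are arbitrary
              illustrations):

                  stress-ng --timer 2 --timer-freq 100000 --timer-slack 0 -t 1m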
4542
4543       --timer-rand
4544              select a timer frequency based around the  timer  frequency  +/-
4545              12.5% random jitter. This tries to force more variability in the
4546              timer interval to make the scheduling less predictable.
4547
4548       --timerfd N
4549              start N workers creating timerfd events at a default rate  of  1
              MHz (Linux only); this can create many thousands of timer
4551              clock events. Timer events are waited for on the timer file  de‐
4552              scriptor  using  select(2)  and  then read and counted as a bogo
4553              timerfd op.
4554
4555       --timerfd-ops N
4556              stop timerfd stress workers after N bogo timerfd  events  (Linux
4557              only).
4558
       --timerfd-fds N
4560              try to use a maximum of N timerfd file descriptors per stressor.
4561
4562       --timerfd-freq F
4563              run  timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
4564              By selecting an appropriate  frequency  stress-ng  can  generate
4565              hundreds of thousands of interrupts per second.
4566
4567       --timerfd-rand
4568              select  a timerfd frequency based around the timer frequency +/-
4569              12.5% random jitter. This tries to force more variability in the
4570              timer interval to make the scheduling less predictable.
4571
4572       --tlb-shootdown N
4573              start  N  workers  that force Translation Lookaside Buffer (TLB)
4574              shootdowns.  This is achieved by creating up to  16  child  pro‐
              cesses that all share a region of memory and these processes are
              spread across the available CPUs. The processes adjust the
              page mapping settings, causing TLBs to be forcibly flushed on
              the other processors and hence causing TLB shootdowns.
4579
4580       --tlb-shootdown-ops N
4581              stop after N bogo TLB shootdown operations are completed.
4582
4583       --tmpfs N
4584              start N workers that create a temporary  file  on  an  available
4585              tmpfs file system and perform various file based mmap operations
4586              upon it.
4587
4588       --tmpfs-ops N
4589              stop tmpfs stressors after N bogo mmap operations.
4590
4591       --tmpfs-mmap-async
4592              enable file based memory mapping and use asynchronous  msync'ing
4593              on each page, see --tmpfs-mmap-file.
4594
4595       --tmpfs-mmap-file
4596              enable  tmpfs  file based memory mapping and by default use syn‐
4597              chronous msync'ing on each page.
4598
4599       --touch N
              start N workers that touch files using open(2) or creat(2) and
              then close and unlink them. The filename contains the bogo-op
              number and is incremented on each touch operation, hence this
              fills the dentry cache. Note that the user time and system time
              may be very low as most of the run time is spent waiting for
              file I/O, and this produces very large bogo-op rates for the
              very low CPU time used.
4606
4607       --touch-opts all, direct, dsync, excl, noatime, sync
4608              specify various file open options as a comma separated list. Op‐
4609              tions are as follows:
4610
4611              Option    Description
4612              all       use all the open options, namely direct, dsync,
4613                        excl, noatime and sync
4614              direct    try to minimize cache effects of the I/O to and
4615                        from this file, using the O_DIRECT open flag.
4616              dsync     ensure output has been transferred to  underly‐
4617                        ing hardware and file metadata has been updated
4618                        using the O_DSYNC open flag.
4619              excl      fail if file already exists (it should not).
4620              noatime   do not update the file last access time if  the
4621                        file is read.
4622              sync      ensure  output has been transferred to underly‐
4623                        ing hardware using the O_SYNC open flag.
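
              For example, a possible invocation using a couple of the
              open options listed above (values are illustrative only):

                  stress-ng --touch 2 --touch-opts dsync,noatime -t 30s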
4624
4625       --touch-method [ random | open | creat ]
              select the method by which the file is created: either randomly
              using open(2) or creat(2), just using open(2) with the O_CREAT
              open flag, or just with creat(2).
4629
4630       --tree N
4631              start N workers that exercise tree data structures. The  default
              is to add, find and remove 250,000 64 bit integers in AVL
              (avl), Red-Black (rb), Splay (splay), btree and binary trees.
4634              The  intention  of this stressor is to exercise memory and cache
4635              with the various tree operations.
4636
4637       --tree-ops N
              stop tree stressors after N bogo ops. A bogo op covers adding,
              finding and removing all the items in the tree(s).
4640
4641       --tree-size N
4642              specify  the  size  of the tree, where N is the number of 64 bit
4643              integers to be added into the tree.
4644
4645       --tree-method [ all | avl | binary | btree | rb | splay ]
4646              specify the tree to be used. By default, all the trees are  used
4647              (the 'all' option).
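
              For example, to exercise just the Red-Black tree with a
              smaller tree size (the values are arbitrary
              illustrations):

                  stress-ng --tree 2 --tree-method rb --tree-size 100000 -t 60s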
4648
4649       --tsc N
4650              start N workers that read the Time Stamp Counter (TSC) 256 times
4651              per loop iteration (bogo operation).  This exercises the tsc in‐
4652              struction  for x86, the mftb instruction for ppc64 and the rdcy‐
4653              cle instruction for RISC-V.
4654
4655       --tsc-ops N
4656              stop the tsc workers after N bogo operations are completed.
4657
4658       --tsearch N
4659              start N workers that insert, search and delete 32  bit  integers
4660              on  a  binary tree using tsearch(3), tfind(3) and tdelete(3). By
4661              default, there are 65536 randomized integers used in  the  tree.
4662              This  is a useful method to exercise random access of memory and
4663              processor cache.
4664
4665       --tsearch-ops N
4666              stop the tsearch workers after N bogo tree operations  are  com‐
4667              pleted.
4668
4669       --tsearch-size N
4670              specify  the  size  (number  of 32 bit integers) in the array to
4671              tsearch. Size can be from 1K to 4M.
4672
4673       --tun N
              start N workers that create a network tunnel device, send and
              receive packets over the tunnel using UDP and then destroy it.
              A new random 192.168.*.* IPv4 address is used each time a
              tunnel is created.
4678
4679       --tun-ops N
4680              stop after N iterations of creating/sending/receiving/destroying
4681              a tunnel.
4682
4683       --tun-tap
4684              use network tap device using level 2  frames  (bridging)  rather
4685              than a tun device for level 3 raw packets (tunnelling).
4686
4687       --udp N
4688              start  N  workers  that transmit data using UDP. This involves a
              pair of client/server processes performing rapid connects,
              sends, receives and disconnects on the local host.
4691
4692       --udp-domain D
4693              specify  the domain to use, the default is ipv4. Currently ipv4,
4694              ipv6 and unix are supported.
4695
4696       --udp-if NAME
4697              use network interface NAME. If the interface NAME does  not  ex‐
4698              ist,  is not up or does not support the domain then the loopback
4699              (lo) interface is used as the default.
4700
4701       --udp-gro
4702              enable UDP-GRO (Generic Receive Offload) if supported.
4703
4704       --udp-lite
4705              use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6 do‐
4706              mains).
4707
4708       --udp-ops N
4709              stop udp stress workers after N bogo operations.
4710
4711       --udp-port P
              start at port P. For N udp worker processes, ports P to
              P + N - 1 are used. By default, ports 7000 upwards are used.
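
              For example, a possible IPv6 invocation on a non-default
              port (the values are arbitrary illustrations):

                  stress-ng --udp 2 --udp-domain ipv6 --udp-port 9000 -t 30s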
4714
4715       --udp-flood N
4716              start N workers that attempt to flood the host with UDP  packets
              to random ports. The IP address of the packets is currently not
4718              spoofed.  This  is  only  available  on  systems  that   support
4719              AF_PACKET.
4720
4721       --udp-flood-domain D
4722              specify  the  domain to use, the default is ipv4. Currently ipv4
4723              and ipv6 are supported.
4724
4725       --udp-flood-if NAME
4726              use network interface NAME. If the interface NAME does  not  ex‐
4727              ist,  is not up or does not support the domain then the loopback
4728              (lo) interface is used as the default.
4729
4730       --udp-flood-ops N
4731              stop udp-flood stress workers after N bogo operations.
4732
4733       --unshare N
4734              start N workers that each fork off 32 child processes,  each  of
4735              which  exercises  the  unshare(2)  system call by disassociating
4736              parts of the process execution context. (Linux only).
4737
4738       --unshare-ops N
4739              stop after N bogo unshare operations.
4740
4741       --uprobe N
4742              start N workers that trace the entry to libc  function  getpid()
4743              using  the  Linux uprobe kernel tracing mechanism. This requires
              the CAP_SYS_ADMIN capability and a modern Linux uprobe capable
4745              kernel.
4746
4747       --uprobe-ops N
4748              stop uprobe tracing after N trace events of the function that is
4749              being traced.
4750
4751       -u N, --urandom N
4752              start N workers reading /dev/urandom  (Linux  only).  This  will
4753              load the kernel random number source.
4754
4755       --urandom-ops N
4756              stop urandom stress workers after N urandom bogo read operations
4757              (Linux only).
4758
4759       --userfaultfd N
4760              start N workers that generate  write  page  faults  on  a  small
4761              anonymously  mapped  memory region and handle these faults using
4762              the user space fault handling  via  the  userfaultfd  mechanism.
4763              This  will  generate  a  large quantity of major page faults and
4764              also context switches during the handling of  the  page  faults.
4765              (Linux only).
4766
4767       --userfaultfd-ops N
4768              stop userfaultfd stress workers after N page faults.
4769
4770       --userfaultfd-bytes N
4771              mmap  N  bytes  per userfaultfd worker to page fault on, the de‐
4772              fault is 16MB.  One can specify the size as % of total available
4773              memory or in units of Bytes, KBytes, MBytes and GBytes using the
4774              suffix b, k, m or g.
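
              For example, a possible invocation with a larger mapping
              (the worker count and size are arbitrary illustrations):

                  stress-ng --userfaultfd 2 --userfaultfd-bytes 64m -t 60s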
4775
4776       --usersyscall N
4777              start N workers that exercise the Linux prctl  userspace  system
4778              call  mechanism.  A userspace system call is handled by a SIGSYS
4779              signal handler and  exercised  with  the  system  call  disabled
4780              (ENOSYS)     and    enabled    (via    SIGSYS)    using    prctl
4781              PR_SET_SYSCALL_USER_DISPATCH.
4782
4783       --usersyscall-ops N
4784              stop after N successful userspace syscalls via a  SIGSYS  signal
4785              handler.
4786
4787       --utime N
4788              start  N  workers  updating  file timestamps. This is mainly CPU
4789              bound when the default is used as the  system  flushes  metadata
4790              changes only periodically.
4791
4792       --utime-ops N
4793              stop utime stress workers after N utime bogo operations.
4794
4795       --utime-fsync
4796              force  metadata  changes  on  each  file  timestamp update to be
4797              flushed to disk.  This forces the test to become I/O  bound  and
4798              will result in many dirty metadata writes.
4799
4800       --vdso N
4801              start  N  workers  that  repeatedly call each of the system call
4802              functions in the vDSO (virtual dynamic shared object).  The vDSO
4803              is  a shared library that the kernel maps into the address space
              of all user-space applications to allow fast access to kernel
              data for some system calls without the cost of performing an
              expensive system call.
4807
4808       --vdso-ops N
4809              stop after N vDSO functions calls.
4810
4811       --vdso-func F
4812              Instead of calling all the vDSO functions, just  call  the  vDSO
4813              function  F.  The functions depend on the kernel being used, but
4814              are typically clock_gettime, getcpu, gettimeofday and time.
4815
4816       --vecmath N
4817              start N workers that perform various unsigned integer math oper‐
4818              ations  on  various 128 bit vectors. A mix of vector math opera‐
4819              tions are performed on the following vectors: 16 × 8 bits,  8  ×
4820              16  bits, 4 × 32 bits, 2 × 64 bits. The metrics produced by this
4821              mix depend on the processor architecture and the vector math op‐
4822              timisations produced by the compiler.
4823
4824       --vecmath-ops N
4825              stop after N bogo vector integer math operations.
4826
4827       --vecwide N
4828              start  N  workers  that perform various 8 bit math operations on
4829              vectors of 4, 8, 16, 32, 64, 128, 256, 512, 1024 and 2048 bytes.
4830              With  the  -v option the relative compute performance vs the ex‐
4831              pected compute performance based on total run time is shown  for
4832              the first vecwide worker. The vecwide stressor exercises various
4833              processor vector instruction mixes and how well the compiler can
4834              map the vector operations to the target instruction set.
4835
4836       --vecwide-ops N
4837              stop after N bogo vector operations (2048 iterations of a mix of
4838              vector instruction operations).
4839
4840       --verity N
              start N workers that exercise read-only file based authenticity
4842              protection  using  the  verity  ioctls  FS_IOC_ENABLE_VERITY and
4843              FS_IOC_MEASURE_VERITY.  This requires file systems  with  verity
4844              support  (currently ext4 and f2fs on Linux) with the verity fea‐
              ture enabled. The test attempts to create a small file with
4846              multiple  small extents and enables verity on the file and veri‐
4847              fies it. It also checks to see if the file  has  verity  enabled
4848              with the FS_VERITY_FL bit set on the file flags.
4849
4850       --verity-ops N
4851              stop  the  verity  workers  after  N file create, enable verity,
4852              check verity and unlink cycles.
4853
4854       --vfork N
4855              start N workers continually vforking children  that  immediately
4856              exit.
4857
4858       --vfork-ops N
4859              stop vfork stress workers after N bogo operations.
4860
4861       --vfork-max P
4862              create P processes and then wait for them to exit per iteration.
4863              The default is just 1; higher values will create many  temporary
4864              zombie  processes  that are waiting to be reaped. One can poten‐
4865              tially  fill  up  the  process  table  using  high  values   for
4866              --vfork-max and --vfork.
4867
4868       --vfork-vm
              enable detrimental performance virtual memory advice using
              madvise on all pages of the vforked process. Where possible
              this will try to set every page in the new process using the
              madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and
              MADV_RANDOM flags. Linux only.
4874
4875       --vforkmany N
4876              start  N  workers that spawn off a chain of vfork children until
4877              the process table  fills  up  and/or  vfork  fails.   vfork  can
4878              rapidly  create  child  processes  and the parent process has to
4879              wait until the child dies, so this stressor rapidly fills up the
4880              process table.
4881
4882       --vforkmany-ops N
4883              stop vforkmany stressors after N vforks have been made.
4884
4885       --vforkmany-vm
              enable detrimental performance virtual memory advice using
              madvise on all pages of the vforked process. Where possible
              this will try to set every page in the new process using the
              madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and
              MADV_RANDOM flags. Linux only.
4891
4892       -m N, --vm N
4893              start N workers continuously calling mmap(2)/munmap(2) and writ‐
4894              ing to the allocated memory. Note that this can cause systems to
              trip the kernel OOM killer on Linux systems if there is not
              enough physical memory and swap available.
4897
4898       --vm-bytes N
4899              mmap N bytes per vm worker, the default is 256MB. One can  spec‐
4900              ify  the  size  as  %  of  total available memory or in units of
4901              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4902
4903       --vm-ops N
4904              stop vm workers after N bogo operations.
4905
4906       --vm-hang N
4907              sleep N seconds before unmapping memory,  the  default  is  zero
4908              seconds.  Specifying 0 will do an infinite wait.
4909
4910       --vm-keep
4911              do not continually unmap and map memory, just keep on re-writing
4912              to it.
4913
4914       --vm-locked
4915              Lock the pages of the  mapped  region  into  memory  using  mmap
4916              MAP_LOCKED  (since  Linux  2.5.37).   This is similar to locking
4917              memory as described in mlock(2).
4918
4919       --vm-madvise advice
4920              Specify the madvise 'advice' option used on  the  memory  mapped
              regions used in the vm stressor. Non-linux systems will only
              have the 'normal' madvise advice; linux systems support 'dont‐
              need', 'hugepage', 'mergeable', 'nohugepage', 'normal', 'ran‐
              dom', 'sequential', 'unmergeable' and 'willneed' advice. If this
4925              option  is  not  used then the default is to pick random madvise
4926              advice for each mmap call. See madvise(2) for more details.
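
              For example, a possible invocation that fixes the madvise
              advice rather than using a random choice (the values are
              arbitrary illustrations):

                  stress-ng --vm 4 --vm-bytes 25% --vm-madvise willneed -t 5m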
4927
4928       --vm-method m
4929              specify a vm stress method. By default, all the  stress  methods
4930              are  exercised  sequentially,  however  one can specify just one
              method to be used if required. Each of the vm workers has 3
              phases:

              1. Initialised. The anonymously mapped memory region is set to
              a known pattern.
4936
4937              2. Exercised. Memory is modified in  a  known  predictable  way.
4938              Some  vm  workers  alter  memory sequentially, some use small or
4939              large strides to step along memory.
4940
4941              3. Checked. The modified memory is checked to see if it  matches
4942              the expected result.
4943
              The vm methods containing 'prime' in their name have a stride
              of the largest prime less than 2^64, allowing them to
              thoroughly step through memory and touch all locations just
              once while avoiding touching neighbouring memory cells. This
              strategy exercises the cache and page non-locality.
4949
              Since the memory being exercised is virtually mapped, there
              is no guarantee of touching page addresses in any particular
              physical order. These workers should not be used to test that
              all the system's memory is working correctly; use tools such
              as memtest86 instead.
4955
4956              The vm stress methods are intended to exercise memory in ways to
4957              possibly find memory issues and to try to force thermal errors.
4958
4959              Available vm stress methods are described as follows:
4960
4961              Method       Description
4962              all          iterate over  all  the  vm  stress  methods  as
4963                           listed below.
4964              cache-lines  work  through  memory  in  64  byte cache sized
4965                           steps writing a single  byte  per  cache  line.
4966                           Once  the write is complete, the memory is read
4967                           to verify the values are written correctly.
4968              cache-stripe work through memory  in  64  byte  cache  sized
4969                           chunks,  writing  in ascending address order on
4970                           even offsets and descending  address  order  on
4971                           odd offsets.
4972              flip         sequentially  work through memory 8 times, each
4973                           time just one bit in memory flipped (inverted).
4974                           This  will  effectively  invert  each byte in 8
4975                           passes.
4976              fwdrev       write to even addressed bytes in a forward  di‐
4977                           rection  and odd addressed bytes in reverse di‐
                           rection. The contents are sanity checked once
4979                           all the addresses have been written to.
4980              galpat-0     galloping  pattern zeros. This sets all bits to
4981                           0 and flips just 1 in 4096 bits to 1.  It  then
4982                           checks to see if the 1s are pulled down to 0 by
                           their neighbours or if the neighbours have been
4984                           pulled up to 1.
4985              galpat-1     galloping pattern ones. This sets all bits to 1
4986                           and flips just 1 in 4096 bits  to  0.  It  then
4987                           checks  to  see if the 0s are pulled up to 1 by
                           their neighbours or if the neighbours have been
4989                           pulled down to 0.
4990              gray         fill  the  memory  with  sequential  gray codes
4991                           (these only change 1 bit at a time between  ad‐
4992                           jacent  bytes)  and  then check if they are set
4993                           correctly.
4994              grayflip     fill memory with adjacent bytes  of  gray  code
4995                           and  inverted gray code pairs to change as many
4996                           bits at a time between adjacent bytes and check
4997                           if these are set correctly.
4998              incdec       work  sequentially  through  memory  twice, the
4999                           first pass increments each byte by  a  specific
5000                           value  and the second pass decrements each byte
5001                           back to the original start  value.  The  incre‐
5002                           ment/decrement value changes on each invocation
5003                           of the stressor.
5009              inc-nybble   initialise memory to a set value (that  changes
5010                           on  each  invocation  of the stressor) and then
5011                           sequentially work through each byte  increment‐
5012                           ing  the  bottom 4 bits by 1 and the top 4 bits
5013                           by 15.
5014              rand-set     sequentially work  through  memory  in  64  bit
5015                           chunks setting bytes in the chunk to the same 8
5016                           bit random value.  The random value changes  on
5017                           each  chunk.   Check  that  the values have not
5018                           changed.
5019              rand-sum     sequentially set all memory  to  random  values
5020                           and  then  summate the number of bits that have
5021                           changed from the original set values.
5022              read64       sequentially read memory  using  32  x  64  bit
5023                           reads  per  bogo loop. Each loop equates to one
5024                           bogo  operation.   This  exercises  raw  memory
5025                           reads.
5026              ror          fill  memory with a random pattern and then se‐
5027                           quentially rotate 64 bits of  memory  right  by
5028                           one   bit,   then   check  the  final  load/ro‐
5029                           tate/stored values.
5030              swap         fill memory in 64 byte chunks with random  pat‐
                           terns. Then swap each 64 byte chunk with a randomly
5032                           chosen chunk. Finally, reverse the swap to  put
5033                           the  chunks  back  to  their original place and
5034                           check if the data is  correct.  This  exercises
5035                           adjacent and random memory load/stores.
              move-inv     sequentially fill memory 64 bits at a
5037                           time with random values, and then check if  the
5038                           memory  is  set  correctly.  Next, sequentially
5039                           invert each 64 bit pattern and again  check  if
5040                           the memory is set as expected.
5041              modulo-x     fill  memory over 23 iterations. Each iteration
5042                           starts one byte further along from the start of
5043                           the  memory and steps along in 23 byte strides.
5044                           In each stride, the first byte is set to a ran‐
5045                           dom  pattern and all other bytes are set to the
                           inverse. Then it checks to see if the first byte
5047                           contains  the expected random pattern. This ex‐
5048                           ercises cache store/reads as well as seeing  if
5049                           neighbouring cells influence each other.
5050              mscan        fill  each  bit in each byte with 1s then check
5051                           these are set, fill each bit in each byte  with
5052                           0s and check these are clear.
5053              prime-0      iterate  8  times by stepping through memory in
                           very large prime strides clearing just one bit
                           at a time in every byte. Then check to see if
5056                           all bits are set to zero.
5057              prime-1      iterate 8 times by stepping through  memory  in
                           very large prime strides setting just one bit at
5059                           a time in every byte. Then check to see if  all
5060                           bits are set to one.
5061              prime-gray-0 first  step  through memory in very large prime
                           strides clearing just one bit (based on a gray
5063                           code)  in  every  byte.  Next,  repeat this but
5064                           clear the other 7 bits. Then check  to  see  if
5065                           all bits are set to zero.
5066              prime-gray-1 first  step  through memory in very large prime
                           strides setting just one bit (based on a gray
5068                           code)  in every byte. Next, repeat this but set
5069                           the other 7 bits. Then check to see if all bits
5070                           are set to one.
5071              rowhammer    try   to  force  memory  corruption  using  the
5072                           rowhammer memory stressor. This fetches two  32
5073                           bit  integers  from  memory  and forces a cache
5074                           flush on the two addresses multiple times. This
5075                           has  been  known  to force bit flipping on some
5076                           hardware, especially with lower frequency  mem‐
5077                           ory refresh cycles.
5078              walk-0d      for each byte in memory, walk through each data
5079                           line setting them to low (and  the  others  are
5080                           set  high)  and check that the written value is
5081                           as expected. This checks if any data lines  are
5082                           stuck.
5083              walk-1d      for each byte in memory, walk through each data
5084                           line setting them to high (and the  others  are
5085                           set low) and check that the written value is as
5086                           expected. This checks if  any  data  lines  are
5087                           stuck.
5088              walk-0a      in  the  given  memory  mapping, work through a
5089                           range of  specially  chosen  addresses  working
5090                           through  address  lines  to  see if any address
5091                           lines are stuck low. This works best with phys‐
5092                           ical  memory  addressing,  however,  exercising
5093                           these virtual addresses has some value too.
5094              walk-1a      in the given memory  mapping,  work  through  a
5095                           range  of  specially  chosen  addresses working
5096                           through address lines to  see  if  any  address
5097                           lines  are  stuck  high.  This  works best with
5098                           physical memory addressing, however, exercising
5099                           these virtual addresses has some value too.
5104              write64      sequentially  write to memory using 32 x 64 bit
5105                           writes per bogo loop. Each loop equates to  one
5106                           bogo  operation.   This  exercises  raw  memory
5107                           writes.   Note  that  memory  writes  are   not
5108                           checked at the end of each test iteration.
5109              write64nt    sequentially  write to memory using 32 x 64 bit
5110                           non-temporal writes per bogo loop.   Each  loop
5111                           equates  to one bogo operation.  This exercises
5112                           cacheless raw memory writes and is only  avail‐
5113                           able on x86 sse2 capable systems built with gcc
5114                           and clang compilers.  Note that  memory  writes
5115                           are  not checked at the end of each test itera‐
5116                           tion.
5117              write1024v   sequentially write to memory using 1 x 1024 bit
5118                           vector  write  per bogo loop (only available if
5119                           the compiler supports vector types).  Each loop
5120                           equates  to one bogo operation.  This exercises
5121                           raw memory writes.  Note that memory writes are
5122                           not checked at the end of each test iteration.
5123              zero-one     set  all  memory bits to zero and then check if
5124                           any bits are not zero. Next, set all the memory
5125                           bits to one and check if any bits are not one.
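
              For example, to exercise just one of the vm stress
              methods listed above (the values are arbitrary
              illustrations):

                  stress-ng --vm 2 --vm-method flip --vm-bytes 1g -t 10m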
5126
5127       --vm-populate
5128              populate  (prefault)  page  tables for the memory mappings; this
5129              can stress swapping. Only  available  on  systems  that  support
5130              MAP_POPULATE (since Linux 2.5.46).
5131
5132       --vm-addr N
5133              start  N  workers  that exercise virtual memory addressing using
5134              various methods to walk through a memory mapped  address  range.
5135              This will exercise mapped private addresses from 8MB to 64MB per
5136              worker and try to generate cache and TLB inefficient  addressing
5137              patterns. Each method will set the memory to a random pattern in
5138              a write phase and then sanity check this in a read phase.
5139
5140       --vm-addr-ops N
              stop the vm-addr workers after N bogo addressing passes.
5142
5143       --vm-addr-method M
5144              specify a vm address stress method. By default, all  the  stress
5145              methods are exercised sequentially, however one can specify just
5146              one method to be used if required.
5147
5148              Available vm address stress methods are described as follows:
5149
5150              Method   Description
5151              all      iterate over  all  the  vm  stress  methods  as
5152                       listed below.
5153              pwr2     work  through memory addresses in steps of pow‐
5154                       ers of two.
              pwr2inv  like pwr2, but with all the relevant address
                       bits inverted.
5157              gray     work  through  memory with gray coded addresses
5158                       so that each change of address just  changes  1
5159                       bit compared to the previous address.
              grayinv  like gray, but with all the relevant address
5161                       bits inverted, hence all bits change apart from
5162                       1 in the address range.
5163              rev      work through the address range with the bits in
5164                       the address range reversed.
5165              revinv   like rev, but with  all  the  relevant  address
5166                       bits inverted.
5167              inc      work through the address range forwards sequen‐
5168                       tially, byte by byte.
5169              incinv   like inc, but with  all  the  relevant  address
5170                       bits inverted.
5171              dec      work  through  the  address range backwards se‐
5172                       quentially, byte by byte.
5173              decinv   like dec, but with  all  the  relevant  address
5174                       bits inverted.
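
              For example, to exercise just the gray coded addressing
              method (an illustrative invocation, values are
              arbitrary):

                  stress-ng --vm-addr 2 --vm-addr-method gray -t 60s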
5175
5176       --vm-rw N
5177              start  N workers that transfer memory to/from a parent/child us‐
              ing process_vm_writev(2) and process_vm_readv(2). This fea‐
              ture is only supported on Linux. Memory transfers are only ver‐
5180              ified if the --verify option is enabled.
5181
5182       --vm-rw-ops N
5183              stop vm-rw workers after N memory read/writes.
5184
5185       --vm-rw-bytes N
5186              mmap N bytes per vm-rw worker, the  default  is  16MB.  One  can
5187              specify  the  size as % of total available memory or in units of
5188              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
5189
5190       --vm-segv N
5191              start N workers that create a child process that unmaps its  ad‐
5192              dress space causing a SIGSEGV on return from the unmap.
5193
5194       --vm-segv-ops N
5195              stop after N bogo vm-segv SIGSEGV faults.
5196
5197       --vm-splice N
5198              move  data  from  memory to /dev/null through a pipe without any
5199              copying between kernel address space and user address space  us‐
5200              ing  vmsplice(2)  and  splice(2).   This  is  only available for
5201              Linux.
5202
5203       --vm-splice-ops N
5204              stop after N bogo vm-splice operations.
5205
5206       --vm-splice-bytes N
5207              transfer N bytes per vmsplice call, the default is 64K. One  can
5208              specify  the  size as % of total available memory or in units of
5209              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
5210
5211       --wait N
5212              start N workers that spawn off two  children;  one  spins  in  a
5213              pause(2)  loop,  the  other  continually stops and continues the
5214              first. The controlling process waits on the first  child  to  be
5215              resumed   by  the  delivery  of  SIGCONT  using  waitpid(2)  and
5216              waitid(2).
5217
5218       --wait-ops N
5219              stop after N bogo wait operations.
5220
5221       --watchdog N
              start N workers that exercise the /dev/watchdog watchdog in‐
              terface by opening it, performing various watchdog specific
              ioctl(2) commands on the device and closing it. Before closing,
              the special watchdog magic close message is written to the de‐
              vice to try to force it to never trip a watchdog reboot after
              the stressor has been run. Note that this stressor needs to be
5228              run as root with the --pathological option and is only available
5229              on Linux.
5230
5231       --watchdog-ops N
5232              stop after N bogo operations on the watchdog device.
5233
5234       --wcs N
5235              start N workers that exercise various libc wide character string
5236              functions on random strings.
5237
5238       --wcs-method wcsfunc
5239              select a specific libc wide character string function to stress.
5240              Available  string  functions to stress are: all, wcscasecmp, wc‐
5241              scat, wcschr, wcscoll, wcscmp, wcscpy, wcslen, wcsncasecmp,  wc‐
5242              sncat,  wcsncmp,  wcsrchr  and wcsxfrm.  The 'all' method is the
5243              default and will exercise all the string methods.
5244
5245       --wcs-ops N
5246              stop after N bogo wide character string operations.
5247
5248       --x86syscall N
5249              start N workers that repeatedly exercise the x86-64 syscall  in‐
              struction to call the getcpu(2), gettimeofday(2) and time(2)
              system calls using the Linux vsyscall handler. Only for Linux.
5252
5253       --x86syscall-ops N
5254              stop after N x86syscall system calls.
5255
5256       --x86syscall-func F
5257              Instead of exercising the 3 syscall system calls, just call  the
5258              syscall  function  F. The function F must be one of getcpu, get‐
5259              timeofday and time.
5260
5261       --xattr N
5262              start N workers that create, update and delete  batches  of  ex‐
5263              tended attributes on a file.
5264
5265       --xattr-ops N
5266              stop after N bogo extended attribute operations.
5267
5268       -y N, --yield N
5269              start  N workers that call sched_yield(2). This stressor ensures
              that at least 2 child processes per CPU exercise sched_yield(2)
5271              no  matter  how many workers are specified, thus always ensuring
5272              rapid context switching.
5273
5274       --yield-ops N
5275              stop yield stress workers after  N  sched_yield(2)  bogo  opera‐
5276              tions.
5277
5278       --zero N
5279              start N workers reading /dev/zero.
5280
5281       --zero-ops N
5282              stop zero stress workers after N /dev/zero bogo read operations.
5283
5284       --zlib N
5285              start  N workers compressing and decompressing random data using
5286              zlib. Each worker has two processes, one that compresses  random
5287              data and pipes it to another process that decompresses the data.
5288              This stressor exercises CPU, cache and memory.
5289
5290       --zlib-ops N
5291              stop after N bogo compression operations, each bogo  compression
5292              operation  is a compression of 64K of random data at the highest
5293              compression level.
5294
5295       --zlib-level L
5296              specify the compression level (0..9), where 0 = no  compression,
5297              1 = fastest compression and 9 = best compression.
5298
5299       --zlib-method method
5300              specify the type of random data to send to the zlib library.  By
5301              default, the data stream is created from a random  selection  of
5302              the  different data generation processes.  However one can spec‐
5303              ify just one method to be used if required.  Available zlib data
5304              generation methods are described as follows:
5305
5306              Method      Description
5307              00ff        randomly distributed 0x00 and 0xFF values.
5308              ascii01     randomly distributed ASCII 0 and 1 characters.
5309              asciidigits randomly  distributed ASCII digits in the range
                          of 0 to 9.
5311              bcd         packed binary coded decimals, 0..99 packed into
5312                          2 4-bit nybbles.
5313              binary      32 bit random numbers.
5314              brown       8  bit brown noise (Brownian motion/Random Walk
5315                          noise).
5316              double      double precision floating  point  numbers  from
5317                          sin(θ).
5318              fixed       data stream is repeated 0x04030201.
5319              gcr         random values as 4 x 4 bit data turned into 4 x
5320                          5 bit group  coded  recording  (GCR)  patterns.
5321                          Each  5  bit  GCR  value starts or ends with at
5322                          most one zero  bit  so  that  concatenated  GCR
5323                          codes have no more than two zero bits in a row.
5324              gray        16  bit gray codes generated from an increment‐
5325                          ing counter.
5326              latin       Random latin sentences from a sample  of  Lorem
5327                          Ipsum text.
5328              lehmer      Fast  random  values  generated  using Lehmer's
5329                          generator using a 128 bit multiply.
5330              lfsr32      Values generated from a 32  bit  Galois  linear
5331                          feedback  shift  register  using the polynomial
5332                          x↑32 + x↑31 + x↑29 + x + 1.  This  generates  a
5333                          ring  of   2↑32  -  1 unique values (all 32 bit
5334                          values except for 0).
              logmap      Values generated from a logistic map of the
                          equation Χn+1 = r × Χn × (1 - Χn) where r ≳
5337                          3.56994567 to produce chaotic data. The  values
5338                          are  scaled  by a large arbitrary value and the
5339                          lower 8 bits of this value are compressed.
5340              lrand48     Uniformly distributed pseudo-random 32 bit val‐
5341                          ues generated from lrand48(3).
5342              morse       Morse  code  generated  from  random latin sen‐
5343                          tences from a sample of Lorem Ipsum text.
5344              nybble      randomly distributed bytes in the range of 0x00
5345                          to 0x0f.
5346              objcode     object  code selected from a random start point
5347                          in the stress-ng text segment.
5348              parity      7 bit binary data with 1 parity bit.
5349              pink        pink noise in the range 0..255 generated  using
5350                          the Gardner method with the McCartney selection
5351                          tree optimization.  Pink  noise  is  where  the
5352                          power  spectral  density  is  inversely propor‐
5353                          tional to the frequency of the signal and hence
5354                          is slightly compressible.
5355              random      segments of the data stream are created by ran‐
5356                          domly calling  the  different  data  generation
5357                          methods.
5358              rarely1     data that has a single 1 in every 32 bits, ran‐
5359                          domly located.
5360              rarely0     data that has a single 0 in every 32 bits, ran‐
5361                          domly located.
5362              rdrand      x86-64  only, generate random data using rdrand
5363                          instruction.
5364              ror32       generate a 32 bit random value, rotate it right
5365                          0  to  7 places and store the rotated value for
5366                          each of the rotations.
5367              text        random ASCII text.
5368              utf8        random 8 bit data encoded to UTF-8.
5369              zero        all zeros, compresses very easily.
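
              For example, to compress just one type of generated data
              at the highest compression level (the values are
              arbitrary illustrations):

                  stress-ng --zlib 2 --zlib-method text --zlib-level 9 -t 60s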
5370
5371       --zlib-window-bits W
5372              specify the window bits used to specify the history buffer size.
5373              The  value  is specified as the base two logarithm of the buffer
5374              size (e.g. value 9 is 2^9 = 512 bytes).  Default is 15.
5375
5376              Values:
5377              -8-(-15): raw deflate format
5378                  8-15: zlib format
5379                 24-31: gzip format
5380                 40-47: inflate auto format detection using zlib deflate format
5381
       --zlib-mem-level L
              specify the reserved compression state memory for zlib.
              Default is 8.
5384
5385              Values:
5386              1 = minimum memory usage
5387              9 = maximum memory usage
5388
5389       --zlib-strategy S
5390              specify the strategy to use when deflating data. This is
5391              used to tune the compression algorithm.  Default is 0.
5392
5393              Values:
5394              0: used for normal data (Z_DEFAULT_STRATEGY)
5395              1: for data generated by a filter or predictor (Z_FILTERED)
5396              2: forces Huffman-only encoding (Z_HUFFMAN_ONLY)
5397              3: limits match distances to one, i.e. run-length encoding (Z_RLE)
5398              4: prevents the use of dynamic Huffman codes (Z_FIXED)
5399
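              The window bits, memory level and strategy options above
              correspond to the windowBits, memLevel and strategy arguments of
              zlib's deflateInit2() function. The following minimal C sketch
              of that mapping is for illustration only and is not taken from
              the stress-ng sources (build with -lz):

                  #include <stdio.h>
                  #include <string.h>
                  #include <zlib.h>

                  int main(void)
                  {
                      z_stream strm;

                      memset(&strm, 0, sizeof(strm));
                      /* the defaults described above: 15 window bits (32K
                       * history), memory level 8, Z_DEFAULT_STRATEGY */
                      if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                                       15, 8, Z_DEFAULT_STRATEGY) != Z_OK) {
                          fprintf(stderr, "deflateInit2 failed\n");
                          return 1;
                      }
                      deflateEnd(&strm);
                      return 0;
                  }
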
5400       --zlib-stream-bytes S
5401              specify the number of bytes to deflate before deflate
5402              finishes the block and returns with Z_STREAM_END. One can
5403              specify the size in units of Bytes, KBytes, MBytes and
5404              GBytes using the suffix b, k, m or g.  Default is 0, which
5405              creates an endless stream until the stressor ends.
5406
5407              Values:
5408              0: creates an endless deflate stream until the stressor stops
5409              n: creates a stream of n bytes over and over again.
5410                 Each block will be closed with Z_STREAM_END.
5411
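              To illustrate what closing a block means (an assumption-based
              sketch, not stress-ng's actual code): after the chosen number of
              input bytes, deflate() is driven with Z_FINISH until it reports
              Z_STREAM_END, then the stream is reset and the next block begins
              (build with -lz):

                  #include <string.h>
                  #include <zlib.h>

                  int main(void)
                  {
                      z_stream strm;
                      unsigned char in[1024], out[4096];
                      int ret;

                      memset(&strm, 0, sizeof(strm));
                      memset(in, 'x', sizeof(in));
                      if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
                          return 1;

                      for (int block = 0; block < 3; block++) {
                          strm.next_in  = in;
                          strm.avail_in = sizeof(in);   /* bytes for this block */
                          do {
                              strm.next_out  = out;
                              strm.avail_out = sizeof(out);
                              ret = deflate(&strm, Z_FINISH);
                          } while (ret == Z_OK);
                          if (ret != Z_STREAM_END)      /* block closed here */
                              return 1;
                          deflateReset(&strm);          /* start the next block */
                      }
                      deflateEnd(&strm);
                      return 0;
                  }
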
5412
5413       --zombie N
5414              start N workers that create zombie processes. This rapidly
5415              tries to create a default of 8192 child processes that
5416              immediately die and wait in a zombie state until they are
5417              reaped. Once the maximum number of processes is reached (or
5418              fork fails because the maximum allowed number of children
5419              has been reached), the oldest child is reaped and a new one
5420              is created in its place, first-in first-out; this repeats.
5421
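              A zombie is simply a child that has exited but has not yet been
              reaped by its parent. The following minimal C sketch (an
              illustration only, not the stressor's actual code) creates a
              handful of zombies and then reaps them oldest-first, mirroring
              the first-in first-out behaviour described above:

                  #include <sys/types.h>
                  #include <sys/wait.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      pid_t pids[8];

                      for (int i = 0; i < 8; i++) {
                          pids[i] = fork();
                          if (pids[i] == 0)
                              _exit(0);   /* child dies at once, not yet reaped */
                      }
                      sleep(10);          /* children show as <defunct> in ps(1) */

                      /* reap the oldest child first, first-in first-out */
                      for (int i = 0; i < 8; i++)
                          waitpid(pids[i], NULL, 0);
                      return 0;
                  }
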
5422       --zombie-ops N
5423              stop zombie stress workers after N bogo zombie operations.
5424
5425       --zombie-max N
5426              try to create as many as N zombie processes.  This  may  not  be
5427              reached if the system limit is less than N.
5428

EXAMPLES

5430       stress-ng --vm 8 --vm-bytes 80% -t 1h
5431
5432              run  8  virtual  memory  stressors  that combined use 80% of the
5433              available memory for 1 hour. Thus each stressor uses 10% of  the
5434              available memory.
5435
5436       stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
5437
5438              runs  for  60 seconds with 4 cpu stressors, 2 io stressors and 1
5439              vm stressor using 1GB of virtual memory.
5440
5441       stress-ng --iomix 2 --iomix-bytes 10% -t 10m
5442
5443              runs 2 instances of the mixed I/O stressors using a total of 10%
5444              of the available file system space for 10 minutes. Each stressor
5445              will use 5% of the available file system space.
5446
5447       stress-ng  --cyclic  1  --cyclic-dist  2500  --cyclic-method   clock_ns
5448       --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
5449
5450              measures  real  time  scheduling  latencies  created  by the hdd
5451              stressor. This uses the high resolution nanosecond clock to mea‐
5452              sure  latencies  during sleeps of 10,000 nanoseconds. At the end
5453              of 1 minute of stressing, the latency distribution with 2500  ns
5454              intervals  will  be  displayed.  NOTE: this must be run with the
5455              CAP_SYS_NICE capability to enable the real  time  scheduling  to
5456              get accurate measurements.
5457
5458       stress-ng --cpu 8 --cpu-ops 800000
5459
5460              runs 8 cpu stressors and stops after 800000 bogo operations.
5461
5462       stress-ng --sequential 2 --timeout 2m --metrics
5463
5464              run  2  simultaneous instances of all the stressors sequentially
5465              one by one, each for 2 minutes and  summarise  with  performance
5466              metrics at the end.
5467
5468       stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
5469
5470              run  4  FFT  cpu stressors, stop after 10000 bogo operations and
5471              produce a summary just for the FFT results.
5472
5473       stress-ng --cpu -1 --cpu-method all -t 1h --cpu-load 90
5474
5475              run cpu stressors on all online CPUs  working  through  all  the
5476              available CPU stressors for 1 hour, loading the CPUs at 90% load
5477              capacity.
5478
5479       stress-ng --cpu 0 --cpu-method all -t 20m
5480
5481              run cpu stressors on all configured CPUs working through all the
5482              available CPU stressors for 20 minutes.
5483
5484       stress-ng --all 4 --timeout 5m
5485
5486              run 4 instances of all the stressors for 5 minutes.
5487
5488       stress-ng --random 64
5489
5490              run 64 stressors that are randomly chosen from all the available
5491              stressors.
5492
5493       stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
5494
5495              run 64 instances of all the different cpu stressors  and  verify
5496              that the computations are correct for 10 minutes with a bogo op‐
5497              erations summary at the end.
5498
5499       stress-ng --sequential -1 -t 10m
5500
5501              run all the stressors one by one for 10 minutes, with the number
5502              of  instances  of  each  stressor  matching the number of online
5503              CPUs.
5504
5505       stress-ng --sequential 8 --class io -t 5m --times
5506
5507              run all the stressors in the io class one by one for  5  minutes
5508              each, with 8 instances of each stressor running concurrently and
5509              show overall time utilisation statistics at the end of the run.
5510
5511       stress-ng --all -1 --maximize --aggressive
5512
5513              run all the stressors (1 instance of each per online CPU) simul‐
5514              taneously,  maximize  the  settings  (memory sizes, file alloca‐
5515              tions, etc.) and select the most demanding/aggressive options.
5516
5517       stress-ng --random 32 -x numa,hdd,key
5518
5519              run 32 randomly selected stressors and exclude the numa, hdd
5520              and key stressors.
5521
5522       stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
5523
5524              run 4 instances of the VM stressors one after another,  ex‐
5525              cluding the bigheap, brk and stack stressors.
5526
5527       stress-ng --taskset 0,2-3 --cpu 3
5528
5529              run 3 instances of the CPU stressor and pin them to  CPUs  0,  2
5530              and 3.
5531

EXIT STATUS

5533         Status     Description
5534           0        Success.
5535           1        Error; incorrect user options or a fatal resource issue in
5536                    the stress-ng stressor harness (for example, out  of  mem‐
5537                    ory).
5538           2        One or more stressors failed.
5539           3        One or more stressors failed to initialise because of lack
5540                    of resources, for example ENOMEM (no memory),  ENOSPC  (no
5541                    space on file system) or a missing or unimplemented system
5542                    call.
5543           4        One or more stressors were not implemented on  a  specific
5544                    architecture or operating system.
5545           5        A stressor has been killed by an unexpected signal.
5546           6        A stressor exited unexpectedly via exit(2) and timing
5547                    metrics could not be gathered.
5548           7        The bogo ops metrics may be untrustworthy. This is most
5549                    likely to occur when a stress test is terminated during
5550                    the update of a bogo-ops counter, such as when it has
5551                    been OOM killed. A less likely reason is that the
5552                    counter ready indicator has been corrupted.
5553
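       In scripts or test harnesses the exit status can be checked
       programmatically. A minimal C sketch (the command line used here is
       only an example) using system(3) and the wait(2) status macros:

           #include <stdio.h>
           #include <stdlib.h>
           #include <sys/wait.h>

           int main(void)
           {
               int status = system("stress-ng --cpu 2 --timeout 10s");

               if (status == -1 || !WIFEXITED(status))
                   return 1;
               switch (WEXITSTATUS(status)) {
               case 0:  printf("success\n"); break;
               case 2:  printf("one or more stressors failed\n"); break;
               case 3:  printf("stressor(s) failed to initialise\n"); break;
               default: printf("exit code %d, see the table above\n",
                               WEXITSTATUS(status)); break;
               }
               return 0;
           }
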

BUGS

5555       File bug reports at:
5556         https://github.com/ColinIanKing/stress-ng/issues
5557

SEE ALSO

5559       cpuburn(1), perf(1), stress(1), taskset(1)
5560

AUTHOR

5562       stress-ng was written by Colin Ian King <colin.i.king@gmail.com> and is
5563       a  clean  room  re-implementation  and extension of the original stress
5564       tool by Amos  Waterland.  Thanks  also  for  contributions  from  Abdul
5565       Haleem, Aboorva Devarajan, Adrian Ratiu, André Wild, Aleksandar N.
5566       Kostadinov, Alexander Kanavin, Baruch Siach,  Carlos  Santo,  Christian
5567       Ehrhardt,   Chunyu  Hu,  Danilo  Krummrich,  David  Turner,  Dominik  B
5568       Czarnota, Dorinda  Bassey,  Fabien  Malfoy,  Fabrice  Fontaine,  Helmut
5569       Grohne,  James  Hunt,  James Wang, Jianshen Liu, Jim Rowan, John Kacur,
5570       Joseph DeVincentis, Jules Maselbas, Khalid  Elmously,  Khem  Raj,  Luca
5571       Pizzamiglio,  Luis  Henriques,  Manoj  Iyer,  Matthew Tippett, Mauricio
5572       Faria de Oliveira, Maxime Chevallier, Mike Koreneff, Piyush Goyal, Ralf
5573       Ramsauer, Rob Colclaser, Thadeu Lima de Souza Cascardo, Thia Wyrod, Tim
5574       Gardner, Tim Orling, Tommi Rantala, Witold Baryluk, Zhiyi Sun and  oth‐
5575       ers.
5576

NOTES

5578       Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
5579       all the stressor processes and ensures temporary files and shared  mem‐
5580       ory segments are removed cleanly.
5581
5582       Sending  a  SIGUSR2 to stress-ng will dump out the current load average
5583       and memory statistics.
5584
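       For example, a small helper (hypothetical; the name and the convention
       of passing the stress-ng PID as an argument are assumptions) could
       request these statistics by sending SIGUSR2:

           #include <signal.h>
           #include <stdlib.h>
           #include <sys/types.h>

           /* usage: sendstats <pid-of-stress-ng> */
           int main(int argc, char **argv)
           {
               if (argc != 2)
                   return 1;
               /* ask the running stress-ng to dump load and memory stats */
               return kill((pid_t)atol(argv[1]), SIGUSR2) ? 1 : 0;
           }
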
5585       Note that the stress-ng cpu, io, vm and hdd tests are different  imple‐
5586       mentations of the original stress tests and hence may produce different
5587       stress characteristics.  stress-ng does  not  support  any  GPU  stress
5588       tests.
5589
5590       The  bogo  operations  metrics may change with each release  because of
5591       bug fixes to the code, new features, compiler optimisations or  changes
5592       in system call performance.
5593
5595       Copyright  ©  2013-2021  Canonical Ltd, Copyright © 2021-2022 Colin Ian
5596       King.
5597       This is free software; see the source for copying conditions.  There is
5598       NO  warranty;  not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
5599       PURPOSE.
5600
5601
5602
5603                                  5 May 2022                      STRESS-NG(1)