STRESS-NG(1)               General Commands Manual              STRESS-NG(1)

NAME

       stress-ng - a tool to load and stress a computer system

SYNOPSIS

       stress-ng [OPTION [ARG]] ...

DESCRIPTION

       stress-ng will stress test a computer system in various selectable
       ways. It was designed to exercise various physical subsystems of a
       computer as well as the various operating system kernel interfaces.
       stress-ng also has a wide range of CPU specific stress tests that
       exercise floating point, integer, bit manipulation and control flow.

       stress-ng was originally intended to make a machine work hard and
       trip hardware issues such as thermal overruns as well as operating
       system bugs that only occur when a system is being thrashed hard.
       Use stress-ng with caution as some of the tests can make a system
       run hot on poorly designed hardware and also can cause excessive
       system thrashing which may be difficult to stop.

       stress-ng can also measure test throughput rates; this can be useful
       to observe performance changes across different operating system
       releases or types of hardware. However, it has never been intended
       to be used as a precise benchmark test suite, so do NOT use it in
       this manner.

       Running stress-ng with root privileges will adjust out of memory
       settings on Linux systems to make the stressors unkillable in low
       memory situations, so use this judiciously. With the appropriate
       privilege, stress-ng can allow the ionice class and ionice levels to
       be adjusted; again, this should be used with care.

       One can specify the number of processes to invoke per type of stress
       test; specifying a zero value will select the number of processors
       available as defined by sysconf(_SC_NPROCESSORS_CONF), or, if that
       cannot be determined, the number of online CPUs. If the value is
       less than zero then the number of online CPUs is used.
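
       For example, the following invocation (values are illustrative) runs
       four CPU stressors for one minute:

              stress-ng --cpu 4 --timeout 60s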

OPTIONS

45       General stress-ng control options:
46
47       --abort
48              this  option  will  force all running stressors to abort (termi‐
49              nate) if any other stressor terminates prematurely because of  a
50              failure.
51
52       --aggressive
53              enables more file, cache and memory aggressive options. This may
54              slow tests down, increase latencies and  reduce  the  number  of
55              bogo  ops as well as changing the balance of user time vs system
56              time used depending on the type of stressor being used.
57
58       -a N, --all N, --parallel N
59              start N instances of all stressors in parallel.  If  N  is  less
60              than zero, then the number of CPUs online is used for the number
61              of instances.  If N is zero, then the number of configured  CPUs
62              in the system is used.
63
64       -b N, --backoff N
65              wait  N  microseconds  between  the  start of each stress worker
66              process. This allows one to ramp up the stress tests over time.
67
68       --class name
69              specify the class of stressors to run. Stressors are  classified
70              into  one  or more of the following classes: cpu, cpu-cache, de‐
71              vice, gpu, io, interrupt, filesystem, memory, network, os, pipe,
72              scheduler  and vm.  Some stressors fall into just one class. For
73              example the 'get' stressor is just  in  the  'os'  class.  Other
74              stressors  fall  into  more  than  one  class,  for example, the
75              'lsearch' stressor falls into the 'cpu', 'cpu-cache'  and  'mem‐
76              ory'  classes as it exercises all these three.  Selecting a spe‐
77              cific class will run all the stressors that fall into that class
78              only when run with the --sequential option.
79
80              Specifying  a  name  followed  by  a  question mark (for example
81              --class vm?) will print out all the stressors in  that  specific
82              class.
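
              For example, the following (values are illustrative) runs
              each of the stressors in the vm class one at a time for 30
              seconds each:

              stress-ng --class vm --sequential 1 --timeout 30s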
83
84       -n, --dry-run
85              parse options, but do not run stress tests. A no-op.
86
87       --ftrace
88              enable kernel function call tracing (Linux only).  This will use
89              the kernel debugfs ftrace mechanism to  record  all  the  kernel
90              functions  used  on the system while stress-ng is running.  This
91              is only as accurate as the kernel ftrace output, so there may be
92              some variability on the data reported.
93
94       -h, --help
95              show help.
96
       --ignite-cpu
              alter kernel controls to try and maximize CPU performance.
              This requires root privilege to alter various /sys interface
              controls. Currently this only works for Intel P-State enabled
              x86 systems on Linux.
102
103       --ionice-class class
104              specify ionice class (only on Linux).  Can  be  idle  (default),
105              besteffort, be, realtime, rt.
106
107       --ionice-level level
108              specify  ionice  level  (only on Linux). For idle, 0 is the only
109              possible option. For besteffort or realtime  values  0  (highest
110              priority)  to  7  (lowest  priority). See ionice(1) for more de‐
111              tails.
112
113       --iostat S
114              every S seconds show I/O statistics on the  device  that  stores
115              the  stress-ng temporary files. This is either the device of the
116              current working directory or  the  --temp-path  specified  path.
117              Currently a Linux only option.  The fields output are:
118
119              Column Heading     Explanation
120              Inflight           number  of  I/O requests that have been
121                                 issued to the device  driver  but  have
122                                 not yet completed
123              Rd K/s             read rate in 1024 bytes per second
124              Wr K/s             write rate in 1024 bytes per second
125              Dscd K/s           discard rate in 1024 bytes per second
126              Rd/s               reads per second
127              Wr/s               writes per second
128              Dscd/s             discards per second
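
              For example, the following (values are illustrative) runs two
              copy-file stressors and reports I/O statistics on the device
              holding the temporary files every 5 seconds:

              stress-ng --copy-file 2 --iostat 5 --timeout 60s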
129
       --job jobfile
              run stressors using a jobfile. The jobfile is essentially a
              file containing stress-ng options (without the leading --)
              with one option per line. Lines may have comments, with the
              comment text preceded by the # character. A simple example
              is as follows:

              run sequential   # run stressors sequentially
              verbose          # verbose output
              metrics-brief    # show metrics at end of run
              timeout 60s      # stop each stressor after 60 seconds
              #
              # vm stressor options:
              #
              vm 2             # 2 vm stressors
              vm-bytes 128M    # 128MB available memory
              vm-keep          # keep vm mapping
              vm-populate      # populate memory
              #
              # memcpy stressor options:
              #
              memcpy 5         # 5 memcpy stressors

              The job file introduces the run command that specifies how to
              run the stressors:

              run sequential - run stressors sequentially
              run parallel - run stressors together in parallel

              Note that 'run parallel' is the default.
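
              For example, assuming the options above are saved in a file
              named example.job (an illustrative name), the job can be run
              with:

              stress-ng --job example.job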
159
160       --keep-files
161              do  not  remove  files and directories created by the stressors.
162              This can be useful for debugging purposes. Not generally  recom‐
163              mended as it can fill up a file system.
164
       -k, --keep-name
              by default, stress-ng will attempt to change the name of the
              stress processes according to their functionality; this
              option disables that and keeps the process name the same as
              the parent process name, that is, stress-ng.
170
171       --klog-check
172              check the kernel log for kernel error and warning  messages  and
173              report  these  as  soon as they are detected. Linux only and re‐
174              quires root capability to read the kernel log.
175
176       --log-brief
177              by default stress-ng will report the name of  the  program,  the
178              message  type  and the process id as a prefix to all output. The
179              --log-brief option will output messages without these fields  to
180              produce a less verbose output.
181
182       --log-file filename
183              write messages to the specified log file.
184
185       --maximize
186              overrides  the  default stressor settings and instead sets these
187              to the maximum settings allowed.  These defaults can  always  be
188              overridden by the per stressor settings options if required.
189
190       --max-fd N
191              set  the maximum limit on file descriptors (value or a % of sys‐
192              tem allowed maximum).  By default, stress-ng  can  use  all  the
193              available  file  descriptors;  this option sets the limit in the
194              range from 10 up to the maximum limit of RLIMIT_NOFILE.  One can
195              use  a  % setting too, e.g. 50% is half the maximum allowed file
196              descriptors.  Note that stress-ng will use about 5 of the avail‐
197              able file descriptors so take this into consideration when using
198              this setting.
199
200       --mbind list
201              set strict NUMA memory allocation based  on  the  list  of  NUMA
202              nodes  provided;  page  allocations will come from the node with
203              sufficient free memory closest to the  specified  node(s)  where
204              the allocation takes place. This uses the Linux set_mempolicy(2)
205              call using the MPOL_BIND mode.  The NUMA nodes to  be  used  are
206              specified  by a comma separated list of node (0 to N-1). One can
207              specify a range of NUMA nodes using '-',  for  example:  --mbind
208              0,2-3,6,7-11
209
210       --metrics
211              output  number  of  bogo  operations  in  total performed by the
212              stress processes.  Note that these are not a reliable metric  of
213              performance  or throughput and have not been designed to be used
214              for benchmarking whatsoever. The metrics are just a  useful  way
215              to  observe  how  a  system  behaves when under various kinds of
216              load.
217
218              The following columns of information are output:
219
220              Column Heading        Explanation
              bogo ops              number of iterations of the stressor
                                    during the run. This is a metric of
                                    how much overall "work" has been
                                    achieved in bogo operations.
225              real time (secs)      average  wall  clock  duration (in sec‐
226                                    onds) of the stressor. This is the  to‐
227                                    tal  wall  clock  time  of  all the in‐
228                                    stances of that particular stressor di‐
229                                    vided  by the number of these stressors
230                                    being run.
231              usr time (secs)       total user time (in  seconds)  consumed
232                                    running all the instances of the stres‐
233                                    sor.
234              sys time (secs)       total system time (in seconds) consumed
235                                    running all the instances of the stres‐
236                                    sor.
237              bogo  ops/s   (real   total  bogo operations per second based
238              time)                 on wall clock run time. The wall  clock
239                                    time  reflects  the  apparent run time.
240                                    The more processors one has on a system
241                                    the  more the work load can be distrib‐
242                                    uted onto  these  and  hence  the  wall
243                                    clock time will reduce and the bogo ops
244                                    rate will  increase.   This  is  essen‐
245                                    tially  the "apparent" bogo ops rate of
246                                    the system.
              bogo ops/s (usr+sys   total bogo operations per second based
              time)                 on cumulative user and system time.
                                    This is the real bogo ops rate of the
                                    system taking into consideration the
                                    actual execution time of the stressor
                                    across all the processors. Generally
                                    this will decrease as one adds more
                                    concurrent stressors due to contention
                                    on cache, memory, execution units,
                                    buses and I/O devices.
257              CPU  used  per  in‐   total percentage of CPU used divided by
258              stance (%)            number of stressor instances. 100% is 1
259                                    full  CPU.  Some stressors run multiple
260                                    threads so it is  possible  to  have  a
261                                    figure greater than 100%.
262
263       --metrics-brief
264              show  shorter  list  of  stressor  metrics  (no CPU used per in‐
265              stance).
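
              For example, the following (values are illustrative) runs two
              memcpy stressors for 30 seconds and prints the full metrics
              at the end of the run:

              stress-ng --memcpy 2 --timeout 30s --metrics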
266
267       --minimize
268              overrides the default stressor settings and instead  sets  these
269              to  the  minimum settings allowed.  These defaults can always be
270              overridden by the per stressor settings options if required.
271
       --no-madvise
              from version 0.02.26 stress-ng automatically calls madvise(2)
              with random advise options before each mmap and munmap to
              stress the vm subsystem a little harder. The --no-madvise
              option turns this default off.
277
278       --no-oom-adjust
279              disable  any  form  of out-of-memory score adjustments, keep the
280              system defaults.  Normally stress-ng will adjust the out-of-mem‐
281              ory  scores  on stressors to try to create more memory pressure.
282              This option disables the adjustments.
283
284       --no-rand-seed
285              Do not seed the stress-ng pseudo-random number generator with  a
286              quasi  random start seed, but instead seed it with constant val‐
287              ues. This forces tests to run each time  using  the  same  start
288              conditions  which  can  be useful when one requires reproducible
289              stress tests.
290
291       --oom-avoid
292              Attempt to avoid out-of-memory conditions that can lead  to  the
293              Out-of-Memory  (OOM)  killer  terminating stressors. This checks
294              for low memory scenarios and swapping before making memory allo‐
295              cations  and  hence adds some overhead to the stressors and will
296              slow down stressor allocation speeds.
297
298       --oomable
299              Do not respawn a stressor if it gets killed by the Out-of-Memory
300              (OOM)  killer.   The  default  behaviour is to restart a new in‐
301              stance of a stressor if the kernel  OOM  killer  terminates  the
302              process. This option disables this default behaviour.
303
304       --page-in
305              touch  allocated  pages that are not in core, forcing them to be
306              paged back in.  This is a useful option to force all  the  allo‐
307              cated  pages  to be paged in when using the bigheap, mmap and vm
308              stressors.  It will severely degrade performance when the memory
309              in  the  system  is  less than the allocated buffer sizes.  This
310              uses mincore(2) to determine the pages that are not in core  and
311              hence need touching to page them back in.
312
313       --pathological
314              enable stressors that are known to hang systems.  Some stressors
315              can quickly consume resources  in  such  a  way  that  they  can
316              rapidly hang a system before the kernel can OOM kill them. These
317              stressors are not enabled by default, this option enables  them,
318              but you probably don't want to do this. You have been warned.
319
       --perf
              measure processor and system activity using perf events.
              Linux only and caveat emptor, according to perf_event_open(2):
              "Always double-check your results! Various generalized events
              have had wrong values.". Note that with Linux 4.7 one needs to
              have CAP_SYS_ADMIN capabilities for this option to work, or
              adjust /proc/sys/kernel/perf_event_paranoid to below 2 to use
              this without CAP_SYS_ADMIN.
327
328       -q, --quiet
329              do not show any output.
330
331       -r N, --random N
332              start  N  random  stress  workers. If N is 0, then the number of
333              configured processors is used for N.
334
335       --sched scheduler
336              select the named scheduler (only on Linux). To see the  list  of
337              available schedulers use: stress-ng --sched which
338
339       --sched-prio prio
340              select  the  scheduler  priority  level  (only on Linux). If the
341              scheduler does not support this then the default priority  level
342              of 0 is chosen.
343
344       --sched-period period
345              select  the  period  parameter  for  deadline scheduler (only on
346              Linux). Default value is 0 (in nanoseconds).
347
348       --sched-runtime runtime
349              select the runtime parameter for  deadline  scheduler  (only  on
350              Linux). Default value is 99999 (in nanoseconds).
351
352       --sched-deadline deadline
353              select  the  deadline  parameter for deadline scheduler (only on
354              Linux). Default value is 100000 (in nanoseconds).
355
356       --sched-reclaim
357              use cpu bandwidth reclaim feature for deadline  scheduler  (only
358              on Linux).
359
       --seed N
              set the random number generator seed with a 64 bit value.
              Allows stressors to use the same random number generator
              sequences on each invocation.
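
              For example, the following (values are illustrative) runs one
              cpu stressor with a fixed seed so that repeated invocations
              use the same pseudo-random sequences:

              stress-ng --cpu 1 --seed 1234 --timeout 30s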
364
365       --sequential N
366              sequentially  run  all the stressors one by one for a default of
367              60 seconds. The number of instances of each  of  the  individual
368              stressors  to be started is N.  If N is less than zero, then the
369              number of CPUs online is used for the number of instances.  If N
370              is zero, then the number of CPUs in the system is used.  Use the
371              --timeout option to specify the duration to run each stressor.
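
              For example, the following (values are illustrative) runs
              every stressor one at a time, two instances of each, for 30
              seconds per stressor:

              stress-ng --sequential 2 --timeout 30s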
372
373       --skip-silent
374              silence messages that report that a stressor  has  been  skipped
375              because  it  requires features not supported by the system, such
376              as unimplemented system calls, missing  resources  or  processor
377              specific features.
378
       --smart
              scan the block devices for changes in S.M.A.R.T. statistics
              (Linux only). This requires root privileges to read the
              Self-Monitoring, Analysis and Reporting Technology data from
              all block devices and will report any changes in the
              statistics. One caveat is that device manufacturers provide
              different sets of data, the exact meaning of the data can be
              vague and the data may be inaccurate.
387
388       --stdout
389              all  output goes to stdout. By default all output goes to stderr
390              (which is a historical oversight that  will  cause  breakage  to
391              users if it is now changed). This option allows the output to be
392              written to stdout.
393
394       --stressors
395              output the names of the available stressors.
396
397       --syslog
398              log output (except for verbose -v messages) to the syslog.
399
400       --taskset list
401              set CPU affinity based on the list of CPUs  provided;  stress-ng
402              is  bound  to  just  use these CPUs (Linux only). The CPUs to be
403              used are specified by a comma separated list of CPU (0 to  N-1).
404              One  can  specify  a  range  of  CPUs  using  '-',  for example:
405              --taskset 0,2-3,6,7-11
406
407       --temp-path path
408              specify a path for stress-ng temporary directories and temporary
409              files;  the default path is the current working directory.  This
410              path must have read and write access for  the  stress-ng  stress
411              processes.
412
413       --thermalstat S
414              every  S  seconds show CPU and thermal load statistics. This op‐
415              tion shows average CPU frequency  in  GHz  (average  of  online-
416              CPUs),  load  averages  (1  minute, 5 minute and 15 minutes) and
417              available thermal zone temperatures in degrees Centigrade.
418
       --thrash
              This can only be used when running on Linux and with root
              privilege. This option starts a background thrasher process
              that works through all the processes on a system and tries to
              page in as many of the processes' pages as possible. It also
              periodically drops the page cache, frees reclaimable slab
              objects and pagecache. This will cause a considerable amount
              of thrashing of swap on an over-committed system.
427
       -t T, --timeout T
              run each stress test for at least T seconds. One can also
              specify the units of time in seconds, minutes, hours, days or
              years with the suffix s, m, h, d or y. Each stressor will be
              sent a SIGALRM signal at the timeout time, however if the
              stress test is swapped out, in a non-interruptible system
              call or performing clean up (such as removing hundreds of
              test files) it may take a while to finally terminate. A 0
              timeout will run stress-ng forever with no timeout.
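
              For example, the following (values are illustrative) runs two
              vm stressors and stops them after two minutes:

              stress-ng --vm 2 --timeout 2m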
437
438       --timestamp
439              add  a  timestamp in hours, minutes, seconds and hundredths of a
440              second to the log output.
441
442       --timer-slack N
443              adjust the per process  timer  slack  to  N  nanoseconds  (Linux
444              only).  Increasing the timer slack allows the kernel to coalesce
445              timer events by adding some fuzziness to timer expiration  times
446              and  hence  reduce  wakeups.   Conversely,  decreasing the timer
447              slack will increase wakeups.  A value of 0 for  the  timer-slack
448              will set the system default of 50,000 nanoseconds.
449
450       --times
451              show  the cumulative user and system times of all the child pro‐
452              cesses at the end of the stress run.  The percentage of utilisa‐
453              tion of available CPU time is also calculated from the number of
454              on-line CPUs in the system.
455
       --tz   collect temperatures from the available thermal zones on the
              machine (Linux only). Some devices may have one or more
              thermal zones, whereas others may have none.
459
460       -v, --verbose
461              show all debug, warnings and normal information output.
462
463       --verify
464              verify results when a test is run. This is not available on  all
465              tests.  This  will  sanity check the computations or memory con‐
466              tents from a test run and report to stderr any unexpected  fail‐
467              ures.
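
              For example, the following (values are illustrative) runs one
              vm stressor with result verification enabled (where
              supported):

              stress-ng --vm 1 --verify --timeout 60s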
468
469       --verifiable
470              print  the  names  of  stressors  that  can be verified with the
471              --verify option.
472
473       -V, --version
474              show version of stress-ng, version of toolchain  used  to  build
475              stress-ng and system information.
476
       --vmstat S
              every S seconds show statistics about processes, memory,
              paging, block I/O, interrupts, context switches, disks and
              cpu activity. The output is similar to that of the vmstat(8)
              utility. Currently a Linux only option.
482
       -x, --exclude list
              specify a list of one or more stressors to exclude (that is,
              do not run them). This is useful to exclude specific
              stressors when one selects many stressors to run using the
              --class, --sequential, --all or --random options. For
              example, to run the cpu class stressors concurrently while
              excluding the numa and search stressors:

              stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch
492
493       -Y, --yaml filename
494              output gathered statistics to a YAML formatted file named 'file‐
495              name'.
496
497
498
499       Stressor specific options:
500
       --access N
              start N workers that work through various settings of file
              mode bits (read, write, execute) for the file owner and
              check, using access(2) and faccessat(2), that the user
              permissions of the file are sane.
506
507       --access-ops N
508              stop access workers after N bogo access sanity checks.
509
510       --affinity N
511              start  N  workers  that run 16 processes that rapidly change CPU
512              affinity (only on Linux). Rapidly  switching  CPU  affinity  can
513              contribute to poor cache behaviour and high context switch rate.
514
515       --affinity-ops N
516              stop affinity workers after N bogo affinity operations.
517
       --affinity-delay N
              delay for N nanoseconds before changing affinity to the next
              CPU. The delay will spin on CPU scheduling yield operations
              for N nanoseconds before the process is moved to another CPU.
              The default is 0 nanoseconds.
523
524       --affinity-pin
525              pin all the 16 per stressor processes to a CPU. All 16 processes
526              follow the CPU chosen by the main parent stressor, forcing heavy
527              per CPU loading.
528
529       --affinity-rand
530              switch CPU affinity randomly rather than the default of  sequen‐
531              tially.
532
533       --affinity-sleep N
534              sleep  for  N  nanoseconds  before changing affinity to the next
535              CPU.
536
537       --af-alg N
538              start N workers that exercise the AF_ALG socket domain by  hash‐
539              ing and encrypting various sized random messages. This exercises
540              the available hashes, ciphers, rng and aead  crypto  engines  in
541              the Linux kernel.
542
543       --af-alg-ops N
544              stop af-alg workers after N AF_ALG messages are hashed.
545
546       --af-alg-dump
547              dump  the  internal  list  representing cryptographic algorithms
548              parsed from the /proc/crypto file to standard output (stdout).
549
550       --aio N
551              start N workers  that  issue  multiple  small  asynchronous  I/O
552              writes  and reads on a relatively small temporary file using the
553              POSIX aio interface.  This will just hit the file  system  cache
554              and  soak  up  a lot of user and kernel time in issuing and han‐
555              dling I/O requests.  By default, each worker process will handle
556              16 concurrent I/O requests.
557
558       --aio-ops N
559              stop  POSIX  asynchronous  I/O workers after N bogo asynchronous
560              I/O requests.
561
562       --aio-requests N
563              specify the number  of  POSIX  asynchronous  I/O  requests  each
564              worker should issue, the default is 16; 1 to 4096 are allowed.
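
              For example, the following (values are illustrative) runs two
              POSIX asynchronous I/O stressors, each issuing 64 concurrent
              I/O requests:

              stress-ng --aio 2 --aio-requests 64 --timeout 60s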
565
566       --aiol N
567              start  N  workers that issue multiple 4K random asynchronous I/O
568              writes using the Linux aio  system  calls  io_setup(2),  io_sub‐
569              mit(2),  io_getevents(2)  and  io_destroy(2).   By default, each
570              worker process will handle 16 concurrent I/O requests.
571
572       --aiol-ops N
573              stop Linux asynchronous I/O workers after  N  bogo  asynchronous
574              I/O requests.
575
576       --aiol-requests N
577              specify  the  number  of  Linux  asynchronous  I/O requests each
578              worker should issue, the default is 16; 1 to 4096 are allowed.
579
580       --alarm N
581              start N workers that exercise alarm(2) with MAXINT, 0 and random
582              alarm  and sleep delays that get prematurely interrupted. Before
583              each alarm is scheduled any previous  pending  alarms  are  can‐
584              celled with zero second alarm calls.
585
586       --alarm-ops N
587              stop after N alarm bogo operations.
588
589       --apparmor N
590              start  N workers that exercise various parts of the AppArmor in‐
591              terface. Currently one needs root permission to run this partic‐
592              ular test. Only available on Linux systems with AppArmor support
593              and requires the CAP_MAC_ADMIN capability.
594
       --apparmor-ops N
              stop the AppArmor workers after N bogo operations.
597
598       --atomic N
599              start N workers that exercise various GCC __atomic_*() built  in
600              operations  on  8,  16,  32  and 64 bit integers that are shared
601              among the N workers. This stressor is only available for  builds
602              using  GCC  4.7.4  or higher. The stressor forces many front end
603              cache stalls and cache references.
604
605       --atomic-ops N
606              stop the atomic workers after N bogo atomic operations.
607
       --bad-altstack N
              start N workers that create broken alternative signal stacks
              for SIGSEGV and SIGBUS handling that in turn create secondary
              SIGSEGV/SIGBUS errors. A variety of randomly selected
              nefarious methods are used to create the stacks:

              • Unmapping the alternative signal stack before triggering
                the signal handling.
              • Changing the alternative signal stack to be just read only,
                write only or execute only.
618              • Using a NULL alternative signal stack.
619              • Using  the  signal  handler  object  as the alternative signal
620                stack.
621              • Unmapping the alternative signal stack during execution of the
622                signal handler.
623              • Using  a  read-only  text  segment  for the alternative signal
624                stack.
625              • Using an undersized alternative signal stack.
626              • Using the VDSO as an alternative signal stack.
627              • Using an alternative stack mapped onto /dev/zero.
628              • Using an alternative stack mapped to a  zero  sized  temporary
629                file to generate a SIGBUS error.
630
631       --bad-altstack-ops N
632              stop  the  bad  alternative stack stressors after N SIGSEGV bogo
633              operations.
634
635
636       --bad-ioctl N
637              start N workers that perform a range of illegal bad read  ioctls
638              (using  _IOR)  across  the  device  drivers. This exercises page
639              size, 64 bit, 32 bit, 16 bit and 8 bit reads as well as NULL ad‐
640              dresses,  non-readable  pages  and  PROT_NONE mapped pages. Cur‐
641              rently only for Linux and requires the --pathological option.
642
643       --bad-ioctl-ops N
644              stop the bad ioctl stressors after N bogo ioctl operations.
645
646       -B N, --bigheap N
647              start N workers that grow their heaps by reallocating memory. If
648              the  out of memory killer (OOM) on Linux kills the worker or the
649              allocation fails then the allocating  process  starts  all  over
650              again.   Note  that  the OOM adjustment for the worker is set so
651              that the OOM killer will treat these workers as the first candi‐
652              date processes to kill.
653
654       --bigheap-ops N
655              stop the big heap workers after N bogo allocation operations are
656              completed.
657
658       --bigheap-growth N
659              specify amount of memory to grow heap by per iteration. Size can
660              be from 4K to 64MB. Default is 64K.
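
              For example, the following (values are illustrative) runs two
              bigheap stressors with all allocated pages touched to force
              them to be resident in memory:

              stress-ng --bigheap 2 --page-in --timeout 60s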
661
662       --binderfs N
663              start  N  workers that mount, exercise and unmount binderfs. The
664              binder  control  device  is  exercised   with   256   sequential
665              BINDER_CTL_ADD ioctl calls per loop.
666
667       --binderfs-ops N
668              stop after N binderfs cycles.
669
670       --bind-mount N
671              start  N workers that repeatedly bind mount / to / inside a user
672              namespace. This can consume resources rapidly,  forcing  out  of
673              memory  situations.  Do not use this stressor unless you want to
674              risk hanging your machine.
675
676       --bind-mount-ops N
677              stop after N bind mount bogo operations.
678
679       --branch N
680              start N workers that randomly branch to 1024  randomly  selected
681              locations and hence exercise the CPU branch prediction logic.
682
683       --branch-ops N
              stop the branch stressors after N × 1024 branches.
685
686       --brk N
687              start N workers that grow the data segment by one page at a time
688              using multiple brk(2) calls.  Each  successfully  allocated  new
689              page  is  touched to ensure it is resident in memory.  If an out
690              of memory condition occurs then the test  will  reset  the  data
691              segment  to the point before it started and repeat the data seg‐
692              ment resizing over again.  The process adjusts the out of memory
693              setting  so  that  it  may  be killed by the out of memory (OOM)
694              killer before other processes.  If  it  is  killed  by  the  OOM
695              killer  then it will be automatically re-started by a monitoring
696              parent process.
697
698       --brk-ops N
699              stop the brk workers after N bogo brk operations.
700
701       --brk-mlock
702              attempt to mlock future brk pages into memory causing more  mem‐
703              ory pressure. If mlock(MCL_FUTURE) is implemented then this will
704              stop new brk pages from being swapped out.
705
       --brk-notouch
              do not touch each newly allocated data segment page. This
              disables the default of touching each newly allocated page
              and hence avoids the kernel necessarily having to back the
              page with physical memory.
711
712       --bsearch N
713              start  N workers that binary search a sorted array of 32 bit in‐
714              tegers using bsearch(3). By default, there are 65536 elements in
715              the array.  This is a useful method to exercise random access of
716              memory and processor cache.
717
718       --bsearch-ops N
719              stop the bsearch worker after N bogo bsearch operations are com‐
720              pleted.
721
722       --bsearch-size N
723              specify  the  size  (number  of 32 bit integers) in the array to
724              bsearch. Size can be from 1K to 4M.
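
              For example, the following (values are illustrative) runs
              four bsearch stressors, each searching a sorted array of
              262144 32 bit integers:

              stress-ng --bsearch 4 --bsearch-size 262144 --timeout 60s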
725
726       -C N, --cache N
727              start N workers that perform random wide spread memory read  and
728              writes to thrash the CPU cache.  The code does not intelligently
729              determine the CPU cache configuration and so it may be sub-opti‐
730              mal  in  producing hit-miss read/write activity for some proces‐
731              sors.
732
733       --cache-cldemote
734              cache line demote (x86 only). This is a no-op for non-x86 archi‐
735              tectures  and older x86 processors that do not support this fea‐
736              ture.
737
738       --cache-clflushopt
739              use optimized cache line flush (x86 only). This is a  no-op  for
740              non-x86  architectures and older x86 processors that do not sup‐
741              port this feature.
742
743       --cache-clwb
744              cache line writeback (x86 only). This is a no-op for non-x86 ar‐
745              chitectures  and  older  x86 processors that do not support this
746              feature.
747
748       --cache-enable-all
749              where appropriate exercise the cache using cldemote, clflushopt,
750              fence, flush, sfence and prefetch.
751
752       --cache-fence
753              force  write  serialization  on each store operation (x86 only).
754              This is a no-op for non-x86 architectures.
755
756       --cache-flush
757              force flush cache on each store operation (x86 only). This is  a
758              no-op for non-x86 architectures.
759
760       --cache-level N
761              specify  level  of  cache  to  exercise (1=L1 cache, 2=L2 cache,
762              3=L3/LLC cache (the default)).  If the cache hierarchy cannot be
763              determined, built-in defaults will apply.
764
765       --cache-no-affinity
766              do not change processor affinity when --cache is in effect.
767
768       --cache-sfence
769              force  write  serialization  on  each  store operation using the
770              sfence instruction (x86 only). This is a no-op for  non-x86  ar‐
771              chitectures.
772
773       --cache-ops N
774              stop cache thrash workers after N bogo cache thrash operations.
775
776       --cache-prefetch
777              force  read  prefetch on next read address on architectures that
778              support prefetching.
779
780       --cache-ways N
781              specify the number of cache ways to exercise. This allows a sub‐
782              set of the overall cache size to be exercised.
783
784       --cacheline N
785              start  N  workers  that  exercise reading and writing individual
786              bytes in a shared buffer that is the size of a cache line.  Each
787              stressor  has  2  running processes that exercise just two bytes
788              that are next to each other.  The intent is to try  and  trigger
789              cacheline  corruption,  stalls and misses with shared memory ac‐
790              cesses. For an N byte sized cacheline, it is recommended to  run
791              N / 2 stressor instances.
792
793       --cacheline-ops N
794              stop cacheline workers after N loops of the byte exercising in a
795              cacheline.
796
797       --cacheline-affinity
798              frequently  change  CPU  affinity,  spread  cacheline  processes
799              evenly  across  all  online CPUs to try and maximize lower-level
800              cache activity. Attempts to keep adjacent cachelines being exer‐
801              cised by adjacent CPUs.
802
803       --cacheline-method method
804              specify  a  cacheline  stress method. By default, all the stress
805              methods are exercised sequentially, however one can specify just
806              one  method  to be used if required.  Available cacheline stress
807              methods are described as follows:
808
809              Method         Description
810              all            iterate over all the below  cpu  stress
811                             methods.
812              adjacent       increment  a  specific byte in a cache‐
813                             line and read the adjacent byte,  check
814                             for corruption every 7 increments.
815              atomicinc      atomically increment a specific byte in
816                             a cacheline and  check  for  corruption
817                             every 7 increments.
              bits           write and read back shifted bit pat‐
                             terns into a specific byte in a cache‐
                             line and check for corruption.
821              copy           copy  an  adjacent  byte  to a specific
822                             byte in a cacheline.
823              inc            increment and read back a specific byte
824                             in a cacheline and check for corruption
825                             every 7 increments.
              mix            perform a mix of increment, left and
                             right rotates of a specific byte in a
                             cacheline and check for corruption.
831              rdfwd64        increment a specific byte in  a  cache‐
832                             line and then read in forward direction
833                             an entire cacheline using 64 bit reads.
              rdints         increment a specific byte in a cache‐
                             line and then read the data at that
                             byte location using naturally aligned
                             integer reads of 8, 16, 32, 64 and 128
                             bits.
839              rdrev64        increment a specific byte in  a  cache‐
840                             line and then read in reverse direction
841                             an entire cacheline using 64 bit reads.
842              rdwr           read and write the  same  8  bit  value
843                             into a specific byte in a cacheline and
844                             check for corruption.
845
846       --cap N
847              start N workers that read per process capabilities via calls  to
848              capget(2) (Linux only).
849
850       --cap-ops N
851              stop after N cap bogo operations.
852
       --chattr N
              start N workers that attempt to exercise file attributes via
              the EXT2_IOC_SETFLAGS ioctl. This is intentionally racy and
              exercises a range of chattr attributes by enabling and
              disabling them on a file shared amongst the N chattr stressor
              processes. (Linux only).
859
860       --chattr-ops N
861              stop after N chattr bogo operations.
862
863       --chdir N
864              start  N workers that change directory between directories using
865              chdir(2).
866
867       --chdir-ops N
868              stop after N chdir bogo operations.
869
       --chdir-dirs N
              exercise chdir on N directories. The default is 8192
              directories; this option allows 64 to 65536 directories to be
              used instead.
873
       --chmod N
              start N workers that change the file mode bits via chmod(2)
              and fchmod(2) on the same file. The greater the value for N,
              the greater the contention on the single file. The stressor
              will work through all the combinations of mode bits.
879
880       --chmod-ops N
881              stop after N chmod bogo operations.
882
       --chown N
              start N workers that exercise chown(2) on the same file. The
              greater the value for N, the greater the contention on the
              single file.
887
888       --chown-ops N
889              stop the chown workers after N bogo chown(2) operations.
890
891       --chroot N
892              start N workers that exercise chroot(2) on various valid and in‐
893              valid chroot paths. Only available on Linux systems and requires
894              the CAP_SYS_ADMIN capability.
895
896       --chroot-ops N
897              stop the chroot workers after N bogo chroot(2) operations.
898
899       --clock N
900              start N workers exercising clocks  and  POSIX  timers.  For  all
901              known clock types this will exercise clock_getres(2), clock_get‐
902              time(2) and clock_nanosleep(2).  For all known  timers  it  will
903              create  a  50000ns  timer  and  busy poll this until it expires.
904              This stressor will cause frequent context switching.
905
906       --clock-ops N
907              stop clock stress workers after N bogo operations.
908
909       --clone N
910              start N  workers  that  create  clones  (via  the  clone(2)  and
911              clone3()  system  calls).  This will rapidly try to create a de‐
912              fault of 8192 clones that immediately die and wait in  a  zombie
913              state  until they are reaped.  Once the maximum number of clones
914              is reached (or clone fails because one has reached  the  maximum
915              allowed)  the  oldest  clone thread is reaped and a new clone is
916              then created in a first-in first-out manner, and then  repeated.
917              A  random  clone flag is selected for each clone to try to exer‐
918              cise different clone operations.  The clone stressor is a  Linux
919              only option.
920
921       --clone-ops N
922              stop clone stress workers after N bogo clone operations.
923
924       --clone-max N
925              try  to  create  as  many  as  N  clone threads. This may not be
926              reached if the system limit is less than N.
927
928       --close N
929              start N workers that try to force  race  conditions  on  closing
930              opened  file  descriptors.   These  file  descriptors  have been
931              opened in various ways to  try  and  exercise  different  kernel
932              close handlers.
933
934       --close-ops N
935              stop close workers after N bogo close operations.
936
937       --context N
938              start  N  workers that run three threads that use swapcontext(3)
939              to implement the thread-to-thread context switching. This  exer‐
940              cises  rapid  process  context saving and restoring and is band‐
941              width limited by register and memory save and restore rates.
942
943       --context-ops N
944              stop context workers after N bogo  context  switches.   In  this
945              stressor, 1 bogo op is equivalent to 1000 swapcontext calls.
946
       --copy-file N
              start N stressors that copy a file using the Linux
              copy_file_range(2) system call. 128 KB chunks of data are
              copied from random locations in one file to random locations
              in a destination file. By default, the files are 256 MB in
              size. Data is sync'd to the filesystem after each
              copy_file_range(2) call.
954
955       --copy-file-ops N
956              stop after N copy_file_range() calls.
957
958       --copy-file-bytes N
959              copy file size, the default is 256 MB. One can specify the  size
960              as  %  of  free  space  on the file system or in units of Bytes,
961              KBytes, MBytes and GBytes using the suffix b, k, m or g.
962
963       -c N, --cpu N
964              start N workers  exercising  the  CPU  by  sequentially  working
965              through  all  the different CPU stress methods. Instead of exer‐
966              cising all the CPU stress methods, one can  specify  a  specific
967              CPU stress method with the --cpu-method option.
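
              For example, the following (values are illustrative) starts
              one cpu stressor per configured processor, each running only
              the fft stress method:

              stress-ng --cpu 0 --cpu-method fft --timeout 60s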
968
969       --cpu-ops N
970              stop cpu stress workers after N bogo operations.
971
972       -l P, --cpu-load P
973              load CPU with P percent loading for the CPU stress workers. 0 is
974              effectively a sleep (no load) and  100  is  full  loading.   The
975              loading  loop is broken into compute time (load%) and sleep time
976              (100% - load%). Accuracy depends on the overall load of the pro‐
977              cessor  and  the  responsiveness of the scheduler, so the actual
978              load may be different from the desired load.  Note that the num‐
979              ber  of  bogo CPU operations may not be linearly scaled with the
980              load as some systems employ CPU frequency scaling and so heavier
981              loads  produce  an  increased CPU frequency and greater CPU bogo
982              operations.
983
984              Note: This option only applies to the --cpu stressor option  and
985              not to all of the cpu class of stressors.
986
987       --cpu-load-slice S
988              note  -  this option is only useful when --cpu-load is less than
989              100%. The CPU load is broken into multiple busy and idle cycles.
990              Use this option to specify the duration of a busy time slice.  A
991              negative value for S specifies the number of iterations  to  run
992              before  idling  the CPU (e.g. -30 invokes 30 iterations of a CPU
993              stress loop).  A zero value selects a random busy time between 0
994              and 0.5 seconds.  A positive value for S specifies the number of
              milliseconds to run before idling the CPU (e.g. 100 keeps the
              CPU busy for 0.1 seconds). Specifying small values for S
              leads to small time slices and smoother scheduling. Setting
998              --cpu-load  as a relatively low value and --cpu-load-slice to be
999              large will cycle the CPU between long idle and busy  cycles  and
1000              exercise  different  CPU  frequencies.  The thermal range of the
1001              CPU is also cycled, so this is a good mechanism to exercise  the
1002              scheduler,  frequency scaling and passive/active thermal cooling
1003              mechanisms.
1004
1005              Note: This option only applies to the --cpu stressor option  and
1006              not to all of the cpu class of stressors.
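
              For example, the following (values are illustrative) runs two
              cpu stressors at 25% load using 100 millisecond busy slices:

              stress-ng --cpu 2 --cpu-load 25 --cpu-load-slice 100 --timeout 60s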
1007
       --cpu-old-metrics
              as of version V0.14.02 the cpu stressor normalizes each of
              the cpu stressor method bogo-op counters to try and ensure a
              similar bogo-op rate for all the methods, to avoid the
              shorter running (and faster) methods from skewing the bogo-op
              rates when using the default "all" method. This is based on a
              reference Intel i5-8350U processor and hence the bogo-ops
              normalizing factors will be skewed somewhat on different
              CPUs, but not as significantly as the original bogo-op
              counter rates. To disable the normalization and fall back to
              the original metrics, use this option.
1019
1020       --cpu-method method
1021              specify a cpu stress method. By default, all the stress  methods
1022              are  exercised  sequentially,  however  one can specify just one
1023              method to be used if required.  Available cpu stress methods are
1024              described as follows:
1025
1026              Method              Description
1027              all                 iterate  over  all the below cpu stress
1028                                  methods
1029              ackermann           Ackermann function:  compute  A(3,  7),
1030                                  where:
1031                                   A(m, n) = n + 1 if m = 0;
1032                                   A(m - 1, 1) if m > 0 and n = 0;
1033                                   A(m - 1, A(m, n - 1)) if m > 0 and n >
1034                                  0
1035              apery               calculate Apery's  constant  ζ(3);  the
1036                                  sum  of  1/(n  ↑  3)  to a precision of
1037                                  1.0x10↑14
1038              bitops              various bit  operations  from  bithack,
1039                                  namely: reverse bits, parity check, bit
1040                                  count, round to nearest power of 2
1041              callfunc            recursively call 8 argument C  function
1042                                  to a depth of 1024 calls and unwind
1043              cfloat              1000  iterations  of  a mix of floating
1044                                  point complex operations
1045              cdouble             1000 iterations  of  a  mix  of  double
1046                                  floating point complex operations
1047              clongdouble         1000 iterations of a mix of long double
1048                                  floating point complex operations
1049
              collatz             compute the 1348 steps in the collatz
                                  sequence starting from the number
                                  989345275647, where f(n) = n / 2 (for
                                  even n) and f(n) = 3n + 1 (for odd n).
1054              correlate           perform  a  8192  ×  512 correlation of
1055                                  random doubles
1056              cpuid               fetch cpu  specific  information  using
1057                                  the cpuid instruction (x86 only)
1058              crc16               compute  1024  rounds of CCITT CRC16 on
1059                                  random data
1060              decimal32           1000 iterations of a mix of 32 bit dec‐
1061                                  imal  floating  point  operations  (GCC
1062                                  only)
1063              decimal64           1000 iterations of a mix of 64 bit dec‐
1064                                  imal  floating  point  operations  (GCC
1065                                  only)
1066              decimal128          1000 iterations of a  mix  of  128  bit
1067                                  decimal  floating point operations (GCC
1068                                  only)
1069              dither              Floyd-Steinberg dithering of a  1024  ×
1070                                  768  random image from 8 bits down to 1
1071                                  bit of depth
1072              div8                50,000 8 bit unsigned integer divisions
1073              div16               50,000 16 bit  unsigned  integer  divi‐
1074                                  sions
1075              div32               50,000  32  bit  unsigned integer divi‐
1076                                  sions
1077              div64               50,000 64 bit  unsigned  integer  divi‐
1078                                  sions
1079              div128              50,000  128  bit unsigned integer divi‐
1080                                  sions
1081              double              1000 iterations of a mix of double pre‐
1082                                  cision floating point operations
1083              euler               compute e using e ≈ (1 + (1 ÷ n)) ↑ n
1084              explog              iterate on n = exp(log(n) ÷ 1.00002)
1085              factorial           find factorials from 1..150 using Stir‐
1086                                  ling's and Ramanujan's approximations
1087              fibonacci           compute Fibonacci sequence of 0, 1, 1,
1088                                  2, 3, 5, 8...
1089              fft                 4096 sample Fast Fourier Transform
1090              fletcher16          1024  rounds  of a naïve implementation
1091                                  of a 16 bit Fletcher's checksum
1092              float               1000 iterations of a  mix  of  floating
1093                                  point operations
1094              float16             1000  iterations  of  a  mix  of 16 bit
1095                                  floating point operations
1096              float32             1000 iterations of  a  mix  of  32  bit
1097                                  floating point operations
1098              float64             1000  iterations  of  a  mix  of 64 bit
1099                                  floating point operations
1100              float80             1000 iterations of  a  mix  of  80  bit
1101                                  floating point operations
1102              float128            1000  iterations  of  a  mix of 128 bit
1103                                  floating point operations
1104              floatconversion     perform 65536  iterations  of  floating
1105                                  point conversions between float, double
1106                                  and long double  floating  point  vari‐
1107                                  ables.
1108              gamma               calculate the Euler-Mascheroni constant
1109                                  γ using the limiting difference between
1110                                  the  harmonic  series  (1 + 1/2 + 1/3 +
1111                                  1/4 + 1/5 ... + 1/n)  and  the  natural
1112                                  logarithm ln(n), for n = 80000.
1113              gcd                 compute GCD of integers
1114              gray                calculate  binary to gray code and gray
1115                                  code back to binary for integers from 0
1116                                  to 65535
1124              hamming             compute  Hamming H(8,4) codes on 262144
1125                                  lots of 4 bit data. This  turns  4  bit
1126                                  data into 8 bit Hamming code containing
1127                                  4 parity bits. For  data  bits  d1..d4,
1128                                  parity bits are computed as:
1129                                    p1 = d2 + d3 + d4
1130                                    p2 = d1 + d3 + d4
1131                                    p3 = d1 + d2 + d4
1132                                    p4 = d1 + d2 + d3
1133              hanoi               solve  a  21 disc Towers of Hanoi stack
1134                                  using the recursive solution
1135              hyperbolic          compute sinh(θ) × cosh(θ) + sinh(2θ)  +
1136                                  cosh(3θ)  for  float,  double  and long
1137                                  double hyperbolic sine and cosine func‐
1138                                  tions where θ = 0 to 2π in 1500 steps
1139              idct                8  ×  8  IDCT  (Inverse Discrete Cosine
1140                                  Transform).
1141              int8                1000 iterations of a mix of 8 bit inte‐
1142                                  ger operations.
1143              int16               1000  iterations of a mix of 16 bit in‐
1144                                  teger operations.
1145              int32               1000 iterations of a mix of 32 bit  in‐
1146                                  teger operations.
1147              int64               1000  iterations of a mix of 64 bit in‐
1148                                  teger operations.
1149              int128              1000 iterations of a mix of 128 bit in‐
1150                                  teger operations (GCC only).
1151              int32float          1000  iterations of a mix of 32 bit in‐
1152                                  teger and floating point operations.
1153              int32double         1000 iterations of a mix of 32 bit  in‐
1154                                  teger  and  double  precision  floating
1155                                  point operations.
1156              int32longdouble     1000 iterations of a mix of 32 bit  in‐
1157                                  teger  and long double precision float‐
1158                                  ing point operations.
1159              int64float          1000 iterations of a mix of 64 bit  in‐
1160                                  teger and floating point operations.
1161              int64double         1000  iterations of a mix of 64 bit in‐
1162                                  teger  and  double  precision  floating
1163                                  point operations.
1164              int64longdouble     1000  iterations of a mix of 64 bit in‐
1165                                  teger and long double precision  float‐
1166                                  ing point operations.
1167              int128float         1000 iterations of a mix of 128 bit in‐
1168                                  teger  and  floating  point  operations
1169                                  (GCC only).
1170              int128double        1000 iterations of a mix of 128 bit in‐
1171                                  teger  and  double  precision  floating
1172                                  point operations (GCC only).
1173              int128longdouble    1000 iterations of a mix of 128 bit in‐
1174                                  teger and long double precision  float‐
1175                                  ing point operations (GCC only).
1176              int128decimal32     1000 iterations of a mix of 128 bit in‐
1177                                  teger and 32 bit decimal floating point
1178                                  operations (GCC only).
1179              int128decimal64     1000 iterations of a mix of 128 bit in‐
1180                                  teger and 64 bit decimal floating point
1181                                  operations (GCC only).
1182              int128decimal128    1000 iterations of a mix of 128 bit in‐
1183                                  teger  and  128  bit  decimal  floating
1184                                  point operations (GCC only).
1185              intconversion       perform  65536  iterations  of  integer
1186                                  conversions between  int16,  int32  and
1187                                  int64 variables.
1188              ipv4checksum        compute 1024 rounds of the 16 bit ones'
1189                                  complement IPv4 checksum.
1190              jmp                 simple unoptimised compare >, <, == and
1191                                  jmp branching.
1198              lfsr32              16384  iterations  of  a  32 bit Galois
1199                                  linear feedback  shift  register  using
1200                                  the polynomial x↑32 + x↑31 + x↑29 + x +
1201                                  1. This generates a ring of  2↑32  -  1
1202                                  unique values (all 32 bit values except
1203                                  for 0).
1204              ln2                 compute ln(2) based on series:
1205                                   1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
1206              logmap              16384 iterations computing chaotic dou‐
1207                                  ble precision values using the logistic
1208                                  map Χn+1 = r × Χn × (1 - Χn) where r >
1209                                  3.56994567
1210              longdouble          1000 iterations of a mix of long double
1211                                  precision floating point operations.
1212              loop                simple empty loop.
1213              matrixprod          matrix product of two 128 × 128 matri‐
1214                                  ces of double floats. Testing on 64 bit
1215                                  x86 hardware shows that this provides a
1216                                  good mix of memory, cache and floating
1217                                  point operations and is probably the
1218                                  best CPU method to use to make a CPU
1219                                  run hot.
1220              nsqrt               compute sqrt() of  long  doubles  using
1221                                  Newton-Raphson.
1222              omega               compute  the  omega constant defined by
1223                                  Ωe↑Ω = 1 using efficient  iteration  of
1224                                  Ωn+1 = (1 + Ωn) / (1 + e↑Ωn).
1225              parity              compute parity using various methods
1226                                  from the Stanford Bit Twiddling Hacks.
1227                                  Methods employed are: the naïve way,
1228                                  the naïve way with the Brian Kernighan
1229                                  bit counting optimisation, the multiply
1230                                  way, the parallel way, the lookup table
1231                                  ways (2 variations) and using the
1232                                  __builtin_parity function.
1233              phi                 compute the Golden Ratio  ϕ  using  se‐
1234                                  ries.
1235              pi                  compute π using the Srinivasa Ramanujan
1236                                  fast convergence algorithm.
1237              prime               find the first 10000 prime numbers  us‐
1238                                  ing  a  slightly  optimised brute force
1239                                  naïve trial division search.
1240              psi                 compute  ψ  (the  reciprocal  Fibonacci
1241                                  constant) using the sum of the recipro‐
1242                                  cals of the Fibonacci numbers.
1243              queens              compute all the solutions of the  clas‐
1244                                  sic  8  queens  problem for board sizes
1245                                  1..11.
1246              rand                16384 iterations of rand(), where  rand
1247                                  is the MWC pseudo random number genera‐
1248                                  tor.  The MWC random function  concate‐
1249                                  nates  two  16  bit multiply-with-carry
1250                                  generators:
1251                                   x(n) = 36969 × x(n - 1) + carry,
1252                                   y(n) = 18000 × y(n - 1) + carry mod  2
1253                                  ↑ 16
1254
1255                                  and has period of around 2 ↑ 60.
1256              rand48              16384   iterations  of  drand48(3)  and
1257                                  lrand48(3).
1258              rgb                 convert RGB to  YUV  and  back  to  RGB
1259                                  (CCIR 601).
1260              sieve               find  the first 10000 prime numbers us‐
1261                                  ing the sieve of Eratosthenes.
1262              stats               calculate minimum, maximum, arithmetic
1263                                  mean, geometric mean, harmonic mean
1264                                  and standard deviation on 250 randomly
1265                                  generated positive double precision
1266                                  values.
1267              sqrt                compute sqrt(rand()), where rand is the
1268                                  MWC pseudo random number generator.
1272              trig                compute  sin(θ)  ×  cos(θ)  + sin(2θ) +
1273                                  cos(3θ) for float, double and long dou‐
1274                                  ble sine and cosine functions where θ =
1275                                  0 to 2π in 1500 steps.
1276              union               perform integer arithmetic on a mix  of
1277                                  bit  fields  in  a C union.  This exer‐
1278                                  cises how well the compiler and CPU can
1279                                  perform  integer  bit  field  loads and
1280                                  stores.
1281              zeta                compute the Riemann Zeta function  ζ(s)
1282                                  for s = 2.0..10.0
1283
1284              Note that some of these methods try to exercise the CPU with
1285              computations found in some real world use cases. However, the
1286              code has not been optimised on a per-architecture basis, so may
1287              be sub-optimal compared to the hand-optimised code used in some
1288              applications. They do try to represent the typical instruction
1289              mixes found in these use cases.
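
                  For example, assuming the --cpu and --cpu-method options
                  described elsewhere in this manual and a 60 second timeout,
                  one might run four CPU stressors using only the matrixprod
                  method with:

                      stress-ng --cpu 4 --cpu-method matrixprod --timeout 60s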
1290
1291       --cpu-online N
1292              start N workers that put randomly selected CPUs offline and  on‐
1293              line.  This  Linux only stressor requires root privilege to per‐
1294              form this action. By default the first CPU (CPU 0) is never  of‐
1295              flined  as this has been found to be problematic on some systems
1296              and can result in a shutdown.
1297
1298       --cpu-online-all
1299              The default is to never offline the first CPU. This option
1300              will offline and online all the CPUs, including CPU 0. This
1301              may cause some systems to shut down.
1302
1303       --cpu-online-ops N
1304              stop after N offline/online operations.
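
           For example, one possible invocation that runs a single cpu-online
           stressor with root privilege and stops after 50 offline/online
           operations is:

               sudo stress-ng --cpu-online 1 --cpu-online-ops 50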
1305
1306       --crypt N
1307              start N workers that encrypt a 16 character random password  us‐
1308              ing  crypt(3).  The password is encrypted using MD5, SHA-256 and
1309              SHA-512 encryption methods.
1310
1311       --crypt-ops N
1312              stop after N bogo encryption operations.
1313
1314       --cyclic N
1315              start N workers that exercise the real time FIFO or Round  Robin
1316              schedulers  with  cyclic  nanosecond  sleeps. Normally one would
1317              just use 1 worker instance with this stressor  to  get  reliable
1318              statistics. By default this stressor measures the first 10 thou‐
1319              sand latencies and calculates the mean, mode,  minimum,  maximum
1320              latencies along with various latency percentiles for just
1321              the first cyclic stressor instance. One has to run this stressor
1322              with  CAP_SYS_NICE capability to enable the real time scheduling
1323              policies. The FIFO scheduling policy is the default.
1324
1325       --cyclic-ops N
1326              stop after N sleeps.
1327
1328       --cyclic-dist N
1329              calculate and print a latency distribution with the interval  of
1330              N  nanoseconds.   This is helpful to see where the latencies are
1331              clustering.
1332
1333       --cyclic-method [ clock_ns | itimer |  poll  |  posix_ns  |  pselect  |
1334       usleep ]
1335              specify  the  cyclic method to be used, the default is clock_ns.
1336              The available cyclic methods are as follows:
1337
1338              Method         Description
1339              clock_ns       sleep for the specified time using  the
1340                             clock_nanosleep(2)    high   resolution
1341                             nanosleep and the  CLOCK_REALTIME  real
1342                             time clock.
1343              itimer         wake up a paused process with a
1344                             CLOCK_REALTIME itimer signal.
1346              poll           delay for the specified  time  using  a
1347                             poll  delay  loop  that checks for time
1348                             changes using clock_gettime(2)  on  the
1349                             CLOCK_REALTIME clock.
1350              posix_ns       sleep  for the specified time using the
1351                             POSIX  nanosleep(2)   high   resolution
1352                             nanosleep.
1353              pselect        sleep for the specified time using pse‐
1354                             lect(2) with null file descriptors.
1355              usleep         sleep to the nearest microsecond  using
1356                             usleep(2).
1357
1358       --cyclic-policy [ fifo | rr ]
1359              specify the desired real time scheduling policy: fifo (first-
1360              in, first-out) or rr (round robin).
1361
1362       --cyclic-prio P
1363              specify the scheduling priority P. Range from 1 (lowest) to  100
1364              (highest).
1365
1366       --cyclic-samples N
1367              measure N samples. Range from 1 to 10000000 samples.
1368
1369       --cyclic-sleep N
1370              sleep  for N nanoseconds per test cycle using clock_nanosleep(2)
1371              with the  CLOCK_REALTIME  timer.  Range  from  1  to  1000000000
1372              nanoseconds.
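
           For example, one might measure cyclic scheduling latencies with
           the round robin policy, the clock_ns method, a 100000 nanosecond
           sleep and a 2500 nanosecond distribution interval (run with
           appropriate privilege, for example via sudo, so that the real
           time policy can be set):

               sudo stress-ng --cyclic 1 --cyclic-policy rr \
                   --cyclic-method clock_ns --cyclic-sleep 100000 \
                   --cyclic-dist 2500 --cyclic-ops 10000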
1373
1374       --daemon N
1375              start  N workers that each create a daemon that dies immediately
1376              after creating another daemon and so on. This effectively  works
1377              through the process table with short lived processes that do not
1378              have a parent and are waited for by init.  This puts pressure on
1379              init  to  do  rapid child reaping.  The daemon processes perform
1380              the usual mix of calls to turn into  typical  UNIX  daemons,  so
1381              this artificially mimics very heavy daemon system stress.
1382
1383       --daemon-ops N
1384              stop daemon workers after N daemons have been created.
1385
1386       --dccp N
1387              start  N  workers  that send and receive data using the Datagram
1388              Congestion Control Protocol (DCCP) (RFC4340).  This  involves  a
1389              pair of client/server processes performing rapid connects,
1390              sends, receives and disconnects on the local host.
1391
1392       --dccp-domain D
1393              specify the domain to use, the default is ipv4.  Currently  ipv4
1394              and ipv6 are supported.
1395
1396       --dccp-if NAME
1397              use  network  interface NAME. If the interface NAME does not ex‐
1398              ist, is not up or does not support the domain then the  loopback
1399              (lo) interface is used as the default.
1400
1401       --dccp-port P
1402              start DCCP at port P. For N dccp worker processes, ports P
1403              to P + N - 1 are used.
1404
1405       --dccp-ops N
1406              stop dccp stress workers after N bogo operations.
1407
1408       --dccp-opts [ send | sendmsg | sendmmsg ]
1409              by default, messages are sent using send(2). This option  allows
1410              one  to  specify the sending method using send(2), sendmsg(2) or
1411              sendmmsg(2).  Note that sendmmsg is  only  available  for  Linux
1412              systems that support this system call.
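
           For example, assuming the kernel has DCCP support enabled, the
           following would run two dccp stressors over IPv6 using sendmsg(2)
           and stop after 100000 bogo operations:

               stress-ng --dccp 2 --dccp-domain ipv6 --dccp-opts sendmsg \
                   --dccp-ops 100000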
1413
1414       --dekker N
1415              start N workers that exercise mutual exclusion between two
1416              processes using shared memory with Dekker's algorithm. Where
1417              possible this uses memory fencing, falling back to the GCC
1418              __sync_synchronize built-in if fencing is not available. The
1419              stressors contain simple mutex and memory coherency sanity checks.
1420
1421       --dekker-ops N
1422              stop dekker workers after N mutex operations.
1423
1424       -D N, --dentry N
1425              start  N workers that create and remove directory entries.  This
1426              should create file system meta data activity. The directory  en‐
1427              try  names  are suffixed by a gray-code encoded number to try to
1428              mix up the hashing of the namespace.
1429
1430       --dentry-ops N
1431              stop dentry thrash workers after N bogo dentry operations.
1432
1433       --dentry-order [ forward | reverse | stride | random ]
1434              specify unlink order of dentries, can be  one  of  forward,  re‐
1435              verse,  stride  or random.  By default, dentries are unlinked in
1436              random order.  The forward order will unlink them from first  to
1437              last,  reverse order will unlink them from last to first, stride
1438              order will unlink them by stepping around the order in a quasi-ran‐
1439              dom  pattern  and  random order will randomly select one of for‐
1440              ward, reverse or stride orders.
1441
1442       --dentries N
1443              create N dentries per dentry thrashing loop, default is 2048.
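
           For example, the following would run four dentry stressors, each
           creating 4096 directory entries per loop and unlinking them in
           reverse order, stopping after one million bogo dentry operations:

               stress-ng --dentry 4 --dentries 4096 --dentry-order reverse \
                   --dentry-ops 1000000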
1444
1445       --dev N
1446              start N workers that exercise the /dev devices. Each worker runs
1447              5  concurrent  threads that perform open(2), fstat(2), lseek(2),
1448              poll(2), fcntl(2), mmap(2), munmap(2), fsync(2) and close(2)  on
1449              each device.  Note that watchdog devices are not exercised.
1450
1451       --dev-ops N
1452              stop dev workers after N bogo device exercising operations.
1453
1454       --dev-file filename
1455              specify  the device file to exercise, for example, /dev/null. By
1456              default the stressor will work through all the device files
1457              it can find, however, this option allows a single device file
1458              to be exercised.
1459
1460       --dev-shm N
1461              start N workers that fallocate large files in /dev/shm and  then
1462              mmap  these  into memory and touch all the pages. This exercises
1463              pages being moved to/from the buffer cache. Linux only.
1464
1465       --dev-shm-ops N
1466              stop after N bogo allocation and mmap /dev/shm operations.
1467
1468       --dir N
1469              start N workers that create and remove directories  using  mkdir
1470              and rmdir.
1471
1472       --dir-ops N
1473              stop directory thrash workers after N bogo directory operations.
1474
1475       --dir-dirs N
1476              exercise dir on N directories. The default is 8192 directories;
1477              this option allows 64 to 65536 directories to be used instead.
1478
1479       --dirdeep N
1480              start N workers that create a depth-first tree of directories
1481              to a maximum depth as limited by PATH_MAX or ENAMETOOLONG
1482              (whichever occurs first). By default, each level of the tree
1483              contains one directory, but this can be increased using the
1484              --dirdeep-dirs option. To stress inode creation, a symlink and
1485              a hardlink to a file at the root of the tree are created in
1486              each level.
1487
1488       --dirdeep-ops N
1489              stop directory depth workers after N bogo directory operations.
1490
1491       --dirdeep-bytes N
1492              allocated file size, the default is 0. One can specify the  size
1493              as  %  of  free  space  on the file system or in units of Bytes,
1494              KBytes, MBytes and GBytes using the suffix b, k, m or g. Used in
1495              conjunction with the --dirdeep-files option.
1496
1497       --dirdeep-dirs N
1498              create  N  directories at each tree level. The default is just 1
1499              but can be increased to a maximum of 36 per level.
1500
1501       --dirdeep-files N
1502              create N files  at each tree level. The default is  0  with  the
1503              file size specified by the --dirdeep-bytes option.
1504
1505       --dirdeep-inodes N
1506              consume  up  to N inodes per dirdeep stressor while creating di‐
1507              rectories and links. The value N can be the number of inodes  or
1508              a  percentage of the total available free inodes on the filesys‐
1509              tem being used.
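
           For example, the following would run one dirdeep stressor
           creating 4 directories per tree level, each level containing 2
           files of 4K, limited to 5% of the free inodes:

               stress-ng --dirdeep 1 --dirdeep-dirs 4 --dirdeep-files 2 \
                   --dirdeep-bytes 4k --dirdeep-inodes 5%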
1510
1511       --dirmany N
1512              start N stressors that create as many files in  a  directory  as
1513              possible  and  then  remove  them. The file creation phase stops
1514              when an error occurs (for  example,  out  of  inodes,  too  many
1515              files, quota reached, etc.) and then the files are removed. This
1516              cycles until the run time is reached or the file creation  count
1517              bogo-ops metric is reached. This is a much faster and more
1518              lightweight directory exercising stressor than the dentry
1519              stressor.
1520
1521       --dirmany-ops N
1522              stop dirmany stressors after N empty files have been created.
1523
1524       --dirmany-bytes N
1525              allocated  file size, the default is 0. One can specify the size
1526              as % of free space on the file system  or  in  units  of  Bytes,
1527              KBytes, MBytes and GBytes using the suffix b, k, m or g.
1528
1529       --dnotify N
1530              start  N  workers performing file system activities such as mak‐
1531              ing/deleting files/directories, renaming files, etc.  to  stress
1532              exercise the various dnotify events (Linux only).
1533
1534       --dnotify-ops N
1535              stop dnotify stress workers after N dnotify bogo operations.
1536
1537       --dup N
1538              start N workers that perform dup(2) and then close(2) operations
1539              on /dev/zero.  The maximum opens at one time is system  defined,
1540              so the test will run up to this maximum, or 65536 open file
1541              descriptors, whichever comes first.
1542
1543       --dup-ops N
1544              stop the dup stress workers after N bogo open operations.
1545
1546       --dynlib N
1547              start N workers that dynamically load and unload various  shared
1548              libraries.  This exercises memory mapping and dynamic code load‐
1549              ing and symbol lookups. See dlopen(3) for more details  of  this
1550              mechanism.
1551
1552       --dynlib-ops N
1553              stop workers after N bogo load/unload cycles.
1554
1555       --efivar N
1556              start N workers that exercise the Linux /sys/firmware/efi/vars in‐
1557              terface by reading the EFI  variables.  This  is  a  Linux  only
1558              stress  test  for  platforms that support the EFI vars interface
1559              and requires the CAP_SYS_ADMIN capability.
1560
1561       --efivar-ops N
1562              stop the efivar stressors after N EFI variable read operations.
1563
1564       --enosys N
1565              start N workers that exercise non-functional  system  call  num‐
1566              bers.  This  calls a wide range of system call numbers to see if
1567              it can break a system where these are not  wired  up  correctly.
1568              It  also keeps track of system calls that exist (ones that don't
1569              return ENOSYS) so that it can focus on purely finding and  exer‐
1570              cising non-functional system calls. This stressor exercises sys‐
1571              tem calls from 0 to __NR_syscalls + 1024, random system calls
1572              constrained within the ranges of 0 to 2↑8, 2↑16, 2↑24, 2↑32,
1573              2↑40, 2↑48, 2↑56 and 2↑64, high system call numbers and
1574              various  other bit patterns to try to get wide coverage. To keep
1575              the environment clean, each system call being tested runs  in  a
1576              child process with reduced capabilities.
1577
1578       --enosys-ops N
1579              stop after N bogo enosys system call attempts
1580
1581       --env N
1582              start N workers that create numerous large environment vari‐
1583              ables  to  try  to  trigger  out  of  memory  conditions   using
1584              setenv(3).  If ENOMEM occurs then the environment is emptied and
1585              another memory filling retry occurs.  The process  is  restarted
1586              if it is killed by the Out Of Memory (OOM) killer.
1587
1588       --env-ops N
1589              stop after N bogo setenv/unsetenv attempts.
1590
1591       --epoll N
1592              start  N  workers that perform various related socket stress ac‐
1593              tivity using epoll_wait(2) to monitor  and  handle  new  connec‐
1594              tions.  This  involves  client/server processes performing rapid
1595              connect, send/receives and disconnects on the local host.  Using
1596              connects, send/receives and disconnects on the local host. Using
1597              handled, however, this can lead to the connection table  filling
1598              up  and  blocking further socket connections, hence impacting on
1599              the epoll bogo op stats.  For ipv4 and  ipv6  domains,  multiple
1600              servers are spawned on multiple ports. The epoll stressor is for
1601              Linux only.
1602
1603       --epoll-domain D
1604              specify the domain to use, the default is unix (aka local). Cur‐
1605              rently ipv4, ipv6 and unix are supported.
1606
1607       --epoll-port P
1608              start at socket port P. For N epoll worker processes, ports P
1609              to P + (N × 4) - 1 are used for the ipv4 and ipv6 domains and
1610              ports P to P + N - 1 are used for the unix domain.
1611
1612       --epoll-ops N
1613              stop epoll workers after N bogo operations.
1614
1615       --epoll-sockets N
1616              specify  the maximum number of concurrently open sockets allowed
1617              in the server. Setting a high value impacts on memory usage and may
1618              trigger out of memory conditions.
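
           For example, the following would run two epoll stressors over
           IPv4, starting at port 8000 (an arbitrary example port) and
           limiting each server to 512 concurrently open sockets:

               stress-ng --epoll 2 --epoll-domain ipv4 --epoll-port 8000 \
                   --epoll-sockets 512 --epoll-ops 100000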
1619
1620       --eventfd N
1621              start  N parent and child worker processes that read and write 8
1622              byte event messages  between  them  via  the  eventfd  mechanism
1623              (Linux only).
1624
1625       --eventfd-ops N
1626              stop eventfd workers after N bogo operations.
1627
1628       --eventfd-nonblock
1629              enable  EFD_NONBLOCK to allow non-blocking on the event file de‐
1630              scriptor. This will cause reads and writes to return with
1631              EAGAIN rather than blocking, hence causing a high rate of
1632              polling I/O.
1633
1634       --exec N
1635              start N workers continually forking children that exec stress-ng
1636              and  then  exit almost immediately. If a system has pthread sup‐
1637              port then 1 in 4 of the exec's will be from inside a pthread  to
1638              exercise exec'ing from inside a pthread context.
1639
1640       --exec-ops N
1641              stop exec stress workers after N bogo operations.
1642
1643       --exec-fork-method [ clone | fork | spawn | vfork ]
1644              select  the  process  creation  method  using clone(2), fork(2),
1645              posix_spawn(3) or vfork(2). Note that vfork will only exec  pro‐
1646              grams  using  execve  due to the constraints on the shared stack
1647              between the parent and the child process.
1648
1649       --exec-max P
1650              create P child processes that exec stress-ng and then  wait  for
1651              them  to  exit per iteration. The default is 4096; higher values
1652              may create many temporary zombie processes that are  waiting  to
1653              be  reaped.  One can potentially fill up the process table using
1654              high values for --exec-max and --exec.
1655
1656       --exec-method [ all | execve | execveat ]
1657              select the exec system call to use; all will  perform  a  random
1658              choice  between  execve(2)  and execveat(2), execve will use ex‐
1659              ecve(2) and execveat will use execveat(2) if it is available.
1660
1661       --exec-no-pthread
1662              do not use pthread_create(3).
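
           For example, the following would run two exec stressors that
           create children using vfork(2) and exec them with execve(2),
           stopping after 10000 bogo exec operations:

               stress-ng --exec 2 --exec-fork-method vfork \
                   --exec-method execve --exec-ops 10000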
1663
1664       --exit-group N
1665              start N workers  that  create  16  pthreads  and  terminate  the
1666              pthreads  and the controlling child process using exit_group(2).
1667              (Linux only stressor).
1668
1669       --exit-group-ops N
1670              stop after N iterations of pthread creation and deletion loops.
1671
1672       -F N, --fallocate N
1673              start N  workers  continually  fallocating  (preallocating  file
1674              space)  and  ftruncating  (file truncating) temporary files.  If
1675              the file is larger than the free space, fallocate  will  produce
1676              an ENOSPC error which is ignored by this stressor.
1677
1678       --fallocate-bytes N
1679              allocated  file  size,  the default is 1 GB. One can specify the
1680              size as % of free space on the file system or in units of Bytes,
1681              KBytes, MBytes and GBytes using the suffix b, k, m or g.
1682
1683       --fallocate-ops N
1684              stop fallocate stress workers after N bogo fallocate operations.
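
           For example, the following would run two fallocate stressors,
           each preallocating and truncating a 2 GB temporary file, and stop
           after 1000 bogo fallocate operations:

               stress-ng --fallocate 2 --fallocate-bytes 2g \
                   --fallocate-ops 1000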
1685
1686       --fanotify N
1687              start N workers performing file system activities such as creat‐
1688              ing, opening, writing, reading and unlinking files  to  exercise
1689              the  fanotify  event  monitoring  interface  (Linux  only). Each
1690              stressor runs a child process to generate file events and a par‐
1691              ent  process  to  read file events using fanotify. Has to be run
1692              with CAP_SYS_ADMIN capability.
1693
1694       --fanotify-ops N
1695              stop fanotify stress workers after N bogo fanotify events.
1696
1697       --fault N
1698              start N workers that generate minor and major page faults.
1699
1700       --fault-ops N
1701              stop the page fault workers after N bogo page fault operations.
1702
1703       --fcntl N
1704              start N workers that perform fcntl(2) calls  with  various  com‐
1705              mands.   The  exercised  commands  (if  available) are: F_DUPFD,
1706              F_DUPFD_CLOEXEC, F_GETFD, F_SETFD, F_GETFL,  F_SETFL,  F_GETOWN,
1707              F_SETOWN, F_GETOWN_EX, F_SETOWN_EX, F_GETSIG, F_SETSIG, F_GETLK,
1708              F_SETLK, F_SETLKW, F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW.
1709
1710       --fcntl-ops N
1711              stop the fcntl workers after N bogo fcntl operations.
1712
1713       --fiemap N
1714              start N workers that each create a file with many randomly
1715              changing extents and have 4 child processes per worker that
1716              gather the extent information using the FS_IOC_FIEMAP ioctl(2).
1717
1718       --fiemap-ops N
1719              stop after N fiemap bogo operations.
1720
1721       --fiemap-bytes N
1722              specify the size of the fiemap'd file in bytes.  One can specify
1723              the  size  as  % of free space on the file system or in units of
1724              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or  g.
1725              Larger files will contain more extents, causing more stress when
1726              gathering extent information.
1727
1728       --fifo N
1729              start N workers that exercise a named pipe  by  transmitting  64
1730              bit integers.
1731
1732       --fifo-ops N
1733              stop fifo workers after N bogo pipe write operations.
1734
1735       --fifo-readers N
1736              for  each  worker,  create  N  fifo reader workers that read the
1737              named pipe using simple blocking reads.
1738
1739       --file-ioctl N
1740              start N workers that exercise  various  file  specific  ioctl(2)
1741              calls. This will attempt to use the FIONBIO, FIOQSIZE,
1742              FIGETBSZ, FIOCLEX, FIONCLEX, FIOASYNC, FIFREEZE, FITHAW,
1743              FICLONE, FICLONERANGE, FIONREAD, FIONWRITE and FS_IOC_RESVSP
1744              ioctls if these are defined.
1745
1746       --file-ioctl-ops N
1747              stop file-ioctl workers after N file ioctl bogo operations.
1748
1749       --filename N
1750              start N workers that exercise file creation using various length
1751              filenames  containing  a  range  of allowed filename characters.
1752              This will try to see if it can exceed the file system's al‐
1753              lowed filename length as well as test various filename lengths
1754              between 1 and the maximum allowed by the file system.
1755
1756       --filename-ops N
1757              stop filename workers after N bogo filename tests.
1758
1759       --filename-opts opt
1760              use characters in the filename based on option 'opt'. Valid  op‐
1761              tions are:
1762
1763              Option    Description
1764              probe     default option, probe the file system for valid
1765                        allowed characters in a file name and use these
1766              posix     use characters as specified by The  Open  Group
1767                        Base   Specifications  Issue  7,  POSIX.1-2008,
1768                        3.278 Portable Filename Character Set
1769              ext       use characters allowed by the ext2, ext3,  ext4
1770                        file  systems, namely any 8 bit character apart
1771                        from NUL and /
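
           For example, the following would run one filename stressor re‐
           stricted to the POSIX portable filename character set and stop
           after 500 bogo filename tests:

               stress-ng --filename 1 --filename-opts posix --filename-ops 500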
1772
1773       --flock N
1774              start N workers locking on a single file.
1775
1776       --flock-ops N
1777              stop flock stress workers after N bogo flock operations.
1778
1779       -f N, --fork N
1780              start N workers continually forking  children  that  immediately
1781              exit.
1782
1783       --fork-ops N
1784              stop fork stress workers after N bogo operations.
1785
1786       --fork-max P
1787              create  P child processes and then wait for them to exit per it‐
1788              eration. The default is just 1; higher values will  create  many
1789              temporary  zombie  processes  that are waiting to be reaped. One
1790              can potentially fill up the process table using high values  for
1791              --fork-max and --fork.
1792
1793       --fork-vm
1794              enable detrimental performance virtual memory advice using
1795              madvise on all pages of the forked process. Where possible
1796              this will try to set every page in the new process using the
1797              madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and
1798              MADV_RANDOM flags. Linux only.
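
           For example, the following would run four fork stressors, each
           forking and reaping 64 children per iteration, and stop after
           100000 bogo fork operations:

               stress-ng --fork 4 --fork-max 64 --fork-ops 100000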
1799
1800       --fp-error N
1801              start  N workers that generate floating point exceptions. Compu‐
1802              tations are performed to force and check for  the  FE_DIVBYZERO,
1803              FE_INEXACT, FE_INVALID, FE_OVERFLOW and FE_UNDERFLOW exceptions.
1804              EDOM and ERANGE errors are also checked.
1805
1806       --fp-error-ops N
1807              stop after N bogo floating point exceptions.
1808
1809       --fpunch N
1810              start N workers that punch and fill holes in a 16 MB file  using
1811              five  concurrent  processes  per stressor exercising on the same
1812              file.   Where   available,   this   uses    fallocate(2)    FAL‐
1813              LOC_FL_KEEP_SIZE,   FALLOC_FL_PUNCH_HOLE,  FALLOC_FL_ZERO_RANGE,
1814              FALLOC_FL_COLLAPSE_RANGE and FALLOC_FL_INSERT_RANGE to make
1815              and fill holes across the file and break it into multiple extents.
1816
1817       --fpunch-ops N
1818              stop fpunch workers after N punch and fill bogo operations.
1819
1820       --fstat N
1821              start  N  workers  fstat'ing  files  in  a directory (default is
1822              /dev).
1823
1824       --fstat-ops N
1825              stop fstat stress workers after N bogo fstat operations.
1826
1827       --fstat-dir directory
1828              specify the directory to fstat to override the default of  /dev.
1829              All the files in the directory will be fstat'd repeatedly.
1830
1831       --full N
1832              start N workers that exercise /dev/full.  This attempts to write
1833              to the device (which should always get error  ENOSPC),  to  read
1834              from  the  device (which should always return a buffer of zeros)
1835              and to seek randomly on the device  (which  should  always  suc‐
1836              ceed).  (Linux only).
1837
1838       --full-ops N
1839              stop the stress full workers after N bogo I/O operations.
1840
1841       --funccall N
1842              start N workers that call functions of 1 through to 9 arguments.
1843              By default functions with uint64_t arguments  are  called,  how‐
1844              ever, this can be changed using the --funccall-method option.
1845
1846       --funccall-ops N
1847              stop the funccall workers after N bogo function call operations.
1848              Each bogo operation is 1000 calls of functions of 1 through to 9
1849              arguments of the chosen argument type.
1850
1851       --funccall-method method
1852              specify the method of funccall argument type to be used. The de‐
1853              fault is uint64_t but can be one of bool, uint8, uint16, uint32,
1854              uint64,  uint128,  float,  double,  longdouble,  cfloat (complex
1855              float), cdouble (complex double), clongdouble (complex long dou‐
1856              ble),  float16,  float32, float64, float80, float128, decimal32,
1857              decimal64 and decimal128.  Note that some  of  these  types  are
1858              only  available  with  specific  architectures and compiler ver‐
1859              sions.
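
           For example, assuming the compiler and architecture support the
           decimal64 type, the following would run two funccall stressors
           using decimal64 arguments and stop after 10000 bogo function call
           operations:

               stress-ng --funccall 2 --funccall-method decimal64 \
                   --funccall-ops 10000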
1860
1861       --funcret N
1862              start N workers that pass and return by value various  small  to
1863              large data types.
1864
1865       --funcret-ops N
1866              stop the funcret workers after N bogo function call operations.
1867
1868       --funcret-method method
1869              specify  the method of funcret argument type to be used. The de‐
1870              fault is uint64_t but can be one of uint8, uint16, uint32,
1871              uint64, uint128, float, double, longdouble, float80, float128,
1872              decimal32, decimal64, decimal128, uint8x32, uint8x128 and uint64x128.
1873
1874       --futex N
1875              start N workers that rapidly exercise  the  futex  system  call.
1876              Each worker has two processes, a futex waiter and a futex waker.
1877              The waiter waits with a very small timeout to stress the timeout
1878              and  rapid polled futex waiting. This is a Linux specific stress
1879              option.
1880
1881       --futex-ops N
1882              stop futex workers after N bogo  successful  futex  wait  opera‐
1883              tions.
1884
1885       --get N
1886              start  N workers that call system calls that fetch data from the
1887              kernel, currently these are: getpid,  getppid,  getcwd,  getgid,
1888              getegid,  getuid,  getgroups, getpgrp, getpgid, getpriority, ge‐
1889              tresgid, getresuid, getrlimit, prlimit, getrusage, getsid,  get‐
1890              tid,  getcpu,  gettimeofday,  uname,  adjtimex,  sysfs.  Some of
1891              these system calls are OS specific.
1892
1893       --get-ops N
1894              stop get workers after N bogo get operations.
1895
1896       --getdent N
1897              start N workers that recursively read directories /proc,  /dev/,
1898              /tmp, /sys and /run using getdents and getdents64 (Linux only).
1899
1900       --getdent-ops N
1901              stop getdent workers after N getdent bogo operations.
1902
1903       --getrandom N
1904              start N workers that get 8192 random bytes from the /dev/urandom
1905              pool using the getrandom(2) system call (Linux) or getentropy(2)
1906              (OpenBSD).
1907
1908       --getrandom-ops N
1909              stop getrandom workers after N bogo get operations.
1910
1911       --goto N
1912              start  N workers that perform 1024 forward branches (to next in‐
1913              struction) or backward branches (to  previous  instruction)  for
1914              each  bogo  operation loop.  By default, every 1024 branches the
1915              direction is randomly chosen to be forward  or  backward.   This
1916              stressor  exercises  suboptimal  pipelined  execution and branch
1917              prediction logic.
1918
1919       --goto-ops N
1920              stop goto workers after N bogo loops  of  1024  branch  instruc‐
1921              tions.
1922
1923       --goto-direction [ forward | backward | random ]
1924              select the branching direction in the stressor loop: forward
1925              for forward only branching, backward for backward only branch‐
1926              ing and random for a random choice of forward or backward
1927              branching every 1024 branches.
1928
1929       --gpu N
1930              start N workers that exercise the GPU. This specifies a 2-D
1931              texture image that allows the elements of an image array to be
1932              read by shaders, and renders primitives using an OpenGL context.
1933
1934       --gpu-ops N
1935              stop gpu workers after N render loop operations.
1936
1937       --gpu-devnode DEVNAME
1938              specify the device node name of the GPU device, the  default  is
1939              /dev/dri/renderD128.
1940
1941       --gpu-frag N
1942              specify  shader  core  usage per pixel, this sets N loops in the
1943              fragment shader.
1944
1945       --gpu-tex-size N
1946              specify upload texture N × N, by default this value  is  4096  ×
1947              4096.
1948
1949       --gpu-xsize X
1950              use a framebuffer width of X pixels. The default is 256 pixels.
1951
1952       --gpu-ysize Y
1953              use a framebuffer height of Y pixels. The default is 256 pixels.
1954
1955       --gpu-upload N
1956              specify  upload  texture N times per frame, the default value is
1957              1.
1958
1959       --handle N
1960              start N  workers  that  exercise  the  name_to_handle_at(2)  and
1961              open_by_handle_at(2) system calls. (Linux only).
1962
1963       --handle-ops N
1964              stop after N handle bogo operations.
1965
1966       --hash N
1967              start N workers that exercise various hashing functions. Random
1968              strings from 1 to 128 bytes are hashed and the hashing rate and
1969              chi squared are calculated from the number of hashes performed
1970              over a period of time. The chi squared value is a goodness-of-
1971              fit measure comparing the actual distribution of items in hash
1972              buckets with the expected distribution of items. Typically a
1973              chi squared value of 0.95..1.05 indicates a good hash distribu‐
1974              tion.
1975
1976       --hash-ops N
1977              stop after N hashing rounds
1978
1979       --hash-method method
1980              specify the hashing method to use, by default  all  the  hashing
1981              methods are cycled through. Methods available are:
1982
1983              Method          Description
1984              all             cycle through all the hashing methods
1985              adler32         Mark  Adler checksum, a modification of
1986                              the Fletcher checksum
1987              coffin          xor and 5 bit rotate left hash
1988              coffin32        xor and 5 bit rotate left hash with  32
1989                              bit fetch optimization
1990              crc32c          compute CRC32C (Castagnoli CRC32) inte‐
1991                              ger hash
1992              djb2a           Dan Bernstein hash using the xor  vari‐
1993                              ant
1994              fnv1a           FNV-1a  Fowler-Noll-Vo  hash  using the
1995                              xor then multiply variant
1996              jenkin          Bob Jenkins' integer hash
1997              kandr           Kernighan and Ritchie's multiply by 31
1998                              and add hash from "The C Programming
1999                              Language", 2nd Edition
2000              knuth           Donald E. Knuth's hash from "The Art Of
2001                              Computer  Programming", Volume 3, chap‐
2002                              ter 6.4
2003              loselose        Kernighan and Ritchie's simple hash from
2004                              "The C Programming Language", 1st Edi‐
2005                              tion
2006              mid5            xor shift hash of the middle 5  charac‐
2007                              ters  of  the string. Designed by Colin
2008                              Ian King
2009              muladd32        simple multiply and add hash  using  32
2010                              bit math and xor folding of overflow
2011              muladd64        simple  multiply  and add hash using 64
2012                              bit math and xor folding of overflow
2013              mulxror64       64 bit multiply, xor and rotate  right.
2014                              Mangles  64  bits  where  possible. De‐
2015                              signed by Colin Ian King
2016              murmur3_32      murmur3_32 hash, Austin Appleby's  Mur‐
2017                              mur3 hash, 32 bit variant
2018              nhash           exim's nhash.
2020              pjw             a  non-cryptographic hash function cre‐
2021                              ated by Peter  J.  Weinberger  of  AT&T
2022                              Bell  Labs,  used  in  UNIX  ELF object
2023                              files
2024              sdbm            sdbm hash as used in the SDBM  database
2025                              and GNU awk
2026              x17             multiply by 17 and add. The multiplica‐
2027                              tion can be optimized down  to  a  fast
2028                              right shift by 4 and add on some archi‐
2029                              tectures
2030              xor             simple rotate shift and xor of values
2031              xxhash          the  "Extremely  fast"  hash  in   non-
2032                              streaming mode
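
           For example, the following would run one hash stressor using only
           the crc32c method and stop after 100000 hashing rounds:

               stress-ng --hash 1 --hash-method crc32c --hash-ops 100000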
2033
2034       -d N, --hdd N
2035              start N workers continually writing, reading and removing tempo‐
2036              rary files. The default mode is to stress test sequential writes
2037              and reads. With the --aggressive option enabled and without
2038              any --hdd-opts options the hdd stressor will work through all
2039              the --hdd-opts options one by one to cover a range of I/O options.
2040
2041       --hdd-bytes N
2042              write N bytes for each hdd process, the default is 1 GB. One can
2043              specify the size as % of free space on the  file  system  or  in
2044              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2045              m or g.
2046
2047       --hdd-opts list
2048              specify various stress test options as a comma  separated  list.
2049              Options are as follows:
2050
2051              Option        Description
2052              direct        try  to minimize cache effects of the I/O. File
2053                            I/O writes are  performed  directly  from  user
2054                            space  buffers and synchronous transfer is also
2055                            attempted.  To guarantee synchronous I/O,  also
2056                            use the sync option.
2057              dsync         ensure  output has been transferred to underly‐
2058                            ing hardware and file metadata has been updated
2059                            (using  the O_DSYNC open flag). This is equiva‐
2060                            lent to each write(2) being followed by a  call
2061                            to fdatasync(2). See also the fdatasync option.
2062              fadv-dontneed advise  kernel  to  expect the data will not be
2063                            accessed in the near future.
2064              fadv-noreuse  advise kernel to expect the data to be accessed
2065                            only once.
2066              fadv-normal   advise kernel there is no explicit access pat‐
2067                            tern for the data. This is the default advice
2068                            assumption.
2069              fadv-rnd      advise  kernel to expect random access patterns
2070                            for the data.
2071              fadv-seq      advise kernel to expect sequential access  pat‐
2072                            terns for the data.
2073              fadv-willneed advise kernel to expect the data to be accessed
2074                            in the near future.
2075              fsync         flush all  modified  in-core  data  after  each
2076                            write  to  the  output device using an explicit
2077                            fsync(2) call.
2078              fdatasync     similar to fsync, but do not flush the modified
2079                            metadata  unless metadata is required for later
2080                            data reads to be handled correctly.  This  uses
2081                            an explicit fdatasync(2) call.
2082              iovec         use  readv/writev  multiple  buffer I/Os rather
2083                            than read/write. Instead of 1 read/write opera‐
2084                            tion,  the buffer is broken into an iovec of 16
2085                            buffers.
2086              noatime       do not update the file last  access  timestamp,
2087                            this can reduce metadata writes.
2088              sync          ensure output has been transferred to underly‐
2089                            ing hardware (using the O_SYNC open flag). This
2090                            is equivalent to each write(2) being followed
2091                            by a call to fsync(2). See also the fsync op‐
2092                            tion.
2096              rd-rnd        read data randomly. By default, written data is
2097                            not read back, however, this option will  force
2098                            it to be read back randomly.
2099              rd-seq        read  data  sequentially.  By  default, written
2100                            data is not read  back,  however,  this  option
2101                            will force it to be read back sequentially.
2102              syncfs        write  all buffered modifications of file meta‐
2103                            data and data on the filesystem  that  contains
2104                            the hdd worker files.
2105              utimes        force  update  of  file timestamp which may in‐
2106                            crease metadata writes.
2107              wr-rnd        write data randomly. The wr-seq  option  cannot
2108                            be used at the same time.
2109              wr-seq        write data sequentially. This is the default if
2110                            no write modes are specified.
2111
2112       Note that some of these options are mutually  exclusive,  for  example,
2113       there  can  be  only  one  method of writing or reading.  Also, fadvise
2114       flags may be mutually exclusive, for example  fadv-willneed  cannot  be
2115       used with fadv-dontneed.
2116
2117       --hdd-ops N
2118              stop hdd stress workers after N bogo operations.
2119
2120       --hdd-write-size N
2121              specify  size of each write in bytes. Size can be from 1 byte to
2122              4MB.
2123
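              As an illustrative combination of the write modes listed above
              (assuming the --hdd and --hdd-opts options described earlier in
              this manual), the following runs two hdd workers performing
              random 4096 byte writes with an fsync after each write:

                     # illustrative example; --hdd and --hdd-opts are
                     # documented with the other hdd options
                     stress-ng --hdd 2 --hdd-opts wr-rnd,fsync \
                               --hdd-write-size 4096 --hdd-ops 100000
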
2124       --heapsort N
2125              start N workers that sort 32 bit integers using  the  BSD  heap‐
2126              sort.
2127
2128       --heapsort-ops N
2129              stop heapsort stress workers after N bogo heapsorts.
2130
2131       --heapsort-size N
2132              specify  number  of  32  bit integers to sort, default is 262144
2133              (256 × 1024).
2134
2135       --hrtimers N
              start N workers that exercise high resolution timers at a high
2137              frequency.  Each stressor starts 32 processes that run with ran‐
2138              dom timer  intervals  of  0..499999  nanoseconds.  Running  this
2139              stressor  with  appropriate  privilege  will  run these with the
2140              SCHED_RR policy.
2141
2142       --hrtimers-ops N
2143              stop hrtimers stressors after N timer event bogo operations
2144
2145       --hrtimers-adjust
2146              enable automatic timer rate adjustment to try  to  maximize  the
2147              hrtimer  frequency.   The signal rate is measured every 0.1 sec‐
2148              onds and the hrtimer delay is adjusted to try and set the  opti‐
2149              mal hrtimer delay to generate the highest hrtimer rates.
2150
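              For example, to run two hrtimer workers with automatic rate
              adjustment for 30 seconds (using the general --timeout option),
              one might use:

                     # illustrative example; --timeout is a general option
                     stress-ng --hrtimers 2 --hrtimers-adjust --timeout 30s
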
2151       --hsearch N
              start N workers that search an 80% full hash table using
2153              hsearch(3). By default, there are 8192  elements  inserted  into
2154              the  hash  table.  This is a useful method to exercise access of
2155              memory and processor cache.
2156
2157       --hsearch-ops N
2158              stop the hsearch workers after N  bogo  hsearch  operations  are
2159              completed.
2160
2161       --hsearch-size N
2162              specify  the number of hash entries to be inserted into the hash
2163              table. Size can be from 1K to 4M.
2164
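              For example, a sketch that runs four hsearch workers against a
              64K entry hash table for a fixed number of search operations:

                     # illustrative example: 4 workers, 65536 hash entries
                     stress-ng --hsearch 4 --hsearch-size 65536 \
                               --hsearch-ops 1000000
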
2165       --icache N
2166              start N workers that stress the instruction cache by forcing in‐
2167              struction  cache  reloads.  This is achieved by modifying an in‐
2168              struction cache line,  causing the processor to reload  it  when
              we call a function inside it. Currently only verified and en‐
2170              abled for Intel x86 CPUs.
2171
2172       --icache-ops N
2173              stop the icache workers after N bogo icache operations are  com‐
2174              pleted.
2175
2176       --icmp-flood N
              start N workers that flood localhost with randomly sized ICMP
              ping packets. This stressor requires the CAP_NET_RAW capability.
2179
2180       --icmp-flood-ops N
2181              stop icmp flood workers after N  ICMP  ping  packets  have  been
2182              sent.
2183
2184       --idle-scan N
2185              start N workers that scan the idle page bitmap across a range of
2186              physical pages. This sets and checks for idle pages via the idle
2187              page  tracking  interface /sys/kernel/mm/page_idle/bitmap.  This
2188              is for Linux only.
2189
2190       --idle-scan-ops N
2191              stop after N bogo page scan operations. Currently one bogo  page
2192              scan operation is equivalent to setting and checking 64 physical
2193              pages.
2194
2195       --idle-page N
              start N workers that walk through every page exercising the
2197              Linux    /sys/kernel/mm/page_idle/bitmap   interface.   Requires
2198              CAP_SYS_RESOURCE capability.
2199
2200       --idle-page-ops N
2201              stop after N bogo idle page operations.
2202
2203       --inode-flags N
2204              start N workers that exercise inode flags using the  FS_IOC_GET‐
2205              FLAGS  and  FS_IOC_SETFLAGS ioctl(2). This attempts to apply all
2206              the available inode flags onto a directory and file even if  the
2207              underlying  file  system may not support these flags (errors are
2208              just ignored).  Each worker runs 4  threads  that  exercise  the
2209              flags on the same directory and file to try to force races. This
2210              is a Linux only stressor, see ioctl_iflags(2) for more details.
2211
2212       --inode-flags-ops N
2213              stop the inode-flags workers after  N  ioctl  flag  setting  at‐
2214              tempts.
2215
2216       --inotify N
2217              start  N  workers performing file system activities such as mak‐
2218              ing/deleting files/directories, moving files, etc. to stress ex‐
2219              ercise the various inotify events (Linux only).
2220
2221       --inotify-ops N
2222              stop inotify stress workers after N inotify bogo operations.
2223
2224       -i N, --io N
2225              start  N  workers  continuously calling sync(2) to commit buffer
2226              cache to disk.  This can be used in conjunction with  the  --hdd
2227              options.
2228
2229       --io-ops N
2230              stop io stress workers after N bogo operations.
2231
2232       --iomix N
2233              start  N  workers  that  perform a mix of sequential, random and
2234              memory mapped read/write operations as well as random copy  file
2235              read/writes,  forced  sync'ing  and (if run as root) cache drop‐
2236              ping.  Multiple child processes are spawned to all share a  sin‐
2237              gle file and perform different I/O operations on the same file.
2238
2239       --iomix-bytes N
2240              write  N  bytes  for each iomix worker process, the default is 1
2241              GB. One can specify the size as % of free space on the file sys‐
2242              tem  or  in  units of Bytes, KBytes, MBytes and GBytes using the
2243              suffix b, k, m or g.
2244
2245       --iomix-ops N
2246              stop iomix stress workers after N bogo iomix I/O operations.
2247
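              For example, a sketch that limits each iomix worker to 10% of
              the free space on the file system and stops after two minutes
              (using the general --timeout option):

                     # illustrative example; size given as % of free space
                     stress-ng --iomix 2 --iomix-bytes 10% --timeout 2m
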
2248       --ioport N
              start N workers that perform bursts of 16 reads and 16 writes of
2250              ioport  0x80  (x86  Linux  systems  only).  I/O performed on x86
2251              platforms on port 0x80 will cause delays on the  CPU  performing
2252              the I/O.
2253
2254       --ioport-ops N
2255              stop the ioport stressors after N bogo I/O operations
2256
2257       --ioport-opts [ in | out | inout ]
              specify whether port reads (in), port writes (out) or both
              reads and writes (inout) are to be performed. The default is
              both in and out.
2260
2261       --ioprio N
2262              start  N  workers  that  exercise  the  ioprio_get(2)  and   io‐
2263              prio_set(2) system calls (Linux only).
2264
2265       --ioprio-ops N
2266              stop after N io priority bogo operations.
2267
2268       --io-uring N
2269              start N workers that perform iovec write and read I/O operations
              using the Linux io-uring interface. On each bogo-loop 1024 ×
              512 byte writes and 1024 × 512 byte reads are performed on a
              temporary file.
2272
       --io-uring-ops N
2274              stop after N rounds of write and reads.
2275
2276       --ipsec-mb N
2277              start  N workers that perform cryptographic processing using the
2278              highly optimized Intel Multi-Buffer Crypto  for  IPsec  library.
              Depending on the features available, SSE, AVX, AVX2 and AVX512
2280              CPU features will be used on data encrypted by SHA,  DES,  CMAC,
2281              CTR, HMAC MD5, HMAC SHA1 and HMAC SHA512 cryptographic routines.
2282              This is only available for x86-64 modern Intel CPUs.
2283
2284       --ipsec-mb-ops N
2285              stop after N rounds of processing  of  data  using  the  crypto‐
2286              graphic routines.
2287
2288       --ipsec-mb-feature [ sse | avx | avx2 | avx512 ]
              use only the specified CPU feature. By default, all
2290              the available features for the CPU are exercised.
2291
2292       --ipsec-mb-jobs N
              Process N multi-block rounds of cryptographic processing per it‐
2294              eration. The default is 256.
2295
2296       --itimer N
2297              start  N  workers that exercise the system interval timers. This
2298              sets up an ITIMER_PROF itimer that generates a  SIGPROF  signal.
2299              The  default  frequency  for  the  itimer is 1 MHz, however, the
              Linux kernel will set this to be no more than the jiffy setting,
2301              hence  high frequency SIGPROF signals are not normally possible.
2302              A busy loop spins on getitimer(2) calls to consume CPU and hence
2303              decrement  the  itimer  based on amount of time spent in CPU and
2304              system time.
2305
2306       --itimer-ops N
2307              stop itimer stress workers after N bogo itimer SIGPROF signals.
2308
2309       --itimer-freq F
2310              run itimer at F Hz; range from 1 to  1000000  Hz.  Normally  the
2311              highest  frequency  is  limited by the number of jiffy ticks per
2312              second, so running above 1000 Hz is difficult to attain in prac‐
2313              tice.
2314
2315       --itimer-rand
2316              select  an  interval  timer  frequency based around the interval
2317              timer frequency +/- 12.5% random jitter.  This  tries  to  force
2318              more  variability  in  the timer interval to make the scheduling
2319              less predictable.
2320
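              For example, to run two itimer workers at a 1000 Hz rate with
              random timer jitter for 60 seconds (using the general --timeout
              option), one might use:

                     # illustrative example; 1000 Hz is near the jiffy limit
                     stress-ng --itimer 2 --itimer-freq 1000 \
                               --itimer-rand --timeout 60s
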
2321       --jpeg N
2322              start N workers that use jpeg compression on a machine generated
2323              plasma field image. The default image is a plasma field, however
2324              different image types may be selected. The starting raster  line
2325              is  changed  on  each  compression iteration to cycle around the
2326              data.
2327
2328       --jpeg-ops N
2329              stop after N jpeg compression operations.
2330
2331       --jpeg-height H
              use an RGB sample image height of H pixels. The default is 512
2333              pixels.
2334
2335       --jpeg-image [ brown | flat | gradient | noise | plasma | xstripes ]
2336              select  the  source image type to be compressed. Available image
2337              types are:
2338
2339              Type           Description
2340              brown          brown noise, red and green values  vary
2341                             by a 3 bit value, blue values vary by a
2342                             2 bit value.
2343              flat           a single random colour for  the  entire
2344                             image.
2345              gradient       linear  gradient  of the red, green and
2346                             blue components across  the  width  and
2347                             height of the image.
2348              noise          random white noise for red, green, blue
2349                             values.
2350              plasma         plasma field with smooth colour transi‐
2351                             tions and hard boundary edges.
2352              xstripes       a  random  colour  for  each horizontal
2353                             line.
2354
       --jpeg-width W
              use an RGB sample image width of W pixels. The default is 512
              pixels.
2358
2359       --jpeg-quality Q
2360              use  the  compression  quality Q. The range is 1..100 (1 lowest,
              100 highest), with a default of 95.
2362
2363       --judy N
2364              start N workers that insert, search and delete 32  bit  integers
2365              in  a  Judy array using a predictable yet sparse array index. By
2366              default, there are 131072 integers used in the Judy array.  This
2367              is  a useful method to exercise random access of memory and pro‐
2368              cessor cache.
2369
2370       --judy-ops N
2371              stop the judy workers after N  bogo  judy  operations  are  com‐
2372              pleted.
2373
2374       --judy-size N
2375              specify  the  size (number of 32 bit integers) in the Judy array
2376              to exercise.  Size can be from 1K to 4M 32 bit integers.
2377
2378       --kcmp N
2379              start N workers that use kcmp(2) to  compare  parent  and  child
2380              processes to determine if they share kernel resources. Supported
2381              only for Linux and requires CAP_SYS_PTRACE capability.
2382
2383       --kcmp-ops N
2384              stop kcmp workers after N bogo kcmp operations.
2385
2386       --key N
2387              start N workers that create and manipulate keys using add_key(2)
              and keyctl(2). As many keys are created as the per user limit
2389              allows and then the following keyctl commands are  exercised  on
2390              each  key:  KEYCTL_SET_TIMEOUT,  KEYCTL_DESCRIBE, KEYCTL_UPDATE,
2391              KEYCTL_READ, KEYCTL_CLEAR and KEYCTL_INVALIDATE.
2392
2393       --key-ops N
2394              stop key workers after N bogo key operations.
2395
2396       --kill N
2397              start N workers sending SIGUSR1 kill signals to a SIG_IGN signal
              handler in the stressor and a SIGUSR1 kill signal to a child
2399              stressor with a SIGUSR1 handler. Most of the process  time  will
2400              end up in kernel space.
2401
2402       --kill-ops N
2403              stop kill workers after N bogo kill operations.
2404
2405       --klog N
2406              start  N  workers  exercising  the kernel syslog(2) system call.
2407              This will attempt to read the kernel log with various sized read
2408              buffers. Linux only.
2409
2410       --klog-ops N
2411              stop klog workers after N syslog operations.
2412
2413       --kvm N
2414              start  N  workers that create, run and destroy a minimal virtual
2415              machine. The virtual machine reads,  increments  and  writes  to
2416              port 0x80 in a spin loop and the stressor handles the I/O trans‐
2417              actions. Currently for x86 and Linux only.
2418
2419       --kvm-ops N
2420              stop kvm stressors after N virtual machines have  been  created,
2421              run and destroyed.
2422
2423       --l1cache N
2424              start  N  workers that exercise the CPU level 1 cache with reads
2425              and writes. A cache aligned buffer that is  twice  the  level  1
2426              cache  size  is read and then written in level 1 cache set sized
2427              steps over each level 1 cache set. This is designed to  exercise
2428              cache  block evictions. The bogo-op count measures the number of
2429              million cache lines touched.  Where possible, the level 1  cache
2430              geometry  is  determined  from  the kernel, however, this is not
2431              possible on some architectures or kernels, so one  may  need  to
2432              specify  these  manually.  One  can specify 3 out of the 4 cache
              geometric parameters; these are as follows:
2434
2435       --l1cache-line-size N
2436              specify the level 1 cache line size (in bytes)
2437
2438       --l1cache-sets N
2439              specify the number of level 1 cache sets
2440
2441       --l1cache-size N
2442              specify the level 1 cache size (in bytes)
2443
2444       --l1cache-ways N
2445              specify the number of level 1 cache ways
2446
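              For example, a sketch that supplies three of the four cache
              geometry parameters by hand; the values below are typical of a
              32K, 8 way level 1 data cache and are purely illustrative:

                     # assumed geometry: 64 byte lines, 64 sets, 8 ways
                     stress-ng --l1cache 1 --l1cache-line-size 64 \
                               --l1cache-sets 64 --l1cache-ways 8
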
2447       --landlock N
2448              start N workers that exercise Linux 5.13 landlocking. A range of
2449              landlock_create_ruleset  flags  are  exercised  with a read only
2450              file rule to see if a directory can be accessed and a read-write
2451              file create can be blocked. Each ruleset attempt is exercised in
2452              a new child context and this is the limiting factor on the speed
2453              of the stressor.
2454
2455       --landlock-ops N
2456              stop the landlock stressors after N landlock ruleset bogo opera‐
2457              tions.
2458
2459       --lease N
2460              start N workers locking, unlocking and breaking leases  via  the
2461              fcntl(2)  F_SETLEASE operation. The parent processes continually
2462              lock and unlock a lease on a file while a user selectable number
2463              of  child  processes  open  the file with a non-blocking open to
2464              generate SIGIO lease breaking notifications to the parent.  This
2465              stressor  is  only  available if F_SETLEASE, F_WRLCK and F_UNLCK
2466              support is provided by fcntl(2).
2467
2468       --lease-ops N
2469              stop lease workers after N bogo operations.
2470
2471       --lease-breakers N
2472              start N lease breaker child processes per  lease  worker.   Nor‐
2473              mally one child is plenty to force many SIGIO lease breaking no‐
2474              tification signals to the parent, however,  this  option  allows
2475              one to specify more child processes if required.
2476
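              For example, to run one lease worker with eight lease breaking
              child processes until 10000 lease operations have completed,
              one might use:

                     # illustrative example: 1 worker, 8 lease breaker children
                     stress-ng --lease 1 --lease-breakers 8 --lease-ops 10000
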
2477       --link N
2478              start N workers creating and removing hardlinks.
2479
2480       --link-ops N
2481              stop link stress workers after N bogo operations.
2482
2483       --list N
2484              start  N workers that exercise list data structures. The default
              is to add, find and remove 5,000 64 bit integers in circleq
2486              (doubly  linked  circle queue), list (doubly linked list), slist
2487              (singly linked list), slistt (singly linked  list  using  tail),
2488              stailq  (singly linked tail queue) and tailq (doubly linked tail
2489              queue) lists. The intention of this stressor is to exercise mem‐
2490              ory and cache with the various list operations.
2491
2492       --list-ops N
              stop list stressors after N bogo ops. A bogo op covers adding,
              finding and removing all the items in the list(s).
2495
2496       --list-size N
2497              specify the size of the list, where N is the number  of  64  bit
2498              integers to be added into the list.
2499
2500       --list-method [ all | circleq | list | slist | stailq | tailq ]
2501              specify  the  list  to be used. By default, all the list methods
2502              are used (the 'all' option).
2503
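              For example, a sketch that exercises just the doubly linked
              tail queue with a smaller list for 60 seconds (using the
              general --timeout option):

                     # illustrative example: tailq method, 10000 integers
                     stress-ng --list 2 --list-method tailq \
                               --list-size 10000 --timeout 60s
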
2504       --loadavg N
2505              start N workers that attempt to  create  thousands  of  pthreads
2506              that run at the lowest nice priority to force very high load av‐
2507              erages. Linux systems will also perform some I/O writes as pend‐
2508              ing I/O is also factored into system load accounting.
2509
2510       --loadavg-ops N
2511              stop  loadavg  workers  after  N  bogo  scheduling yields by the
2512              pthreads have been reached.
2513
2514       --lockbus N
2515              start N workers that rapidly lock and increment 64 bytes of ran‐
2516              domly chosen memory from a 16MB mmap'd region (Intel x86 and ARM
2517              CPUs only).  This will cause cacheline misses  and  stalling  of
2518              CPUs.
2519
2520       --lockbus-ops N
2521              stop lockbus workers after N bogo operations.
2522
2523       --locka N
2524              start  N workers that randomly lock and unlock regions of a file
2525              using  the  POSIX  advisory  locking  mechanism  (see  fcntl(2),
2526              F_SETLK,  F_GETLK).  Each  worker creates a 1024 KB file and at‐
2527              tempts to hold a maximum of 1024 concurrent locks with  a  child
2528              process that also tries to hold 1024 concurrent locks. Old locks
              are unlocked on a first-in, first-out basis.
2530
2531       --locka-ops N
2532              stop locka workers after N bogo locka operations.
2533
2534       --lockf N
2535              start N workers that randomly lock and unlock regions of a  file
2536              using  the POSIX lockf(3) locking mechanism. Each worker creates
2537              a 64 KB file and attempts to hold a maximum of  1024  concurrent
2538              locks  with a child process that also tries to hold 1024 concur‐
              rent locks. Old locks are unlocked on a first-in, first-out ba‐
2540              sis.
2541
2542       --lockf-ops N
2543              stop lockf workers after N bogo lockf operations.
2544
2545       --lockf-nonblock
2546              instead  of  using  blocking  F_LOCK lockf(3) commands, use non-
2547              blocking F_TLOCK commands and re-try if the lock  failed.   This
2548              creates  extra  system  call overhead and CPU utilisation as the
2549              number of lockf workers increases and  should  increase  locking
2550              contention.
2551
2552       --lockofd N
2553              start  N workers that randomly lock and unlock regions of a file
2554              using the Linux  open  file  description  locks  (see  fcntl(2),
2555              F_OFD_SETLK,  F_OFD_GETLK).   Each worker creates a 1024 KB file
2556              and attempts to hold a maximum of 1024 concurrent locks  with  a
2557              child process that also tries to hold 1024 concurrent locks. Old
              locks are unlocked on a first-in, first-out basis.
2559
2560       --lockofd-ops N
2561              stop lockofd workers after N bogo lockofd operations.
2562
2563       --longjmp N
2564              start N workers  that  exercise  setjmp(3)/longjmp(3)  by  rapid
2565              looping on longjmp calls.
2566
2567       --longjmp-ops N
2568              stop  longjmp  stress workers after N bogo longjmp operations (1
2569              bogo op is 1000 longjmp calls).
2570
2571       --loop N
2572              start N workers that exercise the loopback control device.  This
2573              creates 2MB loopback devices, expands them to 4MB, performs some
2574              loopback status information get  and  set  operations  and  then
              destroys them. Linux only and requires CAP_SYS_ADMIN capability.
2576
2577       --loop-ops N
2578              stop after N bogo loopback creation/deletion operations.
2579
2580       --lsearch N
              start N workers that linearly search an unsorted array of 32 bit
2582              integers using lsearch(3). By default, there are  8192  elements
2583              in  the  array.   This is a useful method to exercise sequential
2584              access of memory and processor cache.
2585
2586       --lsearch-ops N
2587              stop the lsearch workers after N  bogo  lsearch  operations  are
2588              completed.
2589
2590       --lsearch-size N
2591              specify  the  size  (number  of 32 bit integers) in the array to
2592              lsearch. Size can be from 1K to 4M.
2593
2594       --madvise N
2595              start N workers that apply random madvise(2) advise settings  on
2596              pages of a 4MB file backed shared memory mapping.
2597
2598       --madvise-ops N
2599              stop madvise stressors after N bogo madvise operations.
2600
2601       --malloc N
2602              start N workers continuously calling malloc(3), calloc(3), real‐
2603              loc(3),  posix_memalign(3),  aligned_alloc(3),  memalign(3)  and
2604              free(3).  By  default,  up to 65536 allocations can be active at
2605              any point, but this can be altered with the --malloc-max option.
2606              Allocation,  reallocation  and freeing are chosen at random; 50%
              of the time memory is allocated (via one of malloc, calloc,
              realloc, posix_memalign, aligned_alloc or memalign) and 50% of the
2609              time allocations are free'd.  Allocation sizes are also  random,
2610              with  the  maximum  allocation  size  controlled  by  the --mal‐
2611              loc-bytes option, the default size being 64K.  The worker is re-
2612              started if it is killed by the out of memory (OOM) killer.
2613
2614       --malloc-bytes N
2615              maximum  per  allocation/reallocation size. Allocations are ran‐
2616              domly selected from 1 to N bytes. One can specify the size as  %
2617              of  total  available memory or in units of Bytes, KBytes, MBytes
2618              and GBytes using the suffix b, k,  m  or  g.   Large  allocation
2619              sizes  cause the memory allocator to use mmap(2) rather than ex‐
2620              panding the heap using brk(2).
2621
2622       --malloc-max N
2623              maximum number of active allocations  allowed.  Allocations  are
              chosen at random and placed in an allocation slot. Because there
              is about a 50%/50% split between allocation and freeing, typically
              half of the allocation slots are in use at any one time.
2627
2628       --malloc-ops N
              stop after N malloc bogo operations. One bogo operation relates
2630              to a successful malloc(3), calloc(3) or realloc(3).
2631
2632       --malloc-pthreads N
2633              specify number of malloc stressing concurrent pthreads  to  run.
2634              The  default is 0 (just one main process, no pthreads). This op‐
2635              tion will do nothing if pthreads are not supported.
2636
2637       --malloc-thresh N
2638              specify the threshold  where  malloc  uses  mmap(2)  instead  of
2639              sbrk(2)  to allocate more memory. This is only available on sys‐
2640              tems that provide the GNU C mallopt(3) tuning function.
2641
2642       --malloc-touch
2643              touch every allocated page to force pages  to  be  populated  in
2644              memory.  This will increase the memory pressure and exercise the
2645              virtual memory harder. By default the malloc stressor will  mad‐
2646              vise  pages into memory or use mincore to check for non-resident
2647              memory pages and try to force them into memory; this option  ag‐
2648              gressively forces pages to be memory resident.
2649
2650       --malloc-zerofree
2651              zero  allocated memory before free'ing. If we have a bad alloca‐
              tor then this may be useful in touching broken allocations and
2653              triggering  failures. Also useful for forcing extra cache/memory
2654              writes.
2655
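              For example, a sketch that runs four malloc workers, each with
              two extra pthreads, a 1 MB maximum allocation size and pages
              forced to be resident (using the general --timeout option):

                     # illustrative example; sizes accept b, k, m or g suffixes
                     stress-ng --malloc 4 --malloc-bytes 1m \
                               --malloc-pthreads 2 --malloc-touch --timeout 60s
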
2656       --matrix N
2657              start N workers that perform various matrix operations on float‐
2658              ing point values. Testing on 64 bit x86 hardware shows that this
2659              provides a good mix of memory, cache and floating  point  opera‐
2660              tions and is an excellent way to make a CPU run hot.
2661
2662              By default, this will exercise all the matrix stress methods one
2663              by one.  One can specify a specific matrix  stress  method  with
2664              the --matrix-method option.
2665
2666       --matrix-ops N
2667              stop matrix stress workers after N bogo operations.
2668
2669       --matrix-method method
2670              specify  a matrix stress method. Available matrix stress methods
2671              are described as follows:
2672
2673              Method     Description
2674              all        iterate over all the below matrix stress  meth‐
2675                         ods
2676              add        add two N × N matrices
2677              copy       copy one N × N matrix to another
2678              div        divide an N × N matrix by a scalar
2679              frobenius  Frobenius product of two N × N matrices
2680              hadamard   Hadamard product of two N × N matrices
2681              identity   create an N × N identity matrix
2682              mean       arithmetic mean of two N × N matrices
2683              mult       multiply an N × N matrix by a scalar
2684              negate     negate an N × N matrix
2685              prod       product of two N × N matrices
2686              sub        subtract  one  N  × N matrix from another N × N
2687                         matrix
2688              square     multiply an N × N matrix by itself
2689              trans      transpose an N × N matrix
2690              zero       zero an N × N matrix
2691
2692       --matrix-size N
2693              specify the N × N size of the matrices.  Smaller  values  result
              in a floating point compute throughput bound stressor, whereas
2695              large values result in a cache  and/or  memory  bandwidth  bound
2696              stressor.
2697
2698       --matrix-yx
2699              perform  matrix  operations  in order y by x rather than the de‐
2700              fault x by y. This is suboptimal ordering compared  to  the  de‐
2701              fault and will perform more data cache stalls.
2702
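              For example, a small matrix size keeps the product method
              compute bound rather than memory bound; a sketch using the
              general --timeout option:

                     # illustrative example: 2 workers, 128 × 128 products
                     stress-ng --matrix 2 --matrix-method prod \
                               --matrix-size 128 --timeout 60s
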
2703       --matrix-3d N
2704              start  N  workers  that  perform various 3D matrix operations on
2705              floating point values. Testing on 64 bit x86 hardware shows that
2706              this provides a good mix of memory, cache and floating point op‐
2707              erations and is an excellent way to make a CPU run hot.
2708
2709              By default, this will exercise all the 3D matrix stress  methods
2710              one  by one.  One can specify a specific 3D matrix stress method
2711              with the --matrix-3d-method option.
2712
2713       --matrix-3d-ops N
2714              stop the 3D matrix stress workers after N bogo operations.
2715
2716       --matrix-3d-method method
2717              specify a 3D matrix stress method. Available  3D  matrix  stress
2718              methods are described as follows:
2719
2720              Method     Description
2721              all        iterate  over all the below matrix stress meth‐
2722                         ods
2723              add        add two N × N × N matrices
2724              copy       copy one N × N × N matrix to another
2725              div        divide an N × N × N matrix by a scalar
2726              frobenius  Frobenius product of two N × N × N matrices
2727              hadamard   Hadamard product of two N × N × N matrices
2728              identity   create an N × N × N identity matrix
2729              mean       arithmetic mean of two N × N × N matrices
2730              mult       multiply an N × N × N matrix by a scalar
2731              negate     negate an N × N × N matrix
2732              sub        subtract one N × N × N matrix from another N  ×
2733                         N × N matrix
2734              trans      transpose an N × N × N matrix
2735              zero       zero an N × N × N matrix
2736
2737       --matrix-3d-size N
2738              specify  the N × N × N size of the matrices.  Smaller values re‐
2739              sult in a floating  point  compute  throughput  bound  stressor,
              whereas large values result in a cache and/or memory bandwidth
2741              bound stressor.
2742
2743       --matrix-3d-zyx
2744              perform matrix operations in order z by y by x rather  than  the
2745              default x by y by z. This is suboptimal ordering compared to the
2746              default and will perform more data cache stalls.
2747
2748       --mcontend N
2749              start N workers that produce memory contention  read/write  pat‐
2750              terns.  Each stressor runs with 5 threads that read and write to
2751              two different mappings of the  same  underlying  physical  page.
2752              Various caching operations are also exercised to cause sub-opti‐
2753              mal memory access patterns.  The threads  also  randomly  change
2754              CPU affinity to exercise CPU and memory migration stress.
2755
2756       --mcontend-ops N
2757              stop mcontend stressors after N bogo read/write operations.
2758
2759       --membarrier N
2760              start  N workers that exercise the membarrier system call (Linux
2761              only).
2762
2763       --membarrier-ops N
2764              stop membarrier stress workers after N  bogo  membarrier  opera‐
2765              tions.
2766
2767       --memcpy N
2768              start  N workers that copy 2MB of data from a shared region to a
2769              buffer using memcpy(3) and then move the data in the buffer with
2770              memmove(3)  with 3 different alignments. This will exercise pro‐
2771              cessor cache and system memory.
2772
2773       --memcpy-ops N
2774              stop memcpy stress workers after N bogo memcpy operations.
2775
2776       --memcpy-method [ all | libc | builtin | naive | naive_o0 .. naive_o3 ]
2777              specify a memcpy copying method. Available  memcpy  methods  are
2778              described as follows:
2779
2780              Method    Description
2781              all       use libc, builtin and naïve methods
2784              libc      use  libc memcpy and memmove functions, this is
2785                        the default
2786              builtin   use the compiler built in optimized memcpy  and
2787                        memmove functions
              naive     use naïve byte by byte copying and memory mov‐
                        ing built with default compiler optimization
                        flags
              naive_o0  use unoptimized naïve byte by byte copying and
                        memory moving
              naive_o1  use naïve byte by byte copying and memory
                        moving built with -O1 optimization
              naive_o2  use optimized naïve byte by byte copying and
                        memory moving built with -O2 optimization and
                        where possible use CPU specific optimizations
              naive_o3  use optimized naïve byte by byte copying and
                        memory moving built with -O3 optimization and
                        where possible use CPU specific optimizations
2801
2802       --memfd N
2803              start  N  workers  that  create  allocations of 1024 pages using
2804              memfd_create(2) and ftruncate(2) for allocation and  mmap(2)  to
2805              map  the  allocation  into  the  process  address space.  (Linux
2806              only).
2807
2808       --memfd-bytes N
2809              allocate N bytes per memfd stress worker, the default is  256MB.
              One can specify the size as % of total available memory or in
2811              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2812              m or g.
2813
2814       --memfd-fds N
2815              create N memfd file descriptors, the default is 256. One can se‐
              lect 8 to 4096 memfd file descriptors with this option.
2817
2818       --memfd-ops N
              stop after N memfd_create(2) bogo operations.
2820
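              For example, to run two memfd workers, each cycling through 512
              memfd file descriptors and 64 MB of mappings, one might use:

                     # illustrative example; size accepts b, k, m or g suffixes
                     stress-ng --memfd 2 --memfd-bytes 64m --memfd-fds 512 \
                               --memfd-ops 10000
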
2821       --memhotplug N
2822              start N workers that offline and online memory hotplug  regions.
2823              Linux only and requires CAP_SYS_ADMIN capabilities.
2824
2825       --memhotplug-ops N
2826              stop memhotplug stressors after N memory offline and online bogo
2827              operations.
2828
2829       --memrate N
2830              start N workers that exercise a buffer with 1024, 512, 256, 128,
2831              64,  32,  16 and 8 bit reads and writes. 1024, 512 and 256 reads
2832              and writes are available with  compilers  that  support  integer
              vectors. x86-64 CPUs that support uncached (non-temporal "nt")
              writes also exercise 128, 64 and 32 bit writes providing higher
              write rates than the normal cached writes. CPUs that support
              prefetching reads also exercise 64 bit prefetched "pf" reads. This
2837              memory  stressor allows one to also specify the maximum read and
2838              write rates. The stressors will run at maximum speed if no  read
2839              or write rates are specified.
2840
2841       --memrate-ops N
2842              stop after N bogo memrate operations.
2843
2844       --memrate-bytes N
2845              specify  the  size of the memory buffer being exercised. The de‐
2846              fault size is 256MB. One can specify the size in units of Bytes,
2847              KBytes, MBytes and GBytes using the suffix b, k, m or g.
2848
2849       --memrate-rd-mbs N
2850              specify the maximum allowed read rate in MB/sec. The actual read
2851              rate is dependent on scheduling jitter and memory accesses  from
2852              other running processes.
2853
2854       --memrate-wr-mbs N
              specify the maximum allowed write rate in MB/sec. The actual
2856              write rate is dependent on scheduling jitter and memory accesses
2857              from other running processes.
2858
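              For example, a sketch that caps both the read and write rates
              on a modest 64 MB buffer (using the general --timeout option):

                     # illustrative example: rates in MB/sec, 64 MB buffer
                     stress-ng --memrate 1 --memrate-bytes 64m \
                               --memrate-rd-mbs 1000 --memrate-wr-mbs 1000 \
                               --timeout 60s
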
2859       --memthrash N
2860              start  N workers that thrash and exercise a 16MB buffer in vari‐
2861              ous ways to try and trip thermal overrun.   Each  stressor  will
2862              start  1  or  more  threads.  The number of threads is chosen so
2863              that there will be at least 1 thread per CPU. Note that the  op‐
2864              timal  choice  for  N is a value that divides into the number of
2865              CPUs.
2866
2867       --memthrash-ops N
2868              stop after N memthrash bogo operations.
2869
2870       --memthrash-method method
2871              specify a memthrash stress method.  Available  memthrash  stress
2872              methods are described as follows:
2873
2874              Method     Description
2875              all        iterate over all the below memthrash methods
2876              chunk1     memset 1 byte chunks of random data into random
2877                         locations
2878              chunk8     memset 8 byte chunks of random data into random
2879                         locations
2880              chunk64    memset  64 byte chunks of random data into ran‐
2881                         dom locations
2882              chunk256   memset 256 byte chunks of random data into ran‐
2883                         dom locations
2884              chunkpage  memset  page  size  chunks  of random data into
2885                         random locations
2886              copy128    copy 128 byte chunks from chunk N + 1 to  chunk
2887                         N  with streaming reads and writes with 128 bit
2888                         memory accesses where possible.
2889              flip       flip (invert) all bits in random locations
2890              flush      flush cache line in random locations
              lock       lock randomly chosen locations (Intel x86 and
2892                         ARM CPUs only)
2893              matrix     treat  memory as a 2 × 2 matrix and swap random
2894                         elements
2895              memmove    copy all the data in buffer to the next  memory
2896                         location
2897              memset     memset the memory with random data
2898              memset64   memset the memory with a random 64 bit value in
2899                         64 byte chunks  using  non-temporal  stores  if
2900                         possible or normal stores as a fallback
2901              mfence     stores with write serialization
2902              prefetch   prefetch data at random memory locations
2903              random     randomly  run  any of the memthrash methods ex‐
2904                         cept for 'random' and 'all'
2905              spinread   spin loop read the same  random  location  2↑19
2906                         times
2907              spinwrite  spin  loop  write the same random location 2↑19
2908                         times
2909              swap       step through memory swapping bytes in steps  of
2910                         65 and 129 byte strides
2911              swap64     work  through  memory swapping adjacent 64 byte
2912                         chunks
2913
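              For example, a sketch that runs two memthrash workers using
              just the cache flush method for 60 seconds (using the general
              --timeout option):

                     # illustrative example: 2 workers, flush method only
                     stress-ng --memthrash 2 --memthrash-method flush \
                               --timeout 60s
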
2914       --mergesort N
2915              start N workers that sort 32 bit integers using the  BSD  merge‐
2916              sort.
2917
2918       --mergesort-ops N
2919              stop mergesort stress workers after N bogo mergesorts.
2920
2921       --mergesort-size N
2922              specify  number  of  32  bit integers to sort, default is 262144
2923              (256 × 1024).
2924
2925       --mincore N
2926              start N workers that walk through all of memory 1 page at a time
              checking if the page is mapped and also resident in memory using
2928              mincore(2). It also maps and unmaps a page to check if the  page
2929              is mapped or not using mincore(2).
2930
       --mincore-ops N
              stop after N mincore bogo operations. One mincore bogo op is
              equivalent to 300 mincore(2) calls.

       --mincore-random
              instead of walking through pages sequentially, select pages at
              random. The chosen address is iterated over by shifting it
              right one place and checked by mincore until the address is
              less than or equal to the page size.
2938
2939       --misaligned N
              start N workers that perform misaligned reads and writes. By de‐
              fault, this will exercise 128 bit misaligned reads and writes in
2942              8 × 16 bits, 4 × 32 bits, 2 × 64 bits and 1 × 128  bits  at  the
2943              start of a page boundary, at the end of a page boundary and over
              a cache boundary. Misaligned reads and writes operate at 1 byte
2945              offset  from the natural alignment of the data type. On some ar‐
2946              chitectures this can cause SIGBUS, SIGILL or SIGSEGV, these  are
2947              handled  and the misaligned stressor method causing the error is
2948              disabled.
2949
2950       --misaligned-ops N
              stop after N misaligned bogo operations. A misaligned bogo op is
2952              equivalent to 65536 × 128 bit reads or writes.
2953
2954       --misaligned-method method
2955              Available misaligned stress methods are described as follows:
2956
2957              Method       Description
2958              all          iterate over all the following misaligned methods
2959              int16rd      8 × 16 bit integer reads
2960              int16wr      8 × 16 bit integer writes
2961              int16inc     8 × 16 bit integer increments
2962              int16atomic  8 × 16 bit atomic integer increments
2963              int32rd      4 × 32 bit integer reads
2964              int32wr      4 × 32 bit integer writes
2965              int32wtnt    4 × 32 bit non-temporal stores (x86 only)
2966              int32inc     4 × 32 bit integer increments
2967              int32atomic  4 × 32 bit atomic integer increments
2968              int64rd      2 × 64 bit integer reads
2969              int64wr      2 × 64 bit integer writes
              int64wtnt    2 × 64 bit non-temporal stores (x86 only)
2971              int64inc     2 × 64 bit integer increments
2972              int64atomic  2 × 64 bit atomic integer increments
2973              int128rd     1 × 128 bit integer reads
2974              int128wr     1 × 128 bit integer writes
2975              int128inc    1 × 128 bit integer increments
2976              int128atomic 1 × 128 bit atomic integer increments
2977
2978       Note  that  some of these options (128 bit integer and/or atomic opera‐
2979       tions) may not be available on some systems.
2980
2981       --mknod N
2982              start N workers that create and remove fifos,  empty  files  and
2983              named sockets using mknod and unlink.
2984
2985       --mknod-ops N
              stop mknod stress workers after N bogo mknod operations.
2987
2988       --mlock N
2989              start  N  workers that lock and unlock memory mapped pages using
2990              mlock(2), munlock(2), mlockall(2)  and  munlockall(2).  This  is
2991              achieved by the mapping of three contiguous pages and then lock‐
2992              ing the second page, hence  ensuring  non-contiguous  pages  are
              locked. This is then repeated until the maximum allowed mlocks
2994              or a maximum of 262144 mappings are made.  Next, all future map‐
2995              pings  are  mlocked and the worker attempts to map 262144 pages,
2996              then all pages are munlocked and the pages are unmapped.
2997
2998       --mlock-ops N
2999              stop after N mlock bogo operations.
3000
3001       --mlockmany N
3002              start N workers that fork off a default of 1024 child  processes
3003              in  total; each child will attempt to anonymously mmap and mlock
3004              the maximum allowed mlockable memory size.  The stress test  at‐
3005              tempts to avoid swapping by tracking low memory and swap alloca‐
3006              tions (but some swapping may occur).  Once  either  the  maximum
              number of child processes is reached or all mlockable in-core mem‐
3008              ory is locked then child processes are  killed  and  the  stress
3009              test is repeated.
3010
3011       --mlockmany-ops N
3012              stop after N mlockmany (mmap and mlock) operations.
3013
3014       --mlockmany-procs N
3015              set  the  number  of child processes to create per stressor. The
3016              default is to start a maximum of 1024 child processes  in  total
3017              across  all  the  stressors. This option allows the setting of N
3018              child processes per stressor.
3019
3020       --mmap N
3021              start N workers  continuously  calling  mmap(2)/munmap(2).   The
3022              initial   mapping   is   a   large   chunk  (size  specified  by
3023              --mmap-bytes) followed  by  pseudo-random  4K  unmappings,  then
3024              pseudo-random  4K mappings, and then linear 4K unmappings.  Note
3025              that this can cause systems to trip the  kernel  OOM  killer  on
              Linux systems if not enough physical memory and swap is
3027              available.  The MAP_POPULATE option is used  to  populate  pages
3028              into memory on systems that support this.  By default, anonymous
3029              mappings are used, however, the --mmap-file and --mmap-async op‐
3030              tions allow one to perform file based mappings if desired.
3031
3032       --mmap-ops N
3033              stop mmap stress workers after N bogo operations.
3034
3035       --mmap-async
3036              enable  file based memory mapping and use asynchronous msync'ing
3037              on each page, see --mmap-file.
3038
3039       --mmap-bytes N
3040              allocate N bytes per mmap stress worker, the default  is  256MB.
3041              One  can  specify  the size as % of total available memory or in
3042              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3043              m or g.
3044
3045       --mmap-file
3046              enable  file based memory mapping and by default use synchronous
3047              msync'ing on each page.
3048
3049       --mmap-mmap2
3050              use mmap2 for 4K page aligned offsets  if  mmap2  is  available,
3051              otherwise fall back to mmap.
3052
3053       --mmap-mprotect
3054              change  protection settings on each page of memory.  Each time a
3055              page or a group of pages are mapped or remapped then this option
3056              will  make the pages read-only, write-only, exec-only, and read-
3057              write.
3058
3059       --mmap-odirect
3060              enable file based memory mapping and use O_DIRECT direct I/O.
3061
3062       --mmap-osync
              enable file based memory mapping and use O_SYNC synchronous I/O
3064              integrity completion.
3065
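              For example, a sketch of a file based mapping run with per-page
              mprotect changes and 128 MB per worker (using the general
              --timeout option):

                     # illustrative example; size accepts b, k, m or g suffixes
                     stress-ng --mmap 4 --mmap-bytes 128m --mmap-file \
                               --mmap-mprotect --timeout 60s
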
3066       --mmapaddr N
3067              start  N  workers that memory map pages at a random memory loca‐
3068              tion that is not already mapped.  On 64 bit machines the  random
              address is a randomly chosen 32 bit or 64 bit address. If the map‐
3070              ping works a second page is memory mapped from the first  mapped
3071              address.  The  stressor  exercises mmap/munmap, mincore and seg‐
3072              fault handling.
3073
3074       --mmapaddr-ops N
3075              stop after N random address mmap bogo operations.
3076
3077       --mmapfork N
3078              start N workers that each fork off 32 child processes,  each  of
3079              which tries to allocate some of the free memory left in the sys‐
              tem (while trying to avoid any swapping). The child processes
3081              then hint that the allocation will be needed with madvise(2) and
3082              then memset it to zero and hint that it is no longer needed with
3083              madvise before exiting.  This produces significant amounts of VM
              activity, a lot of cache misses and minimal swapping.
3085
3086       --mmapfork-ops N
3087              stop after N mmapfork bogo operations.
3088
3089       --mmapfixed N
3090              start N workers that perform fixed address allocations from  the
3091              top  virtual address down to 128K.  The allocated sizes are from
              1 page to 8 pages and various random mmap flags are used:
              MAP_SHARED/MAP_PRIVATE, MAP_LOCKED, MAP_NORESERVE and
              MAP_POPULATE.
3094              If successfully map'd then the allocation is remap'd first to  a
              large range of addresses based on a random start and finally to an
3096              address that is several pages higher  in  memory.  Mappings  and
3097              remappings  are  madvised with random madvise options to further
3098              exercise the mappings.
3099
3100       --mmapfixed-ops N
3101              stop after N mmapfixed memory mapping bogo operations.
3102
3103       --mmaphuge N
3104              start N workers that attempt to mmap a set  of  huge  pages  and
3105              large huge page sized mappings. Successful mappings are madvised
3106              with MADV_NOHUGEPAGE and MADV_HUGEPAGE settings and then  1/64th
3107              of the normal small page size pages are touched. Finally, an at‐
3108              tempt to unmap a small page size page at the end of the  mapping
3109              is  made  (these may fail on huge pages) before the set of pages
3110              are unmapped. By default 8192 mappings are attempted  per  round
3111              of mappings or until swapping is detected.
3112
3113       --mmaphuge-ops N
3114              stop after N mmaphuge bogo operations
3115
3116       --mmaphuge-mmaps N
3117              set the number of huge page mappings to attempt in each round of
3118              mappings. The default is 8192 mappings.
3119
3120       --mmapmany N
3121              start N workers that attempt to create the maximum allowed  per-
3122              process  memory mappings. This is achieved by mapping 3 contigu‐
3123              ous pages and then unmapping the middle page hence splitting the
3124              mapping  into  two.  This is then repeated until the maximum al‐
3125              lowed mappings or a maximum of 262144 mappings are made.
3126
3127       --mmapmany-ops N
3128              stop after N mmapmany bogo operations
3129
3130       --mprotect N
3131              start N workers that exercise changing page protection  settings
3132              and access memory after each change. 8 processes per worker con‐
              tend with each other changing page protection settings on a
3134              shared memory region of just a few pages to cause TLB flushes. A
3135              read and write to the pages can cause  segmentation  faults  and
3136              these are handled by the stressor. All combinations of page pro‐
3137              tection settings are exercised including invalid combinations.
3138
3139       --mprotect-ops N
3140              stop after N mprotect calls.
3141
3142       --mq N start N sender and receiver processes that continually send  and
3143              receive messages using POSIX message queues. (Linux only).
3144
3145       --mq-ops N
3146              stop after N bogo POSIX message send operations completed.
3147
3148       --mq-size N
3149              specify size of POSIX message queue. The default size is 10 mes‐
              sages and on most Linux systems this is the maximum allowed size
3151              for  normal users. If the given size is greater than the allowed
3152              message queue size then a warning is issued and the maximum  al‐
3153              lowed size is used instead.
3154
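              For example, to run two POSIX message queue workers with the
              default queue size until one million messages have been sent,
              one might use:

                     # illustrative example: 2 workers, default 10 deep queues
                     stress-ng --mq 2 --mq-size 10 --mq-ops 1000000
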
3155       --mremap N
3156              start N workers continuously calling mmap(2), mremap(2) and mun‐
3157              map(2).  The initial anonymous mapping is a  large  chunk  (size
3158              specified by --mremap-bytes) and then iteratively halved in size
3159              by remapping all the way down to a page size and then back up to
3160              the original size.  This worker is only available for Linux.
3161
3162       --mremap-ops N
3163              stop mremap stress workers after N bogo operations.
3164
3165       --mremap-bytes N
3166              initially  allocate N bytes per remap stress worker, the default
3167              is 256MB. One can specify the size in units  of  Bytes,  KBytes,
3168              MBytes and GBytes using the suffix b, k, m or g.
3169
3170       --mremap-mlock
3171              attempt  to  mlock  remapped  pages into memory prohibiting them
3172              from being paged out.  This is a no-op if mlock(2) is not avail‐
3173              able.
3174
3175       --msg N
3176              start  N sender and receiver processes that continually send and
3177              receive messages using System V message IPC.
3178
3179       --msg-ops N
3180              stop after N bogo message send operations completed.
3181
3182       --msg-types N
              select the number of message types (mtype) to use. By default,
              msgsnd sends messages with an mtype of 1; this option allows one
              to send message types in the range 1..N to exercise the message
3186              queue receive ordering. This will also impact throughput perfor‐
3187              mance.
3188
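              For example, a sketch that exercises System V message receive
              ordering with four different message types for 60 seconds
              (using the general --timeout option):

                     # illustrative example: 2 workers, mtypes 1..4
                     stress-ng --msg 2 --msg-types 4 --timeout 60s
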
3189       --msync N
3190              start N stressors that msync data from a file backed memory map‐
3191              ping  from  memory back to the file and msync modified data from
3192              the file back to the mapped memory. This exercises the  msync(2)
3193              MS_SYNC and MS_INVALIDATE sync operations.
3194
3195       --msync-ops N
3196              stop after N msync bogo operations completed.
3197
3198       --msync-bytes N
3199              allocate  N  bytes  for  the  memory mapped file, the default is
3200              256MB. One can specify the size as % of total  available  memory
3201              or in units of Bytes, KBytes, MBytes and GBytes using the suffix
3202              b, k, m or g.
3203
3204       --msyncmany N
              start N stressors that create up to 32768 memory mappings of
              the same page of a temporary file, change the first 32 bits
              in a page and msync the data back to the file. The other
              32767 pages are ex‐
3208              amined to see if the 32 bit check value is msync'd back to these
3209              pages.
3210
3211       --msyncmany-ops N
3212              stop after N msync calls in the  msyncmany  stressors  are  com‐
3213              pleted.
3214
3215       --munmap N
3216              start N stressors that exercise unmapping of shared non-exe‐
3217              cutable mapped regions of child processes (Linux only). The un‐
3218              mappings unmap shared memory regions page by page with a prime
3219              sized stride that creates many temporary mapping holes. Once the
3220              unmappings are complete the child will exit and a new one is
3221              started. Note that this may trigger segmentation faults in the
3222              child process; these are handled where possible by forcing the
3223              child process to call _exit(2).
3224
3225       --munmap-ops N
3226              stop after N page unmappings.
3227
3228       --mutex N
3229              start N stressors that exercise pthread mutex  locking  and  un‐
3230              locking. If run with enough privilege then the FIFO scheduler is
3231              used and a random priority between 0 and 80% of the maximum FIFO
3232              priority level is selected for the locking operation.  The mini‐
3233              mum FIFO priority level is selected for the critical mutex  sec‐
3234              tion  and unlocking operation to exercise random inverted prior‐
3235              ity scheduling.
3236
3237       --mutex-ops N
3238              stop after N bogo mutex lock/unlock operations.
3239
3240       --mutex-affinity
3241              enable random CPU affinity changing between mutex lock  and  un‐
3242              lock.
3243
3244       --mutex-procs N
3245              By default 2 threads are used for locking/unlocking on a single
3246              mutex. This option allows the number of concurrent threads to be
3247              changed, in the range 2 to 64.
3248
3249       --nanosleep N
3250              start  N  workers that each run 256 pthreads that call nanosleep
3251              with random delays from 1 to 2↑18 nanoseconds. This should exer‐
3252              cise the high resolution timers and scheduler.
3253
3254       --nanosleep-ops N
3255              stop the nanosleep stressor after N bogo nanosleep operations.
3256
3257       --netdev N
3258              start  N  workers that exercise various netdevice ioctl commands
3259              across all the available network devices. The  ioctls  exercised
3260              by  this  stressor  are  as  follows: SIOCGIFCONF, SIOCGIFINDEX,
3261              SIOCGIFNAME, SIOCGIFFLAGS, SIOCGIFADDR, SIOCGIFNETMASK, SIOCGIF‐
3262              METRIC, SIOCGIFMTU, SIOCGIFHWADDR, SIOCGIFMAP and SIOCGIFTXQLEN.
3263              See netdevice(7) for more details of these ioctl commands.
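
                  As an example of this style of ioctl (illustrative, not the
                  stressor code), querying the MTU of the loopback interface:

                     #include <net/if.h>
                     #include <stdio.h>
                     #include <string.h>
                     #include <sys/ioctl.h>
                     #include <sys/socket.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             int fd = socket(AF_INET, SOCK_DGRAM, 0);
                             struct ifreq ifr;

                             if (fd < 0)
                                     return 1;
                             memset(&ifr, 0, sizeof(ifr));
                             strncpy(ifr.ifr_name, "lo", IFNAMSIZ - 1);
                             if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
                                     printf("lo MTU: %d\n", ifr.ifr_mtu);
                             close(fd);
                             return 0;
                     }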
3264
3265       --netdev-ops N
3266              stop after N netdev bogo operations completed.
3267
3268       --netlink-proc N
3269              start  N  workers  that  spawn  child  processes   and   monitor
3270              fork/exec/exit  process  events  via the proc netlink connector.
3271              Each event received is counted as a bogo op. This  stressor  can
3272              only be run on Linux and requires CAP_NET_ADMIN capability.
3273
3274       --netlink-proc-ops N
3275              stop the proc netlink connector stressors after N bogo ops.
3276
3277       --netlink-task N
3278              start  N  workers  that  collect task statistics via the netlink
3279              taskstats interface.  This stressor can only be run on Linux and
3280              requires CAP_NET_ADMIN capability.
3281
3282       --netlink-task-ops N
3283              stop the taskstats netlink connector stressors after N bogo ops.
3284
3285       --nice N
3286              start  N  cpu consuming workers that exercise the available nice
3287              levels. Each iteration forks  off  a  child  process  that  runs
3288              through all the nice levels running a busy loop for 0.1 sec‐
3289              onds per level and then exits.
3290
3291       --nice-ops N
3292              stop after N bogo nice loops.
3293
3294       --nop N
3295              start N workers that consume cpu cycles issuing  no-op  instruc‐
3296              tions.  This stressor is available if the assembler supports the
3297              "nop" instruction.
3298
3299       --nop-ops N
3300              stop nop workers after N no-op bogo operations. Each bogo-opera‐
3301              tion is equivalent to 256 loops of 256 no-op instructions.
3302
3303       --nop-instr INSTR
3304              use alternative nop instruction INSTR. For x86 CPUs INSTR can be
3305              one of nop, pause, nop2 (2 byte nop) through to nop11  (11  byte
3306              nop).  For ARM CPUs, INSTR can be one of nop or yield. For PPC64
3307              CPUs, INSTR can be one of nop, mdoio, mdoom or yield.  For  S390
3308              CPUs, INSTR can be one of nop or nopr. For other processors, IN‐
3309              STR is only nop. The random INSTR option selects a random mix of
3310              the available nop instructions. If the chosen INSTR generates a
3311              SIGILL signal, then the stressor falls back to the vanilla nop
3312              instruction.
3313
3314       --null N
3315              start N workers writing to /dev/null.
3316
3317       --null-ops N
3318              stop  null  stress  workers  after N /dev/null bogo write opera‐
3319              tions.
3320
3321       --numa N
3322              start N workers that migrate stressors and a 4MB  memory  mapped
3323              buffer  around  all  the  available  NUMA  nodes.  This uses mi‐
3324              grate_pages(2)  to  move  the   stressors   and   mbind(2)   and
3325              move_pages(2) to move the pages of the mapped buffer. After each
3326              move, the buffer is written to force activity over the bus which
3327              results in cache misses. This test will only run on hardware with
3328              NUMA enabled and more than 1 NUMA node.
3329
3330       --numa-ops N
3331              stop NUMA stress workers after N bogo NUMA operations.
3332
3333       --oom-pipe N
3334              start N workers that create as many pipes as allowed  and  exer‐
3335              cise  expanding  and  shrinking  the pipes from the largest pipe
3336              size down to a page size. Data is written  into  the  pipes  and
3337              read  out  again to fill the pipe buffers. With the --aggressive
3338              mode enabled the data is not read out when the pipes are shrunk,
3339              causing  the kernel to OOM processes aggressively.  Running many
3340              instances of this stressor will force the kernel to OOM processes
3341              due to the many large pipe buffer allocations.
3342
3343       --oom-pipe-ops N
3344              stop after N bogo pipe expand/shrink operations.
3345
3346       --opcode N
3347              start  N  workers  that  fork off children that execute randomly
3348              generated executable code.  This will generate  issues  such  as
3349              generated executable code. This will generate issues such as
3350              illegal instructions, bus errors, segmentation faults, traps and
3351              floating point errors, which are handled gracefully by the stres‐
3352
3353       --opcode-ops N
3354              stop after N attempts to execute illegal code.
3355
3356       --opcode-method [ inc | mixed | random | text ]
3357              select the opcode generation method. By default, random bytes
3358              are used to generate the executable code. This option allows one
3359              to select one of the following methods:
3360
3361              Method        Description
3362              inc           use incrementing 32 bit opcode patterns
3363                            from 0x00000000 to 0xffffffff inclusive.
3364              mixed         use a mix of incrementing 32 bit opcode
3365                            patterns  and random 32 bit opcode pat‐
3366                            terns that are also  inverted,  encoded
3367                            with gray encoding and bit reversed.
3368              random        generate  opcodes  using  random  bytes
3369                            from a mwc random generator.
3370              text          copies random chunks of code  from  the
3371                            stress-ng  text  segment  and  randomly
3372                            flips single bits in a random choice of
3373                            1/8th of the code.
3374
3375       -o N, --open N
3376              start  N  workers  that perform open(2) and then close(2) opera‐
3377              tions on /dev/zero. The maximum opens at one time is system  de‐
3378              fined,  so  the  test will run up to this maximum, or 65536 open
3379              file descriptors, whichever comes first.
3380
3381       --open-ops N
3382              stop the open stress workers after N bogo open operations.
3383
3384       --open-fd
3385              run a child process that scans  /proc/$PID/fd  and  attempts  to
3386              open the files that the stressor has opened. This exercises rac‐
3387              ing open/close operations on the proc interface.
3388
3389       --open-max N
3390              try to open a maximum of N files (or  up  to  the  maximum  per-
3391              process  open file system limit). The value can be the number of
3392              files or a percentage of the maximum per-process open file  sys‐
3393              tem limit.
3394
3395       --pageswap N
3396              start  N  workers that exercise page swap in and swap out. Pages
3397              are allocated and paged out using madvise MADV_PAGEOUT. Once the
3398              maximum per-process number of mmaps is reached or 65536 pages
3399              are allocated, the pages are read to page them back in and un‐
3400              mapped in reverse mapping order.
3401
3402       --pageswap-ops N
3403              stop after N page allocation bogo operations.
3404
3405       --pci N
3406              exercise  PCI  sysfs  by  running  N workers that read data (and
3407              mmap/unmap PCI config or PCI resource files). Linux  only.  Run‐
3408              ning as root will allow config and resource mmappings to be read
3409              and allows the PCI I/O mapping to be exercised.
3410
3411       --pci-ops N
3412              stop pci stress workers after N PCI subdirectory exercising  op‐
3413              erations.
3414
3415       --personality N
3416              start  N workers that attempt to set personality and get all the
3417              available personality types (process execution domain types) via
3418              the personality(2) system call. (Linux only).
3419
3420       --personality-ops N
3421              stop  personality stress workers after N bogo personality opera‐
3422              tions.
3423
3424       --peterson N
3425              start N workers that exercise mutual exclusion between two pro‐
3426              cesses using shared memory with the Peterson algorithm. Where
3427              possible this uses memory fencing and falls back to using GCC
3428              __sync_synchronize if memory fences are not available. The stres‐
3429              sors contain simple mutex and memory coherency sanity checks.
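
                  For background, a simplified two-thread sketch of the Peterson
                  algorithm using the GCC __sync_synchronize barrier mentioned
                  above (compile with -pthread; the stressor itself uses two
                  processes and shared memory):

                     #include <pthread.h>
                     #include <stdio.h>

                     static volatile int flag[2], turn;
                     static int counter;

                     static void lock(int me)
                     {
                             int other = 1 - me;

                             flag[me] = 1;
                             turn = other;
                             __sync_synchronize();   /* full memory barrier */
                             while (flag[other] && turn == other)
                                     ;               /* busy wait */
                     }

                     static void unlock(int me)
                     {
                             __sync_synchronize();
                             flag[me] = 0;
                     }

                     static void *worker(void *arg)
                     {
                             int me = *(int *)arg;
                             int i;

                             for (i = 0; i < 100000; i++) {
                                     lock(me);
                                     counter++;      /* critical section */
                                     unlock(me);
                             }
                             return NULL;
                     }

                     int main(void)
                     {
                             pthread_t t[2];
                             int id[2] = { 0, 1 };

                             pthread_create(&t[0], NULL, worker, &id[0]);
                             pthread_create(&t[1], NULL, worker, &id[1]);
                             pthread_join(t[0], NULL);
                             pthread_join(t[1], NULL);
                             printf("counter = %d (expect 200000)\n", counter);
                             return 0;
                     }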
3430
3431       --peterson-ops N
3432              stop peterson workers after N mutex operations.
3433
3434       --physpage N
3435              start N workers that use /proc/self/pagemap and /proc/kpagecount
3436              to  determine  the  physical  page  and  page count of a virtual
3437              mapped page and a page that is shared among all  the  stressors.
3438              Linux only and requires the CAP_SYS_ADMIN capability.
3439
3440       --physpage-ops N
3441              stop  physpage  stress  workers  after  N  bogo physical address
3442              lookups.
3443
3444       --pidfd N
3445              start  N  workers  that  exercise   signal   sending   via   the
3446              pidfd_send_signal system call.  This stressor creates child pro‐
3447              cesses and checks if they exist and can  be  stopped,  restarted
3448              and killed using the pidfd_send_signal system call.
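
                  A minimal sketch of pidfd based signalling using raw system
                  call numbers (assumes Linux 5.3+ kernel headers; libc wrappers
                  may not be available):

                     #define _GNU_SOURCE
                     #include <signal.h>
                     #include <sys/syscall.h>
                     #include <sys/wait.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             pid_t pid = fork();
                             int pidfd;

                             if (pid == 0) {
                                     pause();        /* child waits for a signal */
                                     _exit(0);
                             }
                             pidfd = syscall(SYS_pidfd_open, pid, 0);
                             if (pidfd < 0)
                                     return 1;
                             /* send SIGTERM via the pidfd rather than kill(2) */
                             syscall(SYS_pidfd_send_signal, pidfd, SIGTERM,
                                     NULL, 0);
                             waitpid(pid, NULL, 0);
                             close(pidfd);
                             return 0;
                     }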
3449
3450       --pidfd-ops N
3451              stop pidfd stress workers after N child processes have been cre‐
3452              ated, tested and killed with pidfd_send_signal.
3453
3454       --ping-sock N
3455              start N workers that send small randomized ICMP messages to  the
3456              localhost  across  a range of ports (1024..65535) using a "ping"
3457              socket with an AF_INET domain, a SOCK_DGRAM socket type  and  an
3458              IPPROTO_ICMP protocol.
3459
3460       --ping-sock-ops N
3461              stop  the  ping-sock  stress  workers  after N ICMP messages are
3462              sent.
3463
3464       -p N, --pipe N
3465              start N workers that perform large pipe writes and reads to  ex‐
3466              ercise  pipe I/O.  This exercises memory write and reads as well
3467              as context switching.  Each worker has two processes,  a  reader
3468              and a writer.
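
                  A bare-bones reader/writer pair over a pipe, for illustration
                  only (not the stressor code):

                     #include <string.h>
                     #include <sys/wait.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             int fds[2], i;
                             char buf[4096];

                             if (pipe(fds) < 0)
                                     return 1;
                             if (fork() == 0) {
                                     /* child: read until the pipe is closed */
                                     close(fds[1]);
                                     while (read(fds[0], buf, sizeof(buf)) > 0)
                                             ;
                                     _exit(0);
                             }
                             /* parent: write a number of buffers then close */
                             close(fds[0]);
                             memset(buf, 0x5a, sizeof(buf));
                             for (i = 0; i < 1024; i++)
                                     write(fds[1], buf, sizeof(buf));
                             close(fds[1]);
                             wait(NULL);
                             return 0;
                     }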
3469
3470       --pipe-ops N
3471              stop pipe stress workers after N bogo pipe write operations.
3472
3473       --pipe-data-size N
3474              specifies  the  size  in  bytes of each write to the pipe (range
3475              from 4 bytes to 4096 bytes). Setting  a  small  data  size  will
3476              cause more writes to be buffered in the pipe, hence reducing the
3477              context switch rate between the pipe writer and pipe reader pro‐
3478              cesses. Default size is the page size.
3479
3480       --pipe-size N
3481              specifies  the  size of the pipe in bytes (for systems that sup‐
3482              port the F_SETPIPE_SZ fcntl() command).  Setting  a  small  pipe
3483              size  will  cause  the  pipe  to fill and block more frequently,
3484              hence increasing the context switch rate between the pipe writer
3485              and the pipe reader processes. Default size is 512 bytes.
3486
3487       --pipeherd N
3488              start  N  workers  that  pass a 64 bit token counter to/from 100
3489              child processes over a shared pipe. This forces a  high  context
3490              switch  rate  and  can trigger a "thundering herd" of wakeups on
3491              processes that are blocked on pipe waits.
3492
3493       --pipeherd-ops N
3494              stop pipe stress workers after N bogo pipe write operations.
3495
3496       --pipeherd-yield
3497              force a scheduling yield after each write,  this  increases  the
3498              context switch rate.
3499
3500       --pkey N
3501              start N workers that change memory protection using a protection
3502              key (pkey) and the pkey_mprotect call (Linux  only).  This  will
3503              try  to  allocate  a  pkey and use this for the page protection,
3504              however, if this fails then the special pkey  -1  will  be  used
3505              (and the kernel will use the normal mprotect mechanism instead).
3506              Various page protection mixes of  read/write/exec/none  will  be
3507              cycled through on randomly chosen pre-allocated pages.
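
                  A minimal protection key sketch using the glibc 2.27+ wrappers,
                  falling back to pkey -1 as described above (illustrative only):

                     #define _GNU_SOURCE
                     #include <sys/mman.h>

                     int main(void)
                     {
                             void *page = mmap(NULL, 4096,
                                     PROT_READ | PROT_WRITE,
                                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
                             int pkey = pkey_alloc(0, 0);

                             if (page == MAP_FAILED)
                                     return 1;
                             if (pkey < 0)
                                     pkey = -1;  /* plain mprotect behaviour */
                             /* cycle page protections tagged with the pkey */
                             pkey_mprotect(page, 4096, PROT_READ, pkey);
                             pkey_mprotect(page, 4096,
                                     PROT_READ | PROT_WRITE, pkey);
                             if (pkey != -1)
                                     pkey_free(pkey);
                             munmap(page, 4096);
                             return 0;
                     }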
3508
3509       --pkey-ops N
3510              stop after N pkey_mprotect page protection cycles.
3511
3512       --plugin N
3513              start N workers that run user provided stressor functions loaded
3514              from a shared library. The shared library  can  contain  one  or
3515              more  stressor functions prefixed with stress_ in their name. By
3516              default the plugin stressor will  find  all  functions  prefixed
3517              with  stress_  in  their name and exercise these one by one in a
3518              round-robin loop, but a specific stressor can be selected  using
3519              the  --plugin-method option.  The stressor function takes no pa‐
3520              rameters and returns 0 for success and non-zero for failure (and
3521              will  terminate the plugin stressor). Each time a stressor func‐
3522              tion is executed the bogo-op counter is incremented by one.  The
3523              following example performs 10,000 nop instructions per bogo-op:
3524
3525                 int stress_example(void)
3526                 {
3527                         int i;
3528
3529                         for (i = 0; i < 10000; i++) {
3530                                 __asm__ __volatile__("nop");
3531                         }
3532                         return 0;  /* Success */
3533                 }
3534
3535              and compile the source into a shared library as, for example:
3536
3537                 gcc -fpic -shared -o example.so example.c
3538
3539              and run it using:
3540
3541                 stress-ng --plugin 1 --plugin-so ./example.so
3542
3543       --plugin-ops N
3544              stop  after  N  iterations  of  the user provided stressor func‐
3545              tion(s).
3546
3547       --plugin-so name
3548              specify the shared library containing the user provided stressor
3549              function(s).
3550
3551       --plugin-method function
3552              run  a  specific stressor function, specify the name without the
3553              leading stress_ prefix.
3554
3555       -P N, --poll N
3556              start N workers  that  perform  zero  timeout  polling  via  the
3557              poll(2),  ppoll(2),  select(2),  pselect(2)  and sleep(3) calls.
3558              This wastes system and user time doing nothing.
3559
3560       --poll-ops N
3561              stop poll stress workers after N bogo poll operations.
3562
3563       --poll-fds N
3564              specify the number of file descriptors to poll/ppoll/select/pse‐
3565              lect  on.   The  maximum number for select/pselect is limited by
3566              FD_SETSIZE and the upper maximum is also limited by the  maximum
3567              number of pipe open descriptors allowed.
3568
3569       --prctl N
3570              start  N workers that exercise the majority of the prctl(2) sys‐
3571              tem call options. Each batch of prctl calls is performed  inside
3572              a  new  child  process to ensure the limit of prctl is contained
3573              inside a new process every time.  Some prctl options are  archi‐
3574              tecture  specific,  however,  this  stressor will exercise these
3575              even if they are not implemented.
3576
3577       --prctl-ops N
3578              stop prctl workers after N batches of prctl calls
3579
3580       --prefetch N
3581              start N workers that benchmark prefetch and  non-prefetch  reads
3582              of a L3 cache sized buffer. The buffer is read with loops of 8 ×
3583              64 bit reads per iteration.  In  the  prefetch  cases,  data  is
3584              prefetched  ahead  of the current read position by various sized
3585              offsets, from 64 bytes to  8K  to  find  the  best  memory  read
3586              throughput.  The stressor reports the non-prefetch read rate and
3587              the best prefetched read rate. It also reports the prefetch off‐
3588              set  and  an estimate of the amount of time between the prefetch
3589              issue and the actual memory  read  operation.  These  statistics
3590              will  vary from run-to-run due to system noise and CPU frequency
3591              scaling.
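
                  The prefetched read pattern can be sketched with the GCC
                  __builtin_prefetch() builtin (an illustration with an assumed
                  prefetch distance, not the benchmark itself):

                     #include <stdint.h>
                     #include <stdio.h>
                     #include <stdlib.h>

                     int main(void)
                     {
                             size_t n = 4 * 1024 * 1024 / sizeof(uint64_t);
                             uint64_t *buf = calloc(n, sizeof(uint64_t));
                             uint64_t sum = 0;
                             size_t i;
                             /* illustrative prefetch distance: 256 bytes ahead */
                             size_t ahead = 256 / sizeof(uint64_t);

                             if (!buf)
                                     return 1;
                             for (i = 0; i < n; i++) {
                                     if (i + ahead < n)
                                             __builtin_prefetch(&buf[i + ahead],
                                                     0, 1);
                                     sum += buf[i];
                             }
                             printf("sum = %llu\n", (unsigned long long)sum);
                             free(buf);
                             return 0;
                     }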
3592
3593       --prefetch-ops N
3594              stop prefetch stressors after N benchmark operations
3595
3596       --prefetch-l3-size N
3597              specify the size of the L3 cache.
3598
3599       --procfs N
3600              start N workers that read files from /proc and recursively  read
3601              files from /proc/self (Linux only).
3602
3603       --procfs-ops N
3604              stop  procfs  reading  after N bogo read operations. Note, since
3605              the number of entries may vary between kernels,  this  bogo  ops
3606              metric is probably very misleading.
3607
3608       --pthread N
3609              start N workers that iteratively create and terminate multiple
3610              pthreads (the default is 1024 pthreads per worker). In each  it‐
3611              eration,  each  newly created pthread waits until the worker has
3612              created all the pthreads and then they all terminate together.
3613
3614       --pthread-ops N
3615              stop pthread workers after N bogo pthread create operations.
3616
3617       --pthread-max N
3618              create N pthreads per worker. If the product of  the  number  of
3619              pthreads by the number of workers is greater than the soft limit
3620              of allowed pthreads then the maximum is re-adjusted down to  the
3621              maximum allowed.
3622
3623       --ptrace N
3624              start  N  workers  that  fork  and trace system calls of a child
3625              process using ptrace(2).
3626
3627       --ptrace-ops N
3628              stop ptracer workers after N bogo system calls are traced.
3629
3630       --pty N
3631              start N workers that repeatedly attempt to open  pseudoterminals
3632              and  perform  various  pty  ioctls  upon the ptys before closing
3633              them.
3634
3635       --pty-ops N
3636              stop pty workers after N pty bogo operations.
3637
3638       --pty-max N
3639              try to open a maximum  of  N  pseudoterminals,  the  default  is
3640              65536. The allowed range of this setting is 8..65536.
3641
3642       -Q, --qsort N
3643              start N workers that sort 32 bit integers using qsort.
3644
3645       --qsort-ops N
3646              stop qsort stress workers after N bogo qsorts.
3647
3648       --qsort-size N
3649              specify  number  of  32  bit integers to sort, default is 262144
3650              (256 × 1024).
3651
3652       --quota N
3653              start N workers that exercise the Q_GETQUOTA,  Q_GETFMT,  Q_GET‐
3654              INFO,  Q_GETSTATS  and  Q_SYNC  quotactl(2)  commands on all the
3655              available mounted block based file systems. Requires CAP_SYS_AD‐
3656              MIN capability to run.
3657
3658       --quota-ops N
3659              stop quota stress workers after N bogo quotactl operations.
3660
3661       --radixsort N
3662              start N workers that sort random 8 byte strings using radixsort.
3663
3664       --radixsort-ops N
3665              stop radixsort stress workers after N bogo radixsorts.
3666
3667       --radixsort-size N
3668              specify  number  of  strings  to  sort, default is 262144 (256 ×
3669              1024).
3670
3671       --ramfs N
3672              start N workers mounting a memory based file system using  ramfs
3673              and  tmpfs  (Linux  only).  This alternates between mounting and
3674              umounting a ramfs or tmpfs file  system  using  the  traditional
3675              mount(2)  and  umount(2)  system call as well as the newer Linux
3676              5.2 fsopen(2), fsmount(2), fsconfig(2) and move_mount(2)  system
3677              calls if they are available. The default ram file system size is
3678              2MB.
3679
3680       --ramfs-ops N
3681              stop after N ramfs mount operations.
3682
3683       --ramfs-size N
3684              set the ramfs size (must be multiples of the page size).
3685
3686       --rawdev N
3687              start N workers that read the underlying raw drive device  using
3688              direct  IO  reads.  The device (with minor number 0) that stores
3689              the current working directory is the raw device to  be  read  by
3690              the stressor.  The read size is exactly the size of the underly‐
3691              ing device block size.  By default, this stressor will  exercise
3692              all of the rawdev methods (see the --rawdev-method option).
3693              This is a Linux only stressor and requires root privilege to  be
3694              able to read the raw device.
3695
3696       --rawdev-ops N
3697              stop  the rawdev stress workers after N raw device read bogo op‐
3698              erations.
3699
3700       --rawdev-method method
3701              Available rawdev stress methods are described as follows:
3702
3703              Method   Description
3704              all      iterate over all the rawdev stress  methods  as
3705                       listed below:
3706              sweep    repeatedly  read across the raw device from the
3707                       0th block to the end block in steps of the num‐
3708                       ber  of  blocks on the device / 128 and back to
3709                       the start again.
3710              wiggle   repeatedly read across the raw device in 128
3711                       evenly spaced steps, with each step reading
3712                       1024 blocks backwards from each step.
3713              ends     repeatedly read the first and last 128 blocks
3714                       of the raw device, alternating between the
3715                       start of the device and the end of the de‐
3716                       vice.
3717              random   repeatedly read 256 random blocks
3718              burst    repeatedly  read 256 sequential blocks starting
3719                       from a random block on the raw device.
3720
3721       --randlist N
3722              start N workers that create a list of objects in randomized
3723              memory order and traverse the list setting and reading the ob‐
3724              jects. This is designed to exercise memory and cache thrashing.
3725              Normally  the objects are allocated on the heap, however for ob‐
3726              jects of page size or larger there is a 1 in 16  chance  of  ob‐
3727              jects  being  allocated using shared anonymous memory mapping to
3728              mix up the address spaces of the allocations to create more  TLB
3729              thrashing.
3730
3731       --randlist-ops N
3732              stop randlist workers after N list traversals
3733
3734       --randlist-compact
3735              Allocate  all  the  list objects using one large heap allocation
3736              and divide this up for all the list objects.  This  removes  the
3737              overhead  of  the  heap keeping track of each list object, hence
3738              uses less memory.
3739
3740       --randlist-items N
3741              Allocate N items on the list. By default, 100,000 items are  al‐
3742              located.
3743
3744       --randlist-size N
3745              Allocate  each  item to be N bytes in size. By default, the size
3746              is 64 bytes of data payload plus the list handling pointer over‐
3747              head.
3748
3749       --rawsock N
3750              start  N  workers  that  send  and receive packet data using raw
3751              sockets on the localhost. Requires CAP_NET_RAW to run.
3752
3753       --rawsock-ops N
3754              stop rawsock workers after N packets are received.
3755
3756       --rawpkt N
3757              start N workers that send and receive ethernet packets using
3758              raw  packets  on the localhost via the loopback device. Requires
3759              CAP_NET_RAW to run.
3760
3761       --rawpkt-ops N
3762              stop rawpkt workers after N packets from the sender process  are
3763              received.
3764
3765       --rawpkt-port N
3766              start  at port P. For N rawpkt worker processes, ports P to (P *
3767              4) - 1 are used. The default starting port is port 14000.
3768
3769       --rawudp N
3770              start N workers that send and  receive  UDP  packets  using  raw
3771              sockets on the localhost. Requires CAP_NET_RAW to run.
3772
3773       --rawudp-if NAME
3774              use  network  interface NAME. If the interface NAME does not ex‐
3775              ist, is not up or does not support the domain then the  loopback
3776              (lo) interface is used as the default.
3777
3778       --rawudp-ops N
3779              stop rawudp workers after N packets are received.
3780
3781       --rawudp-port N
3782              start  at port P. For N rawudp worker processes, ports P to (P *
3783              4) - 1 are used. The default starting port is port 13000.
3784
3785       --rdrand N
3786              start N workers that read a random number from an on-chip random
3787              number generator. This uses the rdrand instruction on Intel x86
3788              processors or the darn instruction on Power9 processors.
3789
3790       --rdrand-ops N
3791              stop rdrand stress workers after N  bogo  rdrand  operations  (1
3792              bogo op = 2048 random bits successfully read).
3793
3794       --rdrand-seed
3795              use rdseed instead of rdrand (x86 only).
3796
3797       --readahead N
3798              start  N  workers  that  randomly  seek  and  perform  4096 byte
3799              read/write I/O operations on a file with readahead. The  default
3800              file  size  is  64 MB.  Readaheads and reads are batched into 16
3801              readaheads and then 16 reads.
3802
3803       --readahead-bytes N
3804              set the size of readahead file, the default is  1  GB.  One  can
3805              specify  the  size  as  % of free space on the file system or in
3806              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3807              m or g.
3808
3809       --readahead-ops N
3810              stop readahead stress workers after N bogo read operations.
3811
3812       --reboot N
3813              start  N  workers  that exercise the reboot(2) system call. When
3814              possible, it will create a process in a PID namespace and per‐
3815              form a reboot power off command that should shut down the
3816              process. Also, the stressor exercises invalid reboot magic val‐
3817              ues and, when there are insufficient privileges, invalid reboots
3818              that will not actually reboot the system.
3819
3820       --reboot-ops N
3821              stop the reboot stress workers after N bogo reboot cycles.
3822
3823       --regs N
3824              start N workers that shuffle data around the CPU registers exer‐
3825              cising register move instructions. Each bogo-op represents 1000
3826              calls of a shuffling function that  shuffles  the  registers  32
3827              times. Only implemented for the GCC compiler since this requires
3828              register annotations and optimization level 0 to compile  appro‐
3829              priately.
3830
3831       --regs-ops N
3832              stop regs stressors after N bogo operations.
3833
3834       --remap N
3835              start  N workers that map 512 pages and re-order these pages us‐
3836              ing the deprecated system call remap_file_pages(2). Several page
3837              re-orderings  are  exercised:  forward, reverse, random and many
3838              pages to 1 page.
3839
3840       --remap-ops N
3841              stop after N remapping bogo operations.
3842
3843       -R N, --rename N
3844              start N workers that each create a file and then repeatedly  re‐
3845              name it.
3846
3847       --rename-ops N
3848              stop rename stress workers after N bogo rename operations.
3849
3850       --resched N
3851              start  N workers that exercise process rescheduling. Each stres‐
3852              sor spawns a child process for each of the positive nice  levels
3853              and  iterates over the nice levels from 0 to the lowest priority
3854              level (highest nice value). For each of the nice levels 1024 it‐
3855              erations over 3 non-real time scheduling policies SCHED_OTHER,
3856              SCHED_BATCH and SCHED_IDLE are set and a sched_yield  occurs  to
3857              force  heavy  rescheduling activity.  When the -v verbose option
3858              is used the distribution of the number of yields across the nice
3859              levels is printed for the first stressor out of the N stressors.
3860
3861       --resched-ops N
3862              stop after N rescheduling sched_yield calls.
3863
3864       --resources N
3865              start  N  workers  that  consume  various system resources. Each
3866              worker will spawn 1024 child processes that iterate  1024  times
3867              consuming  shared memory, heap, stack, temporary files and vari‐
3868              ous file descriptors (eventfds, memoryfds,  userfaultfds,  pipes
3869              and sockets).
3870
3871       --resources-ops N
3872              stop after N resource child forks.
3873
3874       --revio N
3875              start N workers continually writing in reverse position order to
3876              temporary files. The default mode is to stress test reverse  po‐
3877              sition  ordered  writes with randomly sized sparse holes between
3878              each write.  With the --aggressive option  enabled  without  any
3879              --revio-opts options the revio stressor will work through all
3880              the --revio-opts options one by one to cover a range of I/O op‐
3881              tions.
3882
3883       --revio-bytes N
3884              write  N  bytes for each revio process, the default is 1 GB. One
3885              can specify the size as % of free space on the file system or in
3886              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3887              m or g.
3888
3889       --revio-opts list
3890              specify various stress test options as a comma  separated  list.
3891              Options are the same as --hdd-opts but without the iovec option.
3892
3893       --revio-ops N
3894              stop revio stress workers after N bogo operations.
3895
3896       --revio-write-size N
3897              specify  size of each write in bytes. Size can be from 1 byte to
3898              4MB.
3899
3900       --rlimit N
3901              start N workers that exceed CPU and file size resource limits,
3902              generating SIGXCPU and SIGXFSZ signals.
3903
3904       --rlimit-ops N
3905              stop  after  N bogo resource limited SIGXCPU and SIGXFSZ signals
3906              have been caught.
3907
3908       --rmap N
3909              start N workers that exercise the VM reverse-mapping. This  cre‐
3910              ates  16  processes  per  worker  that write/read multiple file-
3911              backed memory mappings. There are 64 lots  of  4  page  mappings
3912              made  onto  the file, with each mapping overlapping the previous
3913              by 3 pages and at least 1 page of non-mapped memory between each
3914              of  the mappings. Data is synchronously msync'd to the file 1 in
3915              every 256 iterations in a random manner.
3916
3917       --rmap-ops N
3918              stop after N bogo rmap memory writes/reads.
3919
3920       --rseq N
3921              start N workers that  exercise  restartable  sequences  via  the
3922              rseq(2)  system  call.  This loops over a long duration critical
3923              section that is likely to be interrupted. An rseq abort handler
3924              keeps count of the number of interruptions and a SIGSEGV handler
3925              also tracks any failed rseq aborts that can occur if there is a
3926              mismatch in the rseq check signature. Linux only.
3927
3928       --rseq-ops N
3929              stop  after  N bogo rseq operations. Each bogo rseq operation is
3930              equivalent to 10000 iterations over a long duration rseq handled
3931              critical section.
3932
3933       --rtc N
3934              start  N  workers that exercise the real time clock (RTC) inter‐
3935              faces  via  /dev/rtc  and  /sys/class/rtc/rtc0.  No  destructive
3936              writes (modifications) are performed on the RTC. This is a Linux
3937              only stressor.
3938
3939       --rtc-ops N
3940              stop after N bogo RTC interface accesses.
3941
3942       --schedpolicy N
3943              start N workers that set the worker to various available
3944              scheduling policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE,
3945              SCHED_FIFO, SCHED_RR and  SCHED_DEADLINE.   For  the  real  time
3946              scheduling  policies a random sched priority is selected between
3947              the minimum and maximum scheduling priority settings.
3948
3949       --schedpolicy-ops N
3950              stop after N bogo scheduling policy changes.
3951
3952       --sctp N
3953              start N workers that perform network sctp stress activity  using
3954              the  Stream Control Transmission Protocol (SCTP).  This involves
3955              client/server processes performing rapid connect,  send/receives
3956              and disconnects on the local host.
3957
3958       --sctp-domain D
3959              specify  the  domain to use, the default is ipv4. Currently ipv4
3960              and ipv6 are supported.
3961
3962       --sctp-if NAME
3963              use network interface NAME. If the interface NAME does  not  ex‐
3964              ist,  is not up or does not support the domain then the loopback
3965              (lo) interface is used as the default.
3966
3967       --sctp-ops N
3968              stop sctp workers after N bogo operations.
3969
3970       --sctp-port P
3971              start at sctp port P. For N sctp worker processes, ports P to (P
3972              *  4)  -  1 are used for ipv4, ipv6 domains and ports P to P - 1
3973              are used for the unix domain.
3974
3975       --sctp-sched [ fcfs | prio | rr ]
3976              specify SCTP scheduler, one of fcfs (default),  prio  (priority)
3977              or rr (round-robin).
3978
3979       --seal N
3980              start  N  workers  that exercise the fcntl(2) SEAL commands on a
3981              small anonymous file created using memfd_create(2).  After  each
3982              SEAL  command  is  issued the stressor also sanity checks if the
3983              seal operation has sealed the file correctly.  (Linux only).
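
                  A small illustration of the sealing interface (assumes the
                  glibc 2.27+ memfd_create wrapper; not the stressor code):

                     #define _GNU_SOURCE
                     #include <fcntl.h>
                     #include <sys/mman.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             /* anonymous memfd that permits sealing */
                             int fd = memfd_create("seal-demo",
                                     MFD_ALLOW_SEALING);

                             if (fd < 0)
                                     return 1;
                             ftruncate(fd, 4096);
                             /* forbid any further size changes */
                             fcntl(fd, F_ADD_SEALS,
                                     F_SEAL_GROW | F_SEAL_SHRINK);
                             /* this grow attempt should now fail with EPERM */
                             if (ftruncate(fd, 8192) < 0)
                                     write(1, "sealed\n", 7);
                             close(fd);
                             return 0;
                     }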
3984
3985       --seal-ops N
3986              stop after N bogo seal operations.
3987
3988       --seccomp N
3989              start N workers that exercise Secure Computing system call  fil‐
3990              tering.  Each  worker creates child processes that write a short
3991              message to /dev/null and then exits. 2% of the  child  processes
3992              have a seccomp filter that disallows the write system call and
3993              hence are killed by seccomp with a SIGSYS. Note that this
3994              stressor  can  generate  many  audit  log messages each time the
3995              child is killed.  Requires CAP_SYS_ADMIN to run.
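
                  The kind of filter involved can be sketched with a classic BPF
                  program that disallows write(2); this is an illustration, not
                  the stressor's own filter, and it omits the architecture check
                  a robust filter would add:

                     #include <linux/filter.h>
                     #include <linux/seccomp.h>
                     #include <stddef.h>
                     #include <sys/prctl.h>
                     #include <sys/syscall.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             struct sock_filter filter[] = {
                                     /* load the system call number */
                                     BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                                             offsetof(struct seccomp_data, nr)),
                                     /* kill the process if it is write(2) */
                                     BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
                                             __NR_write, 0, 1),
                                     BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                                     BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                             };
                             struct sock_fprog prog = {
                                     .len = sizeof(filter) / sizeof(filter[0]),
                                     .filter = filter,
                             };

                             prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
                             prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
                             /* this write now raises SIGSYS and kills us */
                             write(1, "not seen\n", 9);
                             return 0;
                     }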
3996
3997       --seccomp-ops N
3998              stop seccomp stress workers after N seccomp filter tests.
3999
4000       --secretmem N
4001              start N workers  that  mmap  pages  using  file  mapping  off  a
4002              memfd_secret  file  descriptor.  Each stress loop iteration will
4003              expand the mappable region by 3 pages using ftruncate and mmap
4004              and touch the pages. The pages are then fragmented by unmap‐
4005              ping the middle page and then unmapping the first and last pages.
4006              This  tries  to force page fragmentation and also trigger out of
4007              memory (OOM) kills of the stressor when the secret memory is ex‐
4008              hausted.   Note this is a Linux 5.11+ only stressor and the ker‐
4009              nel needs to be booted with "secretmem=" option  to  allocate  a
4010              secret memory reservation.
4011
4012       --secretmem-ops N
4013              stop secretmem stress workers after N stress loop iterations.
4014
4015       --seek N
4016              start N workers that randomly seek and perform 512 byte
4017              read/write I/O operations on a file. The default file size is 16
4018              GB.
4019
4020       --seek-ops N
4021              stop seek stress workers after N bogo seek operations.
4022
4023       --seek-punch
4024              punch  randomly located 8K holes into the file to cause more ex‐
4025              tents to force a more demanding seek stressor, (Linux only).
4026
4027       --seek-size N
4028              specify the size of the file in bytes. Small  file  sizes  allow
4029              the  I/O  to occur in the cache, causing greater CPU load. Large
4030              file sizes force more I/O operations to the drive causing more wait
4031              time  and  more  I/O  on  the drive. One can specify the size in
4032              units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
4033              m or g.
4034
4035       --sem N
4036              start N workers that perform POSIX semaphore wait and post oper‐
4037              ations. By default, a parent and  4  children  are  started  per
4038              worker  to  provide  some  contention  on  the  semaphore.  This
4039              stresses fast semaphore operations and  produces  rapid  context
4040              switching.
4041
4042       --sem-ops N
4043              stop semaphore stress workers after N bogo semaphore operations.
4044
4045       --sem-procs N
4046              start  N  child  workers per worker to provide contention on the
4047              semaphore, the default is 4 and a maximum of 64 are allowed.
4048
4049       --sem-sysv N
4050              start N workers that perform System V semaphore  wait  and  post
4051              operations.  By default, a parent and 4 children are started per
4052              worker  to  provide  some  contention  on  the  semaphore.  This
4053              stresses  fast  semaphore  operations and produces rapid context
4054              switching.
4055
4056       --sem-sysv-ops N
4057              stop semaphore stress workers after N bogo  System  V  semaphore
4058              operations.
4059
4060       --sem-sysv-procs N
4061              start  N child processes per worker to provide contention on the
4062              System V semaphore, the default is 4 and a maximum of 64 are al‐
4063              lowed.
4064
4065       --sendfile N
4066              start N workers that send an empty file to /dev/null. This oper‐
4067              ation spends nearly all the time in  the  kernel.   The  default
4068              sendfile size is 4MB.  The sendfile options are for Linux only.
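
                  A minimal sendfile(2) sketch copying an assumed source file to
                  /dev/null entirely within the kernel (illustrative only):

                     #include <fcntl.h>
                     #include <sys/sendfile.h>
                     #include <sys/stat.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             /* illustrative source file */
                             int in = open("/etc/hostname", O_RDONLY);
                             int out = open("/dev/null", O_WRONLY);
                             struct stat sb;

                             if (in < 0 || out < 0)
                                     return 1;
                             fstat(in, &sb);
                             /* copy inside the kernel, no user space buffer */
                             sendfile(out, in, NULL, sb.st_size);
                             close(in);
                             close(out);
                             return 0;
                     }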
4069
4070       --sendfile-ops N
4071              stop sendfile workers after N sendfile bogo operations.
4072
4073       --sendfile-size S
4074              specify  the  size to be copied with each sendfile call. The de‐
4075              fault size is 4MB. One can specify the size in units  of  Bytes,
4076              KBytes, MBytes and GBytes using the suffix b, k, m or g.
4077
4078       --session N
4079              start  N workers that create child and grandchild processes that
4080              set and get their session ids. 25% of the  grandchild  processes
4081              are not waited for by the child to create orphaned sessions that
4082              need to be reaped by init.
4083
4084       --session-ops N
4085              stop session workers after N child  processes  are  spawned  and
4086              reaped.
4087
4088       --set N
4089              start  N  workers that call system calls that try to set data in
4090              the kernel, currently these are: setgid,  sethostname,  setpgid,
4091              setpgrp,  setuid,  setgroups, setreuid, setregid, setresuid, se‐
4092              tresgid and setrlimit.  Some of these system calls are  OS  spe‐
4093              cific.
4094
4095       --set-ops N
4096              stop set workers after N bogo set operations.
4097
4098       --shellsort N
4099              start N workers that sort 32 bit integers using shellsort.
4100
4101       --shellsort-ops N
4102              stop shellsort stress workers after N bogo shellsorts.
4103
4104       --shellsort-size N
4105              specify  number  of  32  bit integers to sort, default is 262144
4106              (256 × 1024).
4107
4108       --shm N
4109              start N workers that open and allocate shared memory objects us‐
4110              ing  the  POSIX  shared memory interfaces.  By default, the test
4111              will repeatedly create and destroy  32  shared  memory  objects,
4112              each of which is 8MB in size.
4113
4114       --shm-ops N
4115              stop  after N POSIX shared memory create and destroy bogo opera‐
4116              tions are complete.
4117
4118       --shm-bytes N
4119              specify the size of the POSIX shared memory objects to  be  cre‐
4120              ated. One can specify the size as % of total available memory or
4121              in units of Bytes, KBytes, MBytes and GBytes using the suffix b,
4122              k, m or g.
4123
4124       --shm-objs N
4125              specify the number of shared memory objects to be created.
4126
4127       --shm-sysv N
4128              start  N  workers that allocate shared memory using the System V
4129              shared memory interface.  By default, the test  will  repeatedly
4130              create  and  destroy  8 shared memory segments, each of which is
4131              8MB in size.
4132
4133       --shm-sysv-ops N
4134              stop after N shared memory create and  destroy  bogo  operations
4135              are complete.
4136
4137       --shm-sysv-bytes N
4138              specify the size of the shared memory segment to be created. One
4139              can specify the size as % of total available memory or in  units
4140              of  Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
4141              g.
4142
4143       --shm-sysv-segs N
4144              specify the number of shared memory segments to be created.  The
4145              default is 8 segments.
4146
4147       --sigabrt N
4148              start  N workers that create children that are killed by SIGABRT
4149              signals or by calling abort(3).
4150
4151       --sigabrt-ops N
4152              stop the sigabrt workers after N SIGABRT  signals  are  success‐
4153              fully handled.
4154
4155       --sigchld N
4156              start  N  workers  that create children to generate SIGCHLD sig‐
4157              nals. This exercises children that exit (CLD_EXITED), get killed
4158              (CLD_KILLED),  get  stopped (CLD_STOPPED) or continued (CLD_CON‐
4159              TINUED).
4160
4161       --sigchld-ops N
4162              stop the sigchld workers after N SIGCHLD  signals  are  success‐
4163              fully handled.
4164
4165       --sigfd N
4166              start N workers that generate SIGRT signals that are handled via
4167              reads by a child process using a file descriptor set up with
4168              signalfd(2). (Linux only). This will generate a heavy context
4169              switch load when all CPUs are fully loaded.
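
                  A compact signalfd(2) sketch that reads one blocked SIGRTMIN
                  signal back through a file descriptor (illustrative only):

                     #include <signal.h>
                     #include <stdio.h>
                     #include <sys/signalfd.h>
                     #include <unistd.h>

                     int main(void)
                     {
                             sigset_t mask;
                             struct signalfd_siginfo info;
                             int fd;

                             sigemptyset(&mask);
                             sigaddset(&mask, SIGRTMIN);
                             /* block normal delivery; read it via the fd */
                             sigprocmask(SIG_BLOCK, &mask, NULL);
                             fd = signalfd(-1, &mask, 0);
                             if (fd < 0)
                                     return 1;
                             raise(SIGRTMIN);
                             read(fd, &info, sizeof(info));
                             printf("read signal %d\n", (int)info.ssi_signo);
                             close(fd);
                             return 0;
                     }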
4170
4171       --sigfd-ops N
4172              stop sigfd workers after N bogo SIGUSR1 signals are sent.
4173
4174       --sigfpe N
4175              start N workers that  rapidly  cause  division  by  zero  SIGFPE
4176              faults.
4177
4178       --sigfpe-ops N
4179              stop sigfpe stress workers after N bogo SIGFPE faults.
4180
4181       --sigio N
4182              start  N  workers that read data from a child process via a pipe
4183              and generate SIGIO signals. This exercises asynchronous I/O  via
4184              SIGIO.
4185
4186       --sigio-ops N
4187              stop sigio stress workers after handling N SIGIO signals.
4188
4189       --signal N
4190              start N workers that exercise the signal system call using three
4191              different signal handlers, SIG_IGN (ignore), a SIGCHLD handler and
4192              SIG_DFL (default action).  For the SIGCHLD handler, the stressor
4193              sends itself a SIGCHLD signal and checks if it has been handled.
4194              For other handlers, the stressor checks that the SIGCHLD handler
4195              has not been called.  This stress test calls the  signal  system
4196              call directly when possible and will try to avoid the C library's
4197              attempt to replace signal with the more modern sigaction  system
4198              call.
4199
4200       --signal-ops N
4201              stop signal stress workers after N rounds of signal handler set‐
4202              ting.
4203
4204       --signest N
4205              start N workers that exercise nested signal handling.  A  signal
4206              is  raised  and  inside the signal handler a different signal is
4207              raised, working through a list of signals to exercise. An alter‐
4208              native  signal  stack is used that is large enough to handle all
4209              the nested signal calls.  The -v option will log the approximate
4210              size of the stack required and the average stack size per nested
4211              call.
4212
4213       --signest-ops N
4214              stop after handling N nested signals.
4215
4216       --sigpending N
4217              start N workers that check if SIGUSR1 signals are pending.  This
4218              stressor masks SIGUSR1, generates a SIGUSR1 signal and uses sig‐
4219              pending(2) to see if the signal is pending. Then it unmasks  the
4220              signal and checks if the signal is no longer pending.
4221
4222       --sigpending-ops N
4223              stop  sigpending  stress  workers  after N bogo sigpending pend‐
4224              ing/unpending checks.
4225
4226       --sigpipe N
4227              start N workers that repeatedly spawn off a child process that
4228              exits before the parent can complete a pipe write, causing a
4229              SIGPIPE signal. The child process is spawned using clone(2) if
4230              it is available, otherwise the slower fork(2) is used instead.
4231
4232       --sigpipe-ops N
4233              stop N workers after N SIGPIPE signals have been caught and han‐
4234              dled.
4235
4236       --sigq N
4237              start  N  workers  that  rapidly  send  SIGUSR1  signals   using
4238              sigqueue(3) to child processes that wait for the signal via sig‐
4239              waitinfo(2).
4240
4241       --sigq-ops N
4242              stop sigq stress workers after N bogo signal send operations.
4243
4244       --sigrt N
4245              start N workers that  each  create  child  processes  to  handle
4246              SIGRTMIN to SIGRTMAX real time signals. The parent sends each
4247              child process a RT signal via sigqueue(2) and the child process
4248              waits  for this via sigwaitinfo(2).  When the child receives the
4249              signal it then sends a RT signal to one of the other child  pro‐
4250              cesses also via sigqueue(2).
4251
4252       --sigrt-ops N
4253              stop  sigrt stress workers after N bogo sigqueue signal send op‐
4254              erations.
4255
4256       --sigsegv N
4257              start N workers  that  rapidly  create  and  catch  segmentation
4258              faults.
4259
4260       --sigsegv-ops N
4261              stop sigsegv stress workers after N bogo segmentation faults.
4262
4263       --sigsuspend N
4264              start  N workers that each spawn off 4 child processes that wait
4265              for a SIGUSR1 signal from the parent  using  sigsuspend(2).  The
4266              parent  sends SIGUSR1 signals to each child in rapid succession.
4267              Each sigsuspend wakeup is counted as one bogo operation.
4268
4269       --sigsuspend-ops N
4270              stop sigsuspend stress workers after N bogo sigsuspend wakeups.
4271
4272       --sigtrap N
4273              start N workers that exercise the SIGTRAP  signal.  For  systems
4274              that  support  SIGTRAP, the signal is generated using raise(SIG‐
4275              TRAP). On x86 Linux systems, SIGTRAP is also generated by
4276              an int 3 instruction.
4277
4278       --sigtrap-ops N
4279              stop sigtrap stress workers after N SIGTRAPs have been handled.
4280
4281       --skiplist N
4282              start  N workers that store and then search for integers using a
4283              skiplist.  By default, 65536 integers are  added  and  searched.
4284              This  is a useful method to exercise random access of memory and
4285              processor cache.
4286
4287       --skiplist-ops N
4288              stop the skiplist worker after N skiplist store and  search  cy‐
4289              cles are completed.
4290
4291       --skiplist-size N
4292              specify the size (number of integers) to store and search in the
4293              skiplist. Size can be from 1K to 4M.
4294
4295       --sleep N
4296              start N workers that spawn off multiple threads that  each  per‐
4297              form multiple sleeps of ranges 1us to 0.1s.  This creates multi‐
4298              ple context switches and timer interrupts.
4299
4300       --sleep-ops N
4301              stop after N sleep bogo operations.
4302
4303       --sleep-max P
4304              start P threads per worker. The default is 1024, the maximum al‐
4305              lowed is 30000.
4306
4307       --smi N
4308              start  N  workers that attempt to generate system management in‐
4309              terrupts (SMIs) into the x86  ring  -2  system  management  mode
4310              (SMM)  by  exercising  the  advanced power management (APM) port
4311              0xb2. This requires the --pathological option and root privilege
4312              and  is  only  implemented on x86 Linux platforms. This probably
4313              does not work in a virtualized environment.  The  stressor  will
4314              attempt  to  determine  the  time stolen by SMIs with some naïve
4315              benchmarking.
4316
4317       --smi-ops N
4318              stop after N attempts to trigger the SMI.
4319
4320       -S N, --sock N
4321              start N workers that perform  various  socket  stress  activity.
4322              This involves a pair of client/server processes performing rapid
4323              connects, sends/receives and disconnects on the local host.
4324
4325       --sock-domain D
4326              specify the domain to use, the default is ipv4. Currently  ipv4,
4327              ipv6 and unix are supported.
4328
4329       --sock-if NAME
4330              use  network  interface NAME. If the interface NAME does not ex‐
4331              ist, is not up or does not support the domain then the  loopback
4332              (lo) interface is used as the default.
4333
4334       --sock-nodelay
4335              This  disables the TCP Nagle algorithm, so data segments are al‐
4336              ways sent as soon as  possible.   This  stops  data  from  being
4337              buffered  before  being  transmitted,  hence resulting in poorer
4338              network utilisation and more context switches between the sender
4339              and receiver.
4340
4341       --sock-port P
4342              start at socket port P. For N socket worker processes, ports P
4343              to P + N - 1 are used.
4344
4345       --sock-protocol P
4346              Use the specified protocol P, default is tcp.  Options  are  tcp
4347              and mptcp (if supported by the operating system).
4348
4349       --sock-ops N
4350              stop socket stress workers after N bogo operations.
4351
4352       --sock-opts [ random | send | sendmsg | sendmmsg ]
4353              by  default, messages are sent using send(2). This option allows
4354              one to specify the sending  method  using  send(2),  sendmsg(2),
4355              sendmmsg(2) or a random selection of one of these three on each
4356              ation.  Note that sendmmsg is only available for  Linux  systems
4357              that support this system call.
4358
4359       --sock-type [ stream | seqpacket ]
4360              specify the socket type to use. The default type is stream. seq‐
4361              packet currently only works for the unix socket domain.
4362
4363       --sock-zerocopy
4364              enable zerocopy for send and recv calls if the  MSG_ZEROCOPY  is
4365              supported.
4366
4367       --sockabuse N
4368              start N workers that abuse a socket file descriptor with various
              file based system calls that don't normally act on sockets.
              The kernel should handle these illegal and unexpected calls
              gracefully.
4371
4372       --sockabuse-ops N
4373              stop after N iterations of the socket abusing stressor loop.
4374
4375       --sockdiag N
4376              start N workers that exercise the Linux sock_diag netlink socket
4377              diagnostics (Linux only).  This currently  requests  diagnostics
4378              using    UDIAG_SHOW_NAME,    UDIAG_SHOW_VFS,    UDIAG_SHOW_PEER,
4379              UDIAG_SHOW_ICONS, UDIAG_SHOW_RQLEN  and  UDIAG_SHOW_MEMINFO  for
4380              the AF_UNIX family of socket connections.
4381
4382       --sockdiag-ops N
4383              stop after receiving N sock_diag diagnostic messages.
4384
4385       --sockfd N
4386              start  N  workers  that pass file descriptors over a UNIX domain
4387              socket using the CMSG(3)  ancillary  data  mechanism.  For  each
              worker, a pair of client/server processes is created; the
              server opens as many file descriptors on /dev/null as pos‐
              sible and passes these over the socket to a client that
              reads them from the CMSG data and immediately closes the
              files.
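
              As an illustrative example, the following runs 2 sockfd work‐
              ers and stops after 100000 bogo operations:

                     stress-ng --sockfd 2 --sockfd-ops 100000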
4392
4393       --sockfd-ops N
4394              stop sockfd stress workers after N bogo operations.
4395
4396       --sockfd-port P
              start at socket port P. For N socket worker processes, ports
              P to P + N - 1 are used.
4399
4400       --sockmany N
4401              start  N workers that use a client process to attempt to open as
4402              many as 100000 TCP/IP socket connections to  a  server  on  port
4403              10000.
4404
4405       --sockmany-ops N
4406              stop after N connections.
4407
4408       --sockmany-if NAME
4409              use  network  interface NAME. If the interface NAME does not ex‐
4410              ist, is not up or does not support the domain then the  loopback
4411              (lo) interface is used as the default.
4412
4413       --sockpair N
4414              start  N  workers that perform socket pair I/O read/writes. This
4415              involves a pair of client/server processes  performing  randomly
4416              sized socket I/O operations.
4417
4418       --sockpair-ops N
4419              stop socket pair stress workers after N bogo operations.
4420
4421       --softlockup N
              start N workers that flip between the "real-time" SCHED_FIFO
4423              and SCHED_RR scheduling policies  at  the  highest  priority  to
4424              force  softlockups. This can only be run with CAP_SYS_NICE capa‐
4425              bility and for best results the number of stressors should be at
4426              least  the  number of online CPUs. Once running, this is practi‐
4427              cally impossible to stop and it will force softlockup issues and
4428              may trigger watchdog timeout reboots.
4429
4430       --softlockup-ops N
4431              stop  softlockup  stress  workers  after N bogo scheduler policy
4432              changes.
4433
4434       --sparsematrix N
              start N workers that exercise several different sparse ma‐
              trix implementations based on hashing, Judy array (for 64
              bit systems), 2-d circular linked-lists, memory mapped 2-d
              matrix (non-sparse), quick hashing (on preallocated nodes)
              and red-black tree. The sparse matrix is populated with
              values, random (potentially non-existing) values are read,
              known existing values are read and known existing values
              are marked as zero. By default a 500 × 500 sparse matrix is
              used and 5000 items are put into the sparse matrix, making
              it 2% utilized.
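
              For example, the following illustrative invocation exercises
              just the quick hash method on a 1000 × 1000 sparse matrix
              populated with 10000 items for 60 seconds:

                     stress-ng --sparsematrix 1 --sparsematrix-method qhash \
                        --sparsematrix-size 1000 --sparsematrix-items 10000 \
                        -t 60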
4444
4445       --sparsematrix-ops N
4446              stop after N sparsematrix test iterations.
4447
4448       --sparsematrix-items N
4449              populate the sparse matrix with N items. If N  is  greater  than
              the number of elements in the sparse matrix then N will be
              capped to create a 100% full sparse matrix.
4452
4453       --sparsematrix-size N
              use an N × N sized sparse matrix.
4455
4456       --sparsematrix-method [ all | hash | judy | list | mmap | qhash | rb ]
4457              specify the type of sparse matrix  implementation  to  use.  The
4458              'all' method uses all the methods and is the default.
4459
4460              Method        Description
4461              all           exercise   with  all  the  sparsematrix
4462                            stressor methods (see below):
4465              hash          use a hash table and allocate nodes  on
4466                            the heap for each unique value at a (x,
4467                            y) matrix position.
4468              judy          use a Judy array with a  unique  1-to-1
4469                            mapping  of (x, y) matrix position into
4470                            the array.
4471              list          use a circular linked-list for sparse y
4472                            positions  each  with  circular linked-
4473                            lists for sparse x  positions  for  the
4474                            (x, y) matrix coordinates.
              mmap          use a non-sparse mmap of the entire 2-d
                            matrix space. Only (x, y) matrix posi‐
                            tions that are referenced will get
                            physically mapped. Note that large
                            sparse matrices cannot be mmap'd due to
                            virtual address space limitations, and
                            too many referenced pages can trigger
                            the out of memory killer on Linux.
4483              qhash         use a  hash  table  with  pre-allocated
4484                            nodes  for each unique value. This is a
4485                            quick hash table implementation,  nodes
4486                            are not allocated each time with calloc
4487                            and are allocated from a  pre-allocated
4488                            pool leading to quicker hash table per‐
                            formance than the hash method.
              rb            use a red-black tree keyed by the (x,
                            y) matrix position.
4490
4491       --spawn N
              start N workers that continually spawn children using
              posix_spawn(3); the children exec stress-ng and then exit
              almost immediately. Currently Linux only.
4495
4496       --spawn-ops N
4497              stop spawn stress workers after N bogo spawns.
4498
4499       --splice N
4500              move data from /dev/zero to /dev/null through a pipe without any
4501              copying  between kernel address space and user address space us‐
4502              ing splice(2). This is only available for Linux.
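
              For example, the following illustrative invocation runs 2
              splice workers transferring 1MB per splice call for 60 sec‐
              onds:

                     stress-ng --splice 2 --splice-bytes 1m -t 60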
4503
4504       --splice-ops N
4505              stop after N bogo splice operations.
4506
4507       --splice-bytes N
4508              transfer N bytes per splice call, the default is  64K.  One  can
4509              specify  the  size as % of total available memory or in units of
4510              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4511
4512       --stack N
4513              start N workers that rapidly cause and catch stack overflows  by
4514              use  of  large  recursive  stack allocations.  Much like the brk
4515              stressor, this can eat up pages rapidly and may trigger the ker‐
4516              nel  OOM  killer on the process, however, the killed stressor is
4517              respawned again by a monitoring parent process.
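
              An illustrative invocation that runs 2 stack workers with
              full page touching for 60 seconds is:

                     stress-ng --stack 2 --stack-fill -t 60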
4518
4519       --stack-fill
4520              the default action is to touch the lowest page on each stack al‐
4521              location.  This  option touches all the pages by filling the new
4522              stack allocation with zeros which forces physical  pages  to  be
4523              allocated and hence is more aggressive.
4524
4525       --stack-mlock
4526              attempt  to  mlock stack pages into memory prohibiting them from
4527              being paged out.  This is a no-op if mlock(2) is not available.
4528
4529       --stack-ops N
4530              stop stack stress workers after N bogo stack overflows.
4531
4532       --stackmmap N
4533              start N workers that use a 2MB stack that is memory mapped  onto
4534              a  temporary file. A recursive function works down the stack and
4535              flushes dirty stack pages back to the memory mapped  file  using
4536              msync(2) until the end of the stack is reached (stack overflow).
4537              This exercises dirty page and stack exception handling.
4538
4539       --stackmmap-ops N
4540              stop workers after N stack overflows have occurred.
4541
4542       --str N
4543              start N workers that exercise various libc string  functions  on
4544              random strings.
4545
4546       --str-method strfunc
4547              select  a  specific  libc  string  function to stress. Available
4548              string functions to stress are: all, index, rindex,  strcasecmp,
4549              strcat,  strchr,  strcoll,  strcmp, strcpy, strlen, strncasecmp,
4550              strncat, strncmp, strrchr and strxfrm.  See string(3)  for  more
4551              information  on these string functions.  The 'all' method is the
4552              default and will exercise all the string methods.
4553
4554       --str-ops N
4555              stop after N bogo string operations.
4556
4557       --stream N
4558              start N workers exercising a memory bandwidth  stressor  loosely
4559              based  on  the STREAM "Sustainable Memory Bandwidth in High Per‐
4560              formance Computers" benchmarking tool by John D. McCalpin, Ph.D.
4561              This  stressor  allocates  buffers that are at least 4 times the
4562              size of the CPU L2 cache and continually performs rounds of fol‐
4563              lowing computations on large arrays of double precision floating
4564              point numbers:
4565
4566              Operation          Description
4567              copy               c[i] = a[i]
4568              scale              b[i] = scalar * c[i]
4569              add                c[i] = a[i] + b[i]
4570              triad              a[i] = b[i] + (c[i] * scalar)
4571
4572              Since this is loosely based on a variant of the STREAM benchmark
              code, DO NOT submit results based on this as it is intended
              in stress-ng just to stress memory and compute and is NOT
              intended for accurate tuned or non-tuned STREAM benchmarking
              whatsoever.
4576              Use the official STREAM benchmarking tool if you desire accurate
4577              and standardised STREAM benchmarks.
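
              For example, the following illustrative invocation runs 4
              stream workers with a 16MB L3 cache size setting and the
              most aggressive index setting for 60 seconds:

                     stress-ng --stream 4 --stream-l3-size 16m \
                        --stream-index 3 -t 60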
4578
4579       --stream-ops N
4580              stop  after  N stream bogo operations, where a bogo operation is
4581              one round of copy, scale, add and triad operations.
4582
4583       --stream-index N
4584              specify number of stream indices used to index into the data ar‐
4585              rays  a, b and c.  This adds indirection into the data lookup by
4586              using randomly shuffled indexing into  the  three  data  arrays.
4587              Level  0  (no indexing) is the default, and 3 is where all 3 ar‐
4588              rays are indexed via 3 different randomly shuffled indexes.  The
4589              higher  the index setting the more impact this has on L1, L2 and
4590              L3 caching and hence forces higher memory read/write latencies.
4591
4592       --stream-l3-size N
4593              Specify the CPU Level 3 cache size in bytes.   One  can  specify
4594              the  size in units of Bytes, KBytes, MBytes and GBytes using the
4595              suffix b, k, m or g.  If the L3 cache size is not provided, then
4596              stress-ng  will attempt to determine the cache size, and failing
4597              this, will default the size to 4MB.
4598
4599       --stream-madvise [ hugepage | nohugepage | normal ]
4600              Specify the madvise options used on  the  memory  mapped  buffer
4601              used  in  the  stream stressor. Non-linux systems will only have
4602              the 'normal' madvise advice. The default is 'normal'.
4603
4604       --swap N
              start N workers that add and remove small randomly sized swap
4606              partitions  (Linux only).  Note that if too many swap partitions
4607              are added then the stressors may exit  with  exit  code  3  (not
4608              enough resources).  Requires CAP_SYS_ADMIN to run.
4609
4610       --swap-ops N
4611              stop the swap workers after N swapon/swapoff iterations.
4612
4613       -s N, --switch N
4614              start  N  workers that force context switching between two mutu‐
4615              ally blocking/unblocking  tied  processes.  By  default  message
4616              passing  over  a  pipe is used, but different methods are avail‐
4617              able.
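
              For example, the following illustrative invocation runs 2
              context switching workers using the message queue method at
              a target rate of 100000 switches per second:

                     stress-ng --switch 2 --switch-method mq \
                        --switch-freq 100000 -t 60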
4618
4619       --switch-ops N
4620              stop context switching workers after N bogo operations.
4621
4622       --switch-freq F
4623              run the context switching at the frequency of F context switches
4624              per  second.  Note  that  the  specified  switch rate may not be
4625              achieved because of CPU speed and memory bandwidth limitations.
4626
4627       --switch-method [ mq | pipe | sem-sysv ]
4628              select the preferred context  switch  block/run  synchronization
4629              method, these are as follows:
4630
4631              Method    Description
4632              mq        use  posix  message  queue  with a 1 item size.
4633                        Messages are passed between a  sender  and  re‐
4634                        ceiver process.
4635              pipe      single  character  messages  are  passed down a
4636                        single character sized pipe  between  a  sender
4637                        and receiver process.
4638
4639              sem-sysv  a  SYSV semaphore is used to block/run two pro‐
4640                        cesses.
4641
4642       --symlink N
4643              start N workers creating and removing symbolic links.
4644
4645       --symlink-ops N
4646              stop symlink stress workers after N bogo operations.
4647
4648       --sync-file N
4649              start N workers that perform a range of data syncs across a file
4650              using  sync_file_range(2).   Three mixes of syncs are performed,
4651              from start to the end of the file,  from end of the file to  the
4652              start,  and a random mix. A random selection of valid sync types
4653              are    used,    covering    the     SYNC_FILE_RANGE_WAIT_BEFORE,
4654              SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
4655
4656       --sync-file-ops N
4657              stop sync-file workers after N bogo sync operations.
4658
4659       --sync-file-bytes N
4660              specify  the  size of the file to be sync'd. One can specify the
4661              size as % of free space on the file system in  units  of  Bytes,
4662              KBytes, MBytes and GBytes using the suffix b, k, m or g.
4663
4664       --syncload N
4665              start N workers that produce sporadic short lived loads synchro‐
4666              nized across N stressor processes. By default repeated cycles of
4667              125ms  busy  load  followed by 62.5ms sleep occur across all the
4668              workers in step to create bursts of load  to  exercise  C  state
4669              transitions  and CPU frequency scaling. The busy load and sleeps
4670              have +/-10% jitter added to try exercising scheduling patterns.
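
              For example, the following illustrative invocation runs one
              syncload worker per configured CPU with a 100ms busy period
              and a 50ms sleep period for 60 seconds:

                     stress-ng --syncload 0 --syncload-msbusy 100 \
                        --syncload-mssleep 50 -t 60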
4671
4672       --syncload-ops N
4673              stop syncload workers after N load/sleep cycles.
4674
4675       --syncload-msbusy M
4676              specify the busy load duration in milliseconds.
4677
4678       --syncload-mssleep M
4679              specify the sleep duration in milliseconds.
4680
4681       --sysbadaddr N
4682              start N workers that pass bad addresses to system calls to exer‐
4683              cise bad address and fault handling. The addresses used are null
4684              pointers, read only pages, write only pages, unmapped addresses,
4685              text  only  pages,  unaligned  addresses  and  top of memory ad‐
4686              dresses.
4687
4688       --sysbadaddr-ops N
4689              stop the sysbadaddr stressors after N bogo system calls.
4690
4691       --syscall N
4692              start N workers that exercise a range of available system calls.
4693              System calls that fail due to lack of capabilities or errors are
4694              ignored. The stressor will try to maximize the  rate  of  system
              calls being executed based on the entire time taken to setup, run
4696              and cleanup after each system call.
4697
4698       --syscall-ops N
4699              stop after N system calls
4700
4701       --syscall-method method
              select the choice of system calls to execute based on the
              fastest test duration times. Note that this includes the time to
4704              setup, execute the system  call  and  cleanup  afterwards.   The
4705              available methods are as follows:
4706
4707              Method        Description
4708              all           select all the available system calls
4709              fast10        select the fastest 10% system call tests
4710              fast25        select the fastest 25% system call tests
4711              fast50        select the fastest 50% system call tests
4712              fast75        select the fastest 75% system call tests
4713              fast90        select the fastest 90% system call tests
              geomean1      select tests that are less than or equal
                            to the geometric mean of all the test
                            times
              geomean2      select tests that are less than or equal
                            to 2 × the geometric mean of all the
                            test times
              geomean3      select tests that are less than or equal
                            to 3 × the geometric mean of all the
                            test times
4720
4721       --sysinfo N
4722              start N workers that continually read system  and  process  spe‐
4723              cific information.  This reads the process user and system times
4724              using the times(2) system call.   For  Linux  systems,  it  also
4725              reads overall system statistics using the sysinfo(2) system call
4726              and also the file system statistics for all mounted file systems
4727              using statfs(2).
4728
4729       --sysinfo-ops N
4730              stop the sysinfo workers after N bogo operations.
4731
4732       --sysinval N
4733              start  N workers that exercise system calls in random order with
4734              permutations of invalid arguments to force kernel error handling
4735              checks. The stress test autodetects system calls that cause pro‐
4736              cesses to crash or exit prematurely and will blocklist these af‐
4737              ter several repeated breakages. System call arguments that cause
              system calls to work successfully are also detected and block‐
4739              listed too.  Linux only.
4740
4741       --sysinval-ops N
4742              stop sysinval workers after N system call attempts.
4743
4744       --sysfs N
4745              start  N  workers  that  recursively read files from /sys (Linux
4746              only).  This may cause specific kernel drivers to emit  messages
4747              into the kernel log.
4748
4749       --sys-ops N
4750              stop sysfs reading after N bogo read operations. Note, since the
4751              number of entries may vary between kernels, this bogo ops metric
4752              is probably very misleading.
4753
4754       --tee N
4755              move  data  from  a  writer  process to a reader process through
4756              pipes and to /dev/null without any copying  between  kernel  ad‐
4757              dress  space  and  user address space using tee(2). This is only
4758              available for Linux.
4759
4760       --tee-ops N
4761              stop after N bogo tee operations.
4762
4763       -T N, --timer N
4764              start N workers creating timer events at a default rate of 1 MHz
              (Linux only); this can create many thousands of timer clock
4766              interrupts. Each timer event is caught by a signal  handler  and
4767              counted as a bogo timer op.
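
              For example, the following illustrative invocation runs 4
              timer workers at 1 MHz for 60 seconds, with --timer-slack 0
              to reduce timer coalescing:

                     stress-ng --timer 4 --timer-freq 1000000 \
                        --timer-slack 0 -t 60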
4768
4769       --timer-ops N
4770              stop  timer  stress  workers  after  N  bogo timer events (Linux
4771              only).
4772
4773       --timer-freq F
4774              run timers at F Hz; range from 1 to 1000000000 Hz (Linux  only).
4775              By  selecting  an  appropriate  frequency stress-ng can generate
4776              hundreds of thousands of interrupts per  second.   Note:  it  is
4777              also  worth  using  --timer-slack 0 for high frequencies to stop
4778              the kernel from coalescing timer events.
4779
4780       --timer-rand
4781              select a timer frequency based around the  timer  frequency  +/-
4782              12.5% random jitter. This tries to force more variability in the
4783              timer interval to make the scheduling less predictable.
4784
4785       --timerfd N
4786              start N workers creating timerfd events at a default rate  of  1
              MHz (Linux only); this can create many thousands of timer
4788              clock events. Timer events are waited for on the timer file  de‐
4789              scriptor  using  select(2)  and  then read and counted as a bogo
4790              timerfd op.
4791
4792       --timerfd-ops N
4793              stop timerfd stress workers after N bogo timerfd  events  (Linux
4794              only).
4795
       --timerfd-fds N
4797              try to use a maximum of N timerfd file descriptors per stressor.
4798
4799       --timerfd-freq F
4800              run  timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
4801              By selecting an appropriate  frequency  stress-ng  can  generate
4802              hundreds of thousands of interrupts per second.
4803
4804       --timerfd-rand
4805              select  a timerfd frequency based around the timer frequency +/-
4806              12.5% random jitter. This tries to force more variability in the
4807              timer interval to make the scheduling less predictable.
4808
4809       --tlb-shootdown N
4810              start  N  workers  that force Translation Lookaside Buffer (TLB)
4811              shootdowns.  This is achieved by creating up to  16  child  pro‐
4812              cesses that all share a region of memory and these processes are
4813              shared amongst the available CPUs.   The  processes  adjust  the
4814              page  mapping  settings  causing TLBs to be force flushed on the
4815              other processors, causing the TLB shootdowns.
4816
4817       --tlb-shootdown-ops N
4818              stop after N bogo TLB shootdown operations are completed.
4819
4820       --tmpfs N
4821              start N workers that create a temporary  file  on  an  available
4822              tmpfs file system and perform various file based mmap operations
4823              upon it.
4824
4825       --tmpfs-ops N
4826              stop tmpfs stressors after N bogo mmap operations.
4827
4828       --tmpfs-mmap-async
4829              enable file based memory mapping and use asynchronous  msync'ing
4830              on each page, see --tmpfs-mmap-file.
4831
4832       --tmpfs-mmap-file
4833              enable  tmpfs  file based memory mapping and by default use syn‐
4834              chronous msync'ing on each page.
4835
4836       --touch N
              start N workers that touch files using open(2) or creat(2)
              and then close and unlink them. The filename contains the
              bogo-op number and is
4839              incremented on each touch operation, hence this fills the dentry
4840              cache.  Note  that the user time and system time may be very low
4841              as most of the run time is waiting for file I/O  and  this  pro‐
4842              duces very large bogo-op rates for the very low CPU time used.
4843
4844       --touch-opts all, direct, dsync, excl, noatime, sync
4845              specify various file open options as a comma separated list. Op‐
4846              tions are as follows:
4847
4848              Option    Description
4849              all       use all the open options, namely direct, dsync,
4850                        excl, noatime and sync
4851              direct    try to minimize cache effects of the I/O to and
4852                        from this file, using the O_DIRECT open flag.
4853              dsync     ensure output has been transferred to  underly‐
4854                        ing hardware and file metadata has been updated
4855                        using the O_DSYNC open flag.
4856              excl      fail if file already exists (it should not).
4857              noatime   do not update the file last access time if  the
4858                        file is read.
4859              sync      ensure  output has been transferred to underly‐
4860                        ing hardware using the O_SYNC open flag.
4861
4862       --touch-method [ random | open | creat ]
              select the method by which the file is created: either
              randomly using open(2) or creat(2), just using open(2)
              with the O_CREAT open flag, or with creat(2).
4866
4867       --tree N
4868              start N workers that exercise tree data structures. The  default
4869              is  to  add,  find  and  remove 250,000 64 bit integers into AVL
4870              (avl), Red-Black (rb), Splay (splay), btree  and  binary  trees.
4871              The  intention  of this stressor is to exercise memory and cache
4872              with the various tree operations.
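
              As an illustrative example, the following exercises just the
              red-black tree with 1000000 integers for 60 seconds:

                     stress-ng --tree 2 --tree-method rb \
                        --tree-size 1000000 -t 60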
4873
4874       --tree-ops N
4875              stop tree stressors after N bogo ops. A bogo op covers the addi‐
4876              tion, finding and removing all the items into the tree(s).
4877
4878       --tree-size N
4879              specify  the  size  of the tree, where N is the number of 64 bit
4880              integers to be added into the tree.
4881
4882       --tree-method [ all | avl | binary | btree | rb | splay ]
4883              specify the tree to be used. By default, all the trees are  used
4884              (the 'all' option).
4885
4886       --tsc N
4887              start N workers that read the Time Stamp Counter (TSC) 256 times
4888              per loop iteration (bogo operation).  This exercises the tsc in‐
4889              struction  for x86, the mftb instruction for ppc64 and the rdcy‐
4890              cle instruction for RISC-V.
4891
4892       --tsc-ops N
4893              stop the tsc workers after N bogo operations are completed.
4894
4895       --tsearch N
4896              start N workers that insert, search and delete 32  bit  integers
4897              on  a  binary tree using tsearch(3), tfind(3) and tdelete(3). By
4898              default, there are 65536 randomized integers used in  the  tree.
4899              This  is a useful method to exercise random access of memory and
4900              processor cache.
4901
4902       --tsearch-ops N
4903              stop the tsearch workers after N bogo tree operations  are  com‐
4904              pleted.
4905
4906       --tsearch-size N
4907              specify  the  size  (number  of 32 bit integers) in the array to
4908              tsearch. Size can be from 1K to 4M.
4909
4910       --tun N
              start N workers that create a network tunnel device, send
              and receive packets over the tunnel using UDP and then de‐
              stroy it. A new random 192.168.*.* IPv4 address is used
              each time a
4914              tunnel is created.
4915
4916       --tun-ops N
4917              stop after N iterations of creating/sending/receiving/destroying
4918              a tunnel.
4919
4920       --tun-tap
4921              use network tap device using level 2  frames  (bridging)  rather
4922              than a tun device for level 3 raw packets (tunnelling).
4923
4924       --udp N
4925              start  N  workers  that transmit data using UDP. This involves a
              pair of client/server processes performing rapid connects,
              sends, receives and disconnects on the local host.
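
              For example, the following illustrative invocation runs 2
              UDP workers over the ipv6 domain using UDP-Lite for 60 sec‐
              onds:

                     stress-ng --udp 2 --udp-domain ipv6 --udp-lite -t 60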
4928
4929       --udp-domain D
4930              specify  the domain to use, the default is ipv4. Currently ipv4,
4931              ipv6 and unix are supported.
4932
4933       --udp-if NAME
4934              use network interface NAME. If the interface NAME does  not  ex‐
4935              ist,  is not up or does not support the domain then the loopback
4936              (lo) interface is used as the default.
4937
4938       --udp-gro
4939              enable UDP-GRO (Generic Receive Offload) if supported.
4940
4941       --udp-lite
4942              use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6 do‐
4943              mains).
4944
4945       --udp-ops N
4946              stop udp stress workers after N bogo operations.
4947
4948       --udp-port P
              start at port P. For N udp worker processes, ports P to
              P + N - 1 are used. By default, ports 7000 upwards are used.
4951
4952       --udp-flood N
4953              start N workers that attempt to flood the host with UDP  packets
              to random ports. The IP address of the packets is currently not
4955              spoofed.  This  is  only  available  on  systems  that   support
4956              AF_PACKET.
4957
4958       --udp-flood-domain D
4959              specify  the  domain to use, the default is ipv4. Currently ipv4
4960              and ipv6 are supported.
4961
4962       --udp-flood-if NAME
4963              use network interface NAME. If the interface NAME does  not  ex‐
4964              ist,  is not up or does not support the domain then the loopback
4965              (lo) interface is used as the default.
4966
4967       --udp-flood-ops N
4968              stop udp-flood stress workers after N bogo operations.
4969
4970       --unshare N
4971              start N workers that each fork off 32 child processes,  each  of
4972              which  exercises  the  unshare(2)  system call by disassociating
4973              parts of the process execution context. (Linux only).
4974
4975       --unshare-ops N
4976              stop after N bogo unshare operations.
4977
4978       --uprobe N
4979              start N workers that trace the entry to libc  function  getpid()
4980              using  the  Linux uprobe kernel tracing mechanism. This requires
4981              CAP_SYS_ADMIN capabilities and a  modern  Linux  uprobe  capable
4982              kernel.
4983
4984       --uprobe-ops N
4985              stop uprobe tracing after N trace events of the function that is
4986              being traced.
4987
4988       -u N, --urandom N
4989              start N workers reading /dev/urandom  (Linux  only).  This  will
4990              load the kernel random number source.
4991
4992       --urandom-ops N
4993              stop urandom stress workers after N urandom bogo read operations
4994              (Linux only).
4995
4996       --userfaultfd N
4997              start N workers that generate  write  page  faults  on  a  small
4998              anonymously  mapped  memory region and handle these faults using
4999              the user space fault handling  via  the  userfaultfd  mechanism.
5000              This  will  generate  a  large quantity of major page faults and
5001              also context switches during the handling of  the  page  faults.
5002              (Linux only).
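
              For example, the following illustrative invocation runs 2
              userfaultfd workers, each faulting over a 64MB mapping, for
              60 seconds:

                     stress-ng --userfaultfd 2 --userfaultfd-bytes 64m -t 60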
5003
5004       --userfaultfd-ops N
5005              stop userfaultfd stress workers after N page faults.
5006
5007       --userfaultfd-bytes N
5008              mmap  N  bytes  per userfaultfd worker to page fault on, the de‐
5009              fault is 16MB.  One can specify the size as % of total available
5010              memory or in units of Bytes, KBytes, MBytes and GBytes using the
5011              suffix b, k, m or g.
5012
5013       --usersyscall N
5014              start N workers that exercise the Linux prctl  userspace  system
5015              call  mechanism.  A userspace system call is handled by a SIGSYS
5016              signal handler and  exercised  with  the  system  call  disabled
5017              (ENOSYS)     and    enabled    (via    SIGSYS)    using    prctl
5018              PR_SET_SYSCALL_USER_DISPATCH.
5019
5020       --usersyscall-ops N
5021              stop after N successful userspace syscalls via a  SIGSYS  signal
5022              handler.
5023
5024       --utime N
5025              start  N  workers  updating  file timestamps. This is mainly CPU
5026              bound when the default is used as the  system  flushes  metadata
5027              changes only periodically.
5028
5029       --utime-ops N
5030              stop utime stress workers after N utime bogo operations.
5031
5032       --utime-fsync
5033              force  metadata  changes  on  each  file  timestamp update to be
5034              flushed to disk.  This forces the test to become I/O  bound  and
5035              will result in many dirty metadata writes.
5036
5037       --vdso N
5038              start  N  workers  that  repeatedly call each of the system call
5039              functions in the vDSO (virtual dynamic shared object).  The vDSO
5040              is  a shared library that the kernel maps into the address space
              of all user-space applications to allow fast access to
              kernel data for some system calls without the need to per‐
              form an expensive system call.
5044
5045       --vdso-ops N
5046              stop after N vDSO functions calls.
5047
5048       --vdso-func F
5049              Instead of calling all the vDSO functions, just  call  the  vDSO
5050              function  F.  The functions depend on the kernel being used, but
5051              are typically clock_gettime, getcpu, gettimeofday and time.
5052
5053       --vecfp N
              start N workers that exercise floating point (single and
              double precision) addition, multiplication and division on
              vectors of 128, 64, 32, 16 and 8 floating point values. The
              -v option will show the approximate throughput in millions
              of floating point operations per second for each operation.
              For x86, the gcc/clang target clones attribute has been
              used to produce vector optimizations for a range of mmx,
              sse, avx and processor features.
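
              For example, the following illustrative invocation runs one
              vecfp worker per configured CPU, exercising just the double
              precision 64 wide multiplication method, with -v to show the
              throughput:

                     stress-ng --vecfp 0 --vecfp-method doublev64mul -v -t 60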
5062
5063       --vecfp-ops N
5064              stop after N vector floating point bogo-operations. Each bogo-op
5065              is equivalent to 65536 loops of 2 vector operations.  For  exam‐
5066              ple,  one bogo-op on a 16 wide vector is equivalent to 65536 × 2
5067              × 16 floating point operations.
5068
5069       --vecfp-method method
5070              specify a vecfp stress method. By default, all the stress  meth‐
5071              ods are exercised sequentially, however one can specify just one
5072              method to be used if required.
5073
5074              Method            Description
5075              all               iterate through all  of  the  following
5076                                vector methods
5077              floatv128add      addition of a vector of 128 single pre‐
5078                                cision floating point values
5079              floatv64add       addition of a vector of 64 single  pre‐
5080                                cision floating point values
5081              floatv32add       addition  of a vector of 32 single pre‐
5082                                cision floating point values
5083              floatv16add       addition of a vector of 16 single  pre‐
5084                                cision floating point values
5085              floatv8add        addition of a vector of 8 single preci‐
5086                                sion floating point values
5087              floatv128mul      multiplication of a vector of 128  sin‐
5088                                gle precision floating point values
5089              floatv64mul       multiplication of a vector of 64 single
5090                                precision floating point values
5091              floatv32mul       multiplication of a vector of 32 single
5092                                precision floating point values
5093              floatv16mul       multiplication of a vector of 16 single
5094                                precision floating point values
5095              floatv8mul        multiplication of a vector of 8  single
5096                                precision floating point values
5097              floatv128div      division of a vector of 128 single pre‐
5098                                cision floating point values
5099              floatv64div       division of a vector of 64 single  pre‐
5100                                cision floating point values
5101              floatv32div       division  of a vector of 32 single pre‐
5102                                cision floating point values
5103              floatv16div       division of a vector of 16 single  pre‐
5104                                cision floating point values
5105              floatv8div        division of a vector of 8 single preci‐
5106                                sion floating point values
5107              doublev128add     addition of a vector of 128 double pre‐
5108                                cision floating point values
5109              doublev64add      addition  of a vector of 64 double pre‐
5110                                cision floating point values
5111              doublev32add      addition of a vector of 32 double  pre‐
5112                                cision floating point values
5113              doublev16add      addition  of a vector of 16 double pre‐
5114                                cision floating point values
5115              doublev8add       addition of a vector of 8 double preci‐
5116                                sion floating point values
5117              doublev128mul     multiplication  of a vector of 128 dou‐
5118                                ble precision floating point values
5119              doublev64mul      multiplication of a vector of 64 double
5120                                precision floating point values
5121              doublev32mul      multiplication of a vector of 32 double
5122                                precision floating point values
5123              doublev16mul      multiplication of a vector of 16 double
5124                                precision floating point values
5125              doublev8mul       multiplication  of a vector of 8 double
5126                                precision floating point values
5127              doublev128div     division of a vector of 128 double pre‐
5128                                cision floating point values
5129              doublev64div      division  of a vector of 64 double pre‐
5130                                cision floating point values
5131              doublev32div      division of a vector of 32 double  pre‐
5132                                cision floating point values
5133              doublev16div      division  of a vector of 16 double pre‐
5134                                cision floating point values
5135              doublev8div       division of a vector of 8 double preci‐
5136                                sion floating point values
5137
5138       --vecmath N
5139              start N workers that perform various unsigned integer math oper‐
5140              ations on various 128 bit vectors. A mix of vector  math  opera‐
5141              tions  are  performed on the following vectors: 16 × 8 bits, 8 ×
5142              16 bits, 4 × 32 bits, 2 × 64 bits. The metrics produced by  this
5143              mix depend on the processor architecture and the vector math op‐
5144              timisations produced by the compiler.
5145
5146       --vecmath-ops N
5147              stop after N bogo vector integer math operations.
5148
5149       --vecshuf N
5150              start N workers that shuffle data on  various  64  byte  vectors
5151              comprised  of  8,  16, 32, 64 and 128 bit unsigned integers. The
5152              integers are shuffled around the vector with  4  shuffle  opera‐
5153              tions  per  loop,  65536 loops make up one bogo-op of shuffling.
5154              The data shuffling rates and shuffle operation rates are  logged
5155              when  using the -v option.  This stressor exercises vector load,
5156              shuffle/permute, packing/unpacking and store operations.
5157
5158       --vecshuf-ops N
              stop after N bogo vector shuffle ops. One bogo-op is
              equivalent to 4 × 65536 vector shuffle operations on 64
              bytes of vector
5161              data.
5162
5163       --vecshuf-method method
5164              specify a vector shuffling stress method. By  default,  all  the
5165              stress methods are exercised sequentially, however one can spec‐
5166              ify just one method to be used if required.
5167
5168              Method        Description
5169              all           iterate through all  of  the  following
5170                            vector methods
5171              u8x64         shuffle  a  vector of 64 unsigned 8 bit
5172                            integers
5173              u16x32        shuffle a vector of 32 unsigned 16  bit
5174                            integers
5175              u32x16        shuffle  a vector of 16 unsigned 32 bit
5176                            integers
5177              u64x8         shuffle a vector of 8 unsigned  64  bit
5178                            integers
5179              u128x4        shuffle  a vector of 4 unsigned 128 bit
5180                            integers (when supported)
5181
5182       --vecwide N
5183              start N workers that perform various 8 bit  math  operations  on
5184              vectors of 4, 8, 16, 32, 64, 128, 256, 512, 1024 and 2048 bytes.
5185              With the -v option the relative compute performance vs  the  ex‐
5186              pected  compute performance based on total run time is shown for
5187              the first vecwide worker. The vecwide stressor exercises various
5188              processor vector instruction mixes and how well the compiler can
5189              map the vector operations to the target instruction set.
5190
5191       --vecwide-ops N
5192              stop after N bogo vector operations (2048 iterations of a mix of
5193              vector instruction operations).
5194
5195       --verity N
              start N workers that exercise read-only file based authenticity
5197              protection using  the  verity  ioctls  FS_IOC_ENABLE_VERITY  and
5198              FS_IOC_MEASURE_VERITY.   This  requires file systems with verity
5199              support (currently ext4 and f2fs on Linux) with the verity  fea‐
              ture enabled. The test attempts to create a small file with
5201              multiple small extents and enables verity on the file and  veri‐
5202              fies  it.  It  also checks to see if the file has verity enabled
5203              with the FS_VERITY_FL bit set on the file flags.
5204
5205       --verity-ops N
5206              stop the verity workers after  N  file  create,  enable  verity,
5207              check verity and unlink cycles.
5208
5209       --vfork N
5210              start  N  workers continually vforking children that immediately
5211              exit.
5212
5213       --vfork-ops N
5214              stop vfork stress workers after N bogo operations.
5215
5216       --vfork-max P
5217              create P processes and then wait for them to exit per iteration.
5218              The  default is just 1; higher values will create many temporary
5219              zombie processes that are waiting to be reaped. One  can  poten‐
5220              tially   fill  up  the  process  table  using  high  values  for
5221              --vfork-max and --vfork.
5222
5223       --vfork-vm
5224              deprecated since stress-ng V0.14.03
5225
5226       --vforkmany N
5227              start N workers that spawn off a chain of vfork  children  until
5228              the  process  table  fills  up  and/or  vfork  fails.  vfork can
5229              rapidly create child processes and the  parent  process  has  to
5230              wait until the child dies, so this stressor rapidly fills up the
5231              process table.
5232
5233       --vforkmany-ops N
5234              stop vforkmany stressors after N vforks have been made.
5235
5236       --vforkmany-vm
5237              enable detrimental performance virtual memory advice using  mad‐
5238              vise  on  all  pages of the vforked process. Where possible this
              will try to set every page in the new process using madvise
5240              MADV_MERGEABLE,  MADV_WILLNEED,  MADV_HUGEPAGE  and  MADV_RANDOM
5241              flags. Linux only.
5242
5243       -m N, --vm N
5244              start N workers continuously calling mmap(2)/munmap(2) and writ‐
5245              ing to the allocated memory. Note that this can cause systems to
5246              trip the kernel OOM killer on Linux systems if not enough physi‐
              cal memory and swap is available.
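
              For example, the following illustrative invocation runs 2 vm
              workers using 75% of available memory with the rand-set
              method (see --vm-method below), keeping the mapping in place:

                     stress-ng --vm 2 --vm-bytes 75% --vm-keep \
                        --vm-method rand-set -t 60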
5248
5249       --vm-bytes N
5250              mmap  N bytes per vm worker, the default is 256MB. One can spec‐
5251              ify the size as % of total  available  memory  or  in  units  of
5252              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
5253
5254       --vm-ops N
5255              stop vm workers after N bogo operations.
5256
5257       --vm-hang N
5258              sleep  N  seconds  before  unmapping memory, the default is zero
5259              seconds.  Specifying 0 will do an infinite wait.
5260
5261       --vm-keep
5262              do not continually unmap and map memory, just keep on re-writing
5263              to it.
5264
5265       --vm-locked
5266              Lock  the  pages  of  the  mapped  region into memory using mmap
5267              MAP_LOCKED (since Linux 2.5.37).  This  is  similar  to  locking
5268              memory as described in mlock(2).
5269
5270       --vm-madvise advice
5271              Specify  the  madvise  'advice' option used on the memory mapped
5272              regions used in the vm stressor.  Non-linux  systems  will  only
5273              have  the  'normal' madvise advice, linux systems support 'dont‐
5274              need', 'hugepage', 'mergeable' , 'nohugepage',  'normal',  'ran‐
5275              dom', 'sequential', 'unmergeable' and 'willneed' advice. If this
5276              option is not used then the default is to  pick  random  madvise
5277              advice for each mmap call. See madvise(2) for more details.
5278
5279       --vm-method method
5280              specify  a  vm stress method. By default, all the stress methods
5281              are exercised sequentially, however one  can  specify  just  one
5282              method  to  be  used if required.  Each of the vm workers have 3
5283              phases:
5284
              1. Initialised. The anonymously mapped memory region is set to a
5286              known pattern.
5287
5288              2.  Exercised.  Memory  is  modified in a known predictable way.
5289              Some vm workers alter memory sequentially,  some  use  small  or
5290              large strides to step along memory.
5291
5292              3.  Checked. The modified memory is checked to see if it matches
5293              the expected result.
5294
5295              The vm methods containing 'prime' in their name have a stride of
              the largest prime less than 2↑64, allowing them to thor‐
              oughly step through memory and touch all locations just
              once while also avoiding touching memory cells next to
              each other. This
5299              strategy exercises the cache and page non-locality.
5300
5301              Since the memory being exercised is virtually mapped then  there
5302              is  no  guarantee  of  touching page addresses in any particular
5303              physical order.  These workers should not be used to  test  that
5304              all  the  system's memory is working correctly either, use tools
5305              such as memtest86 instead.
5306
5307              The vm stress methods are intended to exercise memory in ways to
5308              possibly find memory issues and to try to force thermal errors.
5309
5310              Available vm stress methods are described as follows:
5311
5312              Method       Description
5313              all          iterate  over  all  the  vm  stress  methods as
5314                           listed below.
5315              cache-lines  work through memory  in  64  byte  cache  sized
5316                           steps  writing  a  single  byte per cache line.
5317                           Once the write is complete, the memory is  read
5318                           to verify the values are written correctly.
5319              cache-stripe work  through  memory  in  64  byte cache sized
5320                           chunks, writing in ascending address  order  on
5321                           even  offsets  and  descending address order on
5322                           odd offsets.
5323              flip         sequentially work through memory 8 times,  each
                           time just one bit in memory is flipped (inverted).
5325                           This will effectively invert  each  byte  in  8
5326                           passes.
5327              fwdrev       write  to even addressed bytes in a forward di‐
5328                           rection and odd addressed bytes in reverse  di‐
                           rection. The contents are sanity checked once
5330                           all the addresses have been written to.
5335              galpat-0     galloping pattern zeros. This sets all bits  to
5336                           0  and  flips just 1 in 4096 bits to 1. It then
5337                           checks to see if the 1s are pulled down to 0 by
                           their neighbours or if the neighbours have been
5339                           pulled up to 1.
5340              galpat-1     galloping pattern ones. This sets all bits to 1
5341                           and  flips  just  1  in 4096 bits to 0. It then
5342                           checks to see if the 0s are pulled up to  1  by
                           their neighbours or if the neighbours have been
5344                           pulled down to 0.
5345              gray         fill the  memory  with  sequential  gray  codes
5346                           (these  only change 1 bit at a time between ad‐
5347                           jacent bytes) and then check if  they  are  set
5348                           correctly.
5349              grayflip     fill  memory  with  adjacent bytes of gray code
5350                           and inverted gray code pairs to change as  many
5351                           bits at a time between adjacent bytes and check
5352                           if these are set correctly.
5353              incdec       work sequentially  through  memory  twice,  the
5354                           first  pass  increments each byte by a specific
5355                           value and the second pass decrements each  byte
5356                           back  to  the  original start value. The incre‐
5357                           ment/decrement value changes on each invocation
5358                           of the stressor.
5359              inc-nybble   initialise  memory to a set value (that changes
5360                           on each invocation of the  stressor)  and  then
5361                           sequentially  work through each byte increment‐
5362                           ing the bottom 4 bits by 1 and the top  4  bits
5363                           by 15.
5364              rand-set     sequentially  work  through  memory  in  64 bit
5365                           chunks setting bytes in the chunk to the same 8
5366                           bit  random value.  The random value changes on
5367                           each chunk.  Check that  the  values  have  not
5368                           changed.
5369              rand-sum     sequentially  set  all  memory to random values
5370                           and then summate the number of bits  that  have
5371                           changed from the original set values.
5372              read64       sequentially  read  memory  using  32  × 64 bit
5373                           reads per bogo loop. Each loop equates  to  one
5374                           bogo  operation.   This  exercises  raw  memory
5375                           reads.
5376              ror          fill memory with a random pattern and then  se‐
5377                           quentially  rotate  64  bits of memory right by
5378                           one  bit,  then  check   the   final   load/ro‐
5379                           tate/stored values.
5380              swap         fill  memory in 64 byte chunks with random pat‐
                           terns. Then swap each 64 byte chunk with a randomly
5382                           chosen  chunk. Finally, reverse the swap to put
5383                           the chunks back to  their  original  place  and
5384                           check  if  the  data is correct. This exercises
5385                           adjacent and random memory load/stores.
              move-inv     sequentially fill memory 64 bits at a
5387                           time  with random values, and then check if the
5388                           memory is set  correctly.   Next,  sequentially
5389                           invert  each  64 bit pattern and again check if
5390                           the memory is set as expected.
5391              modulo-x     fill memory over 23 iterations. Each  iteration
5392                           starts one byte further along from the start of
5393                           the memory and steps along in 23 byte  strides.
5394                           In each stride, the first byte is set to a ran‐
5395                           dom pattern and all other bytes are set to  the
                           inverse. Then it checks to see if the first byte
5397                           contains the expected random pattern. This  ex‐
5398                           ercises  cache store/reads as well as seeing if
5399                           neighbouring cells influence each other.
5400              mscan        fill each bit in each byte with 1s  then  check
5401                           these  are set, fill each bit in each byte with
5402                           0s and check these are clear.
5403              prime-0      iterate 8 times by stepping through  memory  in
5404                           very large prime strides clearing just one  bit
5405                           at a time in every byte. Then check to  see  if
5406                           all bits are set to zero.
5407              prime-1      iterate  8  times by stepping through memory in
5408                           very large prime strides setting just one bit at
5409                           a  time in every byte. Then check to see if all
5410                           bits are set to one.
5411              prime-gray-0 first step through memory in very  large  prime
5412                           strides clearing just one bit (based on a  gray
5413                           code) in every  byte.  Next,  repeat  this  but
5414                           clear  the  other  7 bits. Then check to see if
5415                           all bits are set to zero.
5416              prime-gray-1 first step through memory in very  large  prime
5417                           strides setting just one bit (based  on  a  gray
5418                           code) in every byte. Next, repeat this but  set
5419                           the other 7 bits. Then check to see if all bits
5420                           are set to one.
5422              rowhammer    try  to  force  memory  corruption  using   the
5423                           rowhammer  memory stressor. This fetches two 32
5424                           bit integers from memory  and  forces  a  cache
5425                           flush on the two addresses multiple times. This
5426                           has been known to force bit  flipping  on  some
5427                           hardware,  especially with lower frequency mem‐
5428                           ory refresh cycles.
5429              walk-0d      for each byte in memory, walk through each data
5430                           line  setting  them  to low (and the others are
5431                           set high) and check that the written  value  is
5432                           as  expected. This checks if any data lines are
5433                           stuck.
5434              walk-1d      for each byte in memory, walk through each data
5435                           line  setting  them to high (and the others are
5436                           set low) and check that the written value is as
5437                           expected.  This  checks  if  any data lines are
5438                           stuck.
5439              walk-0a      in the given memory  mapping,  work  through  a
5440                           range  of  specially  chosen  addresses working
5441                           through address lines to  see  if  any  address
5442                           lines are stuck low. This works best with phys‐
5443                           ical  memory  addressing,  however,  exercising
5444                           these virtual addresses has some value too.
5445              walk-1a      in  the  given  memory  mapping, work through a
5446                           range of  specially  chosen  addresses  working
5447                           through  address  lines  to  see if any address
5448                           lines are stuck  high.  This  works  best  with
5449                           physical memory addressing, however, exercising
5450                           these virtual addresses has some value too.
5451              write64      sequentially write to memory using 32 × 64  bit
5452                           writes  per bogo loop. Each loop equates to one
5453                           bogo  operation.   This  exercises  raw  memory
5454                           writes.    Note  that  memory  writes  are  not
5455                           checked at the end of each test iteration.
5456              write64nt    sequentially write to memory using 32 × 64  bit
5457                           non-temporal  writes  per bogo loop.  Each loop
5458                           equates to one bogo operation.  This  exercises
5459                           cacheless  raw memory writes and is only avail‐
5460                           able on x86 sse2 capable systems built with gcc
5461                           and  clang  compilers.  Note that memory writes
5462                           are not checked at the end of each test  itera‐
5463                           tion.
5464              write1024v   sequentially write to memory using 1 × 1024 bit
5465                           vector write per bogo loop (only  available  if
5466                           the compiler supports vector types).  Each loop
5467                           equates to one bogo operation.  This  exercises
5468                           raw memory writes.  Note that memory writes are
5469                           not checked at the end of each test iteration.
5470              wrrd64128    write to memory in 128 bit  chunks  using  non-
5471                           temporal  writes  (bypassing  the cache).  Each
5472                           chunk is written 4 times to hammer the  memory.
5473                           Then  check to see if the data is correct using
5474                           non-temporal reads if  they  are  available  or
5475                           normal memory reads if not. Only available with
5476                           processors that provide  non-temporal  128  bit
5477                           writes.
5478              zero-one     set  all  memory bits to zero and then check if
5479                           any bits are not zero. Next, set all the memory
5480                           bits to one and check if any bits are not one.
5481
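                  Assuming the --vm-method option documented earlier in this
                  manual is used to select one of the methods above, a single
                  method such as rowhammer might be exercised with an
                  invocation along the following lines:

                         stress-ng --vm 1 --vm-method rowhammer --vm-bytes 1G -t 60s
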
5482       --vm-populate
5483              populate  (prefault)  page  tables for the memory mappings; this
5484              can stress swapping. Only  available  on  systems  that  support
5485              MAP_POPULATE (since Linux 2.5.46).
5486
5487       --vm-addr N
5488              start  N  workers  that exercise virtual memory addressing using
5489              various methods to walk through a memory mapped  address  range.
5490              This will exercise mapped private addresses from 8MB to 64MB per
5491              worker and try to generate cache and TLB inefficient  addressing
5492              patterns. Each method will set the memory to a random pattern in
5493              a write phase and then sanity check this in a read phase.
5494
5495       --vm-addr-ops N
5496              stop N workers after N bogo addressing passes.
5497
5498       --vm-addr-method method
5499              specify a vm address stress method. By default, all  the  stress
5500              methods are exercised sequentially, however one can specify just
5501              one method to be used if required.
5502
5503              Available vm address stress methods are described as follows:
5504
5505              Method   Description
5506              all      iterate over all the vm address stress  methods
5507                       as listed below.
5508
5509              pwr2     work  through memory addresses in steps of pow‐
5510                       ers of two.
5511              pwr2inv  like pwr2, but with all  the  relevant  address
5512                       bits inverted.
5513              gray     work  through  memory with gray coded addresses
5514                       so that each change of address just  changes  1
5515                       bit compared to the previous address.
5516              grayinv  like gray, but with all the  relevant  address
5517                       bits inverted, hence all bits change apart from
5518                       1 in the address range.
5519              rev      work through the address range with the bits in
5520                       the address range reversed.
5521              revinv   like rev, but with  all  the  relevant  address
5522                       bits inverted.
5523              inc      work through the address range forwards sequen‐
5524                       tially, byte by byte.
5525              incinv   like inc, but with  all  the  relevant  address
5526                       bits inverted.
5527              dec      work  through  the  address range backwards se‐
5528                       quentially, byte by byte.
5529              decinv   like dec, but with  all  the  relevant  address
5530                       bits inverted.
5531
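                  For instance, the gray coded addressing method alone might
                  be selected with an invocation along these lines:

                         stress-ng --vm-addr 1 --vm-addr-method gray --vm-addr-ops 10000
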
5532       --vm-rw N
5533              start  N workers that transfer memory to/from a parent/child us‐
5534              ing process_vm_writev(2) and process_vm_readv(2). This  feature
5535              is only supported on Linux. Memory transfers are  only  verified
5536              if the --verify option is enabled.
5537
5538       --vm-rw-ops N
5539              stop vm-rw workers after N memory read/writes.
5540
5541       --vm-rw-bytes N
5542              mmap N bytes per vm-rw worker, the  default  is  16MB.  One  can
5543              specify  the  size as % of total available memory or in units of
5544              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
5545
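                  A possible verified run transferring 64MB per worker might
                  look like:

                         stress-ng --vm-rw 2 --vm-rw-bytes 64m --verify -t 1m
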
5546       --vm-segv N
5547              start N workers that create a child process that unmaps its  ad‐
5548              dress space causing a SIGSEGV on return from the unmap.
5549
5550       --vm-segv-ops N
5551              stop after N bogo vm-segv SIGSEGV faults.
5552
5553       --vm-splice N
5554              start N workers that move data from memory to /dev/null through
5555              a pipe without any copying between kernel address space and user
5556              address space using vmsplice(2) and splice(2). This is only
5557              available for Linux.
5558
5559       --vm-splice-ops N
5560              stop after N bogo vm-splice operations.
5561
5562       --vm-splice-bytes N
5563              transfer N bytes per vmsplice call, the default is 64K. One  can
5564              specify  the  size as % of total available memory or in units of
5565              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
5566
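                  For example, a short run splicing 1MB per vmsplice call
                  might be invoked along these lines:

                         stress-ng --vm-splice 4 --vm-splice-bytes 1m -t 30s
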
5567       --wait N
5568              start N workers that spawn off two  children;  one  spins  in  a
5569              pause(2)  loop,  the  other  continually stops and continues the
5570              first. The controlling process waits on the first  child  to  be
5571              resumed   by  the  delivery  of  SIGCONT  using  waitpid(2)  and
5572              waitid(2).
5573
5574       --wait-ops N
5575              stop after N bogo wait operations.
5576
5577       --watchdog N
5578              start N workers that exercise the /dev/watchdog  watchdog  in‐
5579              terface by opening it, performing  various  watchdog  specific
5580              ioctl(2) commands on the device and closing it. Before closing,
5581              the special watchdog magic close message is written to the  de‐
5582              vice to try and force it to never trip a watchdog reboot  after
5583              the stressor has been run.  Note that this stressor needs to be
5584              run as root with the --pathological option and is only available
5585              on Linux.
5586
5587       --watchdog-ops N
5588              stop after N bogo operations on the watchdog device.
5589
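                  For example, on a Linux system with a /dev/watchdog device a
                  short run might be invoked as root (e.g. via sudo); note
                  that if the watchdog cannot be disarmed by the magic close
                  message a reboot may still be triggered:

                         sudo stress-ng --watchdog 1 --watchdog-ops 100 --pathological
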
5590       --wcs N
5591              start N workers that exercise various libc wide character string
5592              functions on random strings.
5593
5594       --wcs-method wcsfunc
5595              select a specific libc wide character string function to stress.
5596              Available  string  functions to stress are: all, wcscasecmp, wc‐
5597              scat, wcschr, wcscoll, wcscmp, wcscpy, wcslen, wcsncasecmp,  wc‐
5598              sncat,  wcsncmp,  wcsrchr  and wcsxfrm.  The 'all' method is the
5599              default and will exercise all the string methods.
5600
5601       --wcs-ops N
5602              stop after N bogo wide character string operations.
5603
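                  For instance, just the wcscpy function might be stressed
                  with an invocation along these lines:

                         stress-ng --wcs 4 --wcs-method wcscpy --wcs-ops 1000000
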
5604       --x86syscall N
5605              start N workers that repeatedly exercise the x86-64 syscall  in‐
5606              struction  to  call  the  getcpu(2), gettimeofday(2) and time(2)
5607              system calls using the Linux vsyscall handler. Only for Linux.
5608
5609       --x86syscall-ops N
5610              stop after N x86syscall system calls.
5611
5612       --x86syscall-func F
5613              Instead of exercising the 3 syscall system calls, just call  the
5614              syscall  function  F. The function F must be one of getcpu, get‐
5615              timeofday and time.
5616
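                  A possible single-function run on an x86-64 Linux system
                  might look like:

                         stress-ng --x86syscall 2 --x86syscall-func gettimeofday -t 30s
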
5617       --xattr N
5618              start N workers that create, update and delete  batches  of  ex‐
5619              tended attributes on a file.
5620
5621       --xattr-ops N
5622              stop after N bogo extended attribute operations.
5623
5624       -y N, --yield N
5625              start  N workers that call sched_yield(2). This stressor ensures
5626              that at least 2 child processes per CPU exercise  sched_yield(2)
5627              no  matter  how many workers are specified, thus always ensuring
5628              rapid context switching.
5629
5630       --yield-ops N
5631              stop yield stress workers after  N  sched_yield(2)  bogo  opera‐
5632              tions.
5633
5634       --zero N
5635              start N workers reading /dev/zero.
5636
5637       --zero-ops N
5638              stop zero stress workers after N /dev/zero bogo read operations.
5639
5640       --zlib N
5641              start  N workers compressing and decompressing random data using
5642              zlib. Each worker has two processes, one that compresses  random
5643              data and pipes it to another process that decompresses the data.
5644              This stressor exercises CPU, cache and memory.
5645
5646       --zlib-ops N
5647              stop after N bogo compression operations, each bogo  compression
5648              operation  is a compression of 64K of random data at the highest
5649              compression level.
5650
5651       --zlib-level L
5652              specify the compression level (0..9), where 0 = no  compression,
5653              1 = fastest compression and 9 = best compression.
5654
5655       --zlib-method method
5656              specify the type of random data to send to the zlib library.  By
5657              default, the data stream is created from a random  selection  of
5658              the  different data generation processes.  However one can spec‐
5659              ify just one method to be used if required.  Available zlib data
5660              generation methods are described as follows:
5661
5662              Method      Description
5663              00ff        randomly distributed 0x00 and 0xFF values.
5664              ascii01     randomly distributed ASCII 0 and 1 characters.
5665              asciidigits randomly  distributed ASCII digits in the range
5666                          of 0 to 9.
5667              bcd         packed binary coded decimals, 0..99 packed into
5668                          2 4-bit nybbles.
5669              binary      32 bit random numbers.
5670              brown       8  bit brown noise (Brownian motion/Random Walk
5671                          noise).
5672              double      double precision floating  point  numbers  from
5673                          sin(θ).
5674              fixed       data stream is repeated 0x04030201.
5675              gcr         random values as 4 × 4 bit data turned into 4 ×
5676                          5 bit group  coded  recording  (GCR)  patterns.
5677                          Each  5  bit  GCR  value starts or ends with at
5678                          most one zero  bit  so  that  concatenated  GCR
5679                          codes have no more than two zero bits in a row.
5680              gray        16  bit gray codes generated from an increment‐
5681                          ing counter.
5682              latin       Random latin sentences from a sample  of  Lorem
5683                          Ipsum text.
5685              lehmer      Fast random values  generated  with  Lehmer's
5686                          generator using a 128 bit multiply.
5687              lfsr32      Values generated from a 32  bit  Galois  linear
5688                          feedback  shift  register  using the polynomial
5689                          x↑32 + x↑31 + x↑29 + x + 1.  This  generates  a
5690                          ring  of   2↑32  -  1 unique values (all 32 bit
5691                          values except for 0).
5692              logmap      Values generated from the  logistic  map,  i.e.
5693                          Χn+1 = r × Χn × (1 - Χn) with r greater than  ≈
5694                          3.56994567 to produce chaotic data. The  values
5695                          are  scaled  by a large arbitrary value and the
5696                          lower 8 bits of this value are compressed.
5697              lrand48     Uniformly distributed pseudo-random 32 bit val‐
5698                          ues generated from lrand48(3).
5699              morse       Morse  code  generated  from  random latin sen‐
5700                          tences from a sample of Lorem Ipsum text.
5701              nybble      randomly distributed bytes in the range of 0x00
5702                          to 0x0f.
5703              objcode     object  code selected from a random start point
5704                          in the stress-ng text segment.
5705              parity      7 bit binary data with 1 parity bit.
5706              pink        pink noise in the range 0..255 generated  using
5707                          the Gardner method with the McCartney selection
5708                          tree optimization.  Pink  noise  is  where  the
5709                          power  spectral  density  is  inversely propor‐
5710                          tional to the frequency of the signal and hence
5711                          is slightly compressible.
5712              random      segments of the data stream are created by ran‐
5713                          domly calling  the  different  data  generation
5714                          methods.
5715              rarely1     data that has a single 1 in every 32 bits, ran‐
5716                          domly located.
5717              rarely0     data that has a single 0 in every 32 bits, ran‐
5718                          domly located.
5719              rdrand      x86-64  only, generate random data using rdrand
5720                          instruction.
5721              ror32       generate a 32 bit random value, rotate it right
5722                          0  to  7 places and store the rotated value for
5723                          each of the rotations.
5724              text        random ASCII text.
5725              utf8        random 8 bit data encoded to UTF-8.
5726              zero        all zeros, compresses very easily.
5727
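                  For instance, compressible ASCII text might be compressed at
                  the best compression level with an invocation along these
                  lines:

                         stress-ng --zlib 2 --zlib-method text --zlib-level 9 -t 1m
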
5728       --zlib-window-bits W
5729              specify the window bits used to set the history  buffer  size.
5730              The  value  is specified as the base two logarithm of the buffer
5731              size (e.g. value 9 is 2↑9 = 512 bytes). Default is 15.
5732
5733              Value      Format
5734              -8 to -15  raw deflate format.
5735               8 to 15   zlib format.
5736              24 to 31   gzip format.
5737              40 to 47   inflate auto format detection using zlib deflate format.
5738
5739       --zlib-mem-level L
5740              specify the reserved compression state memory for zlib.  Default
5741              is 8.
5742
5743              Value
5744                1    minimum memory usage.
5745                9    maximum memory usage.
5746
5747       --zlib-strategy S
5748              specifies  the strategy to use when deflating data. This is used
5749              to tune the compression algorithm. Default is 0.
5750
5751              Value
5752                0    used for normal data (Z_DEFAULT_STRATEGY).
5753                1    for data generated by a filter or predictor (Z_FILTERED).
5754                2    forces Huffman encoding (Z_HUFFMAN_ONLY).
5755                3    limits match distances to one, i.e. run-length encoding (Z_RLE).
5756                4    prevents dynamic Huffman codes (Z_FIXED).
5757
5758       --zlib-stream-bytes S
5759              specify the number of bytes to deflate until  deflate  should
5760              finish the block and return with Z_STREAM_END. One can specify
5761              the size in units of Bytes, KBytes, MBytes and GBytes using the
5762              suffix b, k, m or g.  Default is 0, which creates  an  endless
5763              stream until the stressor ends.
5764
5765              Value
5766                0    creates an endless deflate stream until the stressor stops.
5767                n    creates a stream of n bytes over and over again.
5768                     Each block will be closed with Z_STREAM_END.
5769
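                  The deflate tuning options above can be combined; a possible
                  run producing gzip format output with Huffman-only encoding
                  in repeated 1MB streams might look like:

                         stress-ng --zlib 1 --zlib-window-bits 31 --zlib-strategy 2 \
                                --zlib-stream-bytes 1m -t 30s
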
5770       --zombie N
5771              start N workers that create zombie processes. This will  rapidly
5772              try to create a default of 8192 child processes that immediately
5773              die and wait in a zombie state until they are reaped.  Once  the
5774              maximum  number  of  processes is reached (or fork fails because
5775              one has reached the maximum allowed number of children) the old‐
5776              est  child  is  reaped  and  a  new process is then created in a
5777              first-in first-out manner, and then repeated.
5778
5779       --zombie-ops N
5780              stop zombie stress workers after N bogo zombie operations.
5781
5782       --zombie-max N
5783              try to create as many as N zombie processes.  This  may  not  be
5784              reached if the system limit is less than N.
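
                  For instance, the number of zombie children might be capped
                  at 1024 with an invocation along these lines:

                         stress-ng --zombie 1 --zombie-max 1024 --zombie-ops 100000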
5785

EXAMPLES

5787       stress-ng --vm 8 --vm-bytes 80% -t 1h
5788
5789              run  8  virtual  memory  stressors  that combined use 80% of the
5790              available memory for 1 hour. Thus each stressor uses 10% of  the
5791              available memory.
5792
5793       stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
5794
5795              runs  for  60 seconds with 4 cpu stressors, 2 io stressors and 1
5796              vm stressor using 1GB of virtual memory.
5797
5798       stress-ng --iomix 2 --iomix-bytes 10% -t 10m
5799
5800              runs 2 instances of the mixed I/O stressors using a total of 10%
5801              of the available file system space for 10 minutes. Each stressor
5802              will use 5% of the available file system space.
5803
5804       stress-ng  --cyclic  1  --cyclic-dist  2500  --cyclic-method   clock_ns
5805       --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
5806
5807              measures  real  time  scheduling  latencies  created  by the hdd
5808              stressor. This uses the high resolution nanosecond clock to mea‐
5809              sure  latencies  during sleeps of 10,000 nanoseconds. At the end
5810              of 1 minute of stressing, the latency distribution with 2500  ns
5811              intervals  will  be  displayed.  NOTE: this must be run with the
5812              CAP_SYS_NICE capability to enable the real  time  scheduling  to
5813              get accurate measurements.
5814
5815       stress-ng --cpu 8 --cpu-ops 800000
5816
5817              runs 8 cpu stressors and stops after 800000 bogo operations.
5818
5819       stress-ng --sequential 2 --timeout 2m --metrics
5820
5821              run  2  simultaneous instances of all the stressors sequentially
5822              one by one, each for 2 minutes and  summarise  with  performance
5823              metrics at the end.
5824
5825       stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
5826
5827              run  4  FFT  cpu stressors, stop after 10000 bogo operations and
5828              produce a summary just for the FFT results.
5829
5830       stress-ng --cpu -1 --cpu-method all -t 1h --cpu-load 90
5831
5832              run cpu stressors on all online CPUs  working  through  all  the
5833              available CPU stressors for 1 hour, loading the CPUs at 90% load
5834              capacity.
5835
5836       stress-ng --cpu 0 --cpu-method all -t 20m
5837
5838              run cpu stressors on all configured CPUs working through all the
5839              available CPU stressors for 20 minutes.
5840
5841       stress-ng --all 4 --timeout 5m
5842
5843              run 4 instances of all the stressors for 5 minutes.
5844
5845       stress-ng --random 64
5846
5847              run 64 stressors that are randomly chosen from all the available
5848              stressors.
5849
5850       stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
5851
5852              run 64 instances of all the different cpu stressors  and  verify
5853              that the computations are correct for 10 minutes with a bogo op‐
5854              erations summary at the end.
5855
5856       stress-ng --sequential -1 -t 10m
5857
5858              run all the stressors one by one for 10 minutes, with the number
5859              of  instances  of  each  stressor  matching the number of online
5860              CPUs.
5861
5862       stress-ng --sequential 8 --class io -t 5m --times
5863
5864              run all the stressors in the io class one by one for  5  minutes
5865              each, with 8 instances of each stressor running concurrently and
5866              show overall time utilisation statistics at the end of the run.
5867
5868       stress-ng --all -1 --maximize --aggressive
5869
5870              run all the stressors (1 instance of each per online CPU) simul‐
5871              taneously,  maximize  the  settings  (memory sizes, file alloca‐
5872              tions, etc.) and select the most demanding/aggressive options.
5873
5874       stress-ng --random 32 -x numa,hdd,key
5875
5876              run 32 randomly selected stressors and exclude the numa, hdd and
5877              key stressors.
5878
5879       stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
5880
5881              run 4 instances of the VM stressors one after the  other,  ex‐
5882              cluding the bigheap, brk and stack stressors.
5883
5884       stress-ng --taskset 0,2-3 --cpu 3
5885
5886              run 3 instances of the CPU stressor and pin them to  CPUs  0,  2
5887              and 3.
5888

EXIT STATUS

5890         Status     Description
5891           0        Success.
5892           1        Error; incorrect user options or a fatal resource issue in
5893                    the stress-ng stressor harness (for example, out  of  mem‐
5894                    ory).
5895           2        One or more stressors failed.
5896           3        One or more stressors failed to initialise because of lack
5897                    of resources, for example ENOMEM (no memory),  ENOSPC  (no
5898                    space on file system) or a missing or unimplemented system
5899                    call.
5900           4        One or more stressors were not implemented on  a  specific
5901                    architecture or operating system.
5902           5        A stressor has been killed by an unexpected signal.
5903           6        A  stressor  exited  by exit(2) which was not expected and
5904                    timing metrics could not be gathered.
5905           7        The bogo ops metrics may be untrustworthy. This is  most
5906                    likely  to  occur  when a stress test is terminated during
5907                    the update of a bogo-ops counter such as when it has  been
5908                    OOM killed. A less likely reason is that the counter ready
5909                    indicator has been corrupted.
5910

BUGS

5912       File bug reports at:
5913         https://github.com/ColinIanKing/stress-ng/issues
5914

SEE ALSO

5916       cpuburn(1), perf(1), stress(1), taskset(1)
5917

AUTHOR

5919       stress-ng was written by Colin Ian King <colin.i.king@gmail.com> and is
5920       a  clean  room  re-implementation  and extension of the original stress
5921       tool by Amos  Waterland.  Thanks  also  for  contributions  from  Abdul
5922       Haleem,  Aboorva  Devarajan,  Adrian  Martin, Adrian Ratiu, André Wild,
5923       Aleksandar N. Kostadinov, Alexander Kanavin,  Alexandru  Ardelean,  Al‐
5924       fonso Sánchez-Beato, Arjan van de Ven, Baruch Siach, Bryan
5925       W. Lewis, Camille Constans, Carlos Santo, Christian Ehrhardt,  Christo‐
5926       pher  Brown,  Chunyu  Hu,  Danilo  Krummrich,  David  Turner, Dominik B
5927       Czarnota, Dorinda Bassey, Eric Lin, Fabien  Malfoy,  Fabrice  Fontaine,
5928       Francis  Laniel,  Iyán  Méndez  Veiga, Helmut Grohne, James Hunt, James
5929       Wang, Jan Luebbe, Jianshen Liu, Jim Rowan, John Kacur, Joseph DeVincen‐
5930       tis,  Jules Maselbas, Khalid Elmously, Khem Raj, Luca Pizzamiglio, Luis
5931       Henriques, Manoj Iyer, Matthew Tippett,  Mauricio  Faria  de  Oliveira,
5932       Maxime  Chevallier, Maya Rashish, Mayuresh Chitale, Mike Koreneff, Paul
5933       Menzel, Piyush Goyal, Ralf Ramsauer, Rob Colclaser, Rosen  Penev,  Sid‐
5934       dhesh Poyarekar, Thadeu Lima de Souza Cascardo, Thia Wyrod, Thinh Tran,
5935       Tim Gardner, Tim Gates, Tim  Orling,  Tommi  Rantala,  Witold  Baryluk,
5936       Zhiyi Sun and others.
5937

NOTES

5939       Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
5940       all the stressor processes and ensures temporary files and shared  mem‐
5941       ory segments are removed cleanly.
5942
5943       Sending  a  SIGUSR2 to stress-ng will dump out the current load average
5944       and memory statistics.
5945
5946       Note that the stress-ng cpu, io, vm and hdd tests are different  imple‐
5947       mentations of the original stress tests and hence may produce different
5948       stress characteristics.  stress-ng does  not  support  any  GPU  stress
5949       tests.
5950
5951       The  bogo  operations  metrics may change with each release  because of
5952       bug fixes to the code, new features, compiler optimisations or  changes
5953       in system call performance.
5954
5956       Copyright  ©  2013-2021  Canonical Ltd, Copyright © 2021-2022 Colin Ian
5957       King.
5958       This is free software; see the source for copying conditions.  There is
5959       NO  warranty;  not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
5960       PURPOSE.
5961
5962
5963
5964                                  13 Sep 2022                     STRESS-NG(1)