STRESS-NG(1)                General Commands Manual               STRESS-NG(1)

NAME
       stress-ng - a tool to load and stress a computer system

SYNOPSIS
       stress-ng [OPTION [ARG]] ...

DESCRIPTION
       stress-ng will stress test a computer system in various selectable ways. It was designed to exercise various physical subsystems of a computer as well as the various operating system kernel interfaces. stress-ng also has a wide range of CPU specific stress tests that exercise floating point, integer, bit manipulation and control flow.

       stress-ng was originally intended to make a machine work hard and trip hardware issues such as thermal overruns as well as operating system bugs that only occur when a system is being thrashed hard. Use stress-ng with caution as some of the tests can make a system run hot on poorly designed hardware and also can cause excessive system thrashing which may be difficult to stop.

       stress-ng can also measure test throughput rates; this can be useful to observe performance changes across different operating system releases or types of hardware. However, it has never been intended to be used as a precise benchmark test suite, so do NOT use it in this manner.

       Running stress-ng with root privileges will adjust out of memory settings on Linux systems to make the stressors unkillable in low memory situations, so use this judiciously. With the appropriate privilege, stress-ng can allow the ionice class and ionice levels to be adjusted; again, this should be used with care.

       One can specify the number of processes to invoke per type of stress test; specifying a zero value will select the number of processors available as defined by sysconf(_SC_NPROCESSORS_CONF); if that cannot be determined then the number of online CPUs is used. If the value is less than zero then the number of online CPUs is used.

OPTIONS
       General stress-ng control options:

       --abort
              this option will force all running stressors to abort (terminate) if any other stressor terminates prematurely because of a failure.

       --aggressive
              enables more file, cache and memory aggressive options. This may slow tests down, increase latencies and reduce the number of bogo ops as well as changing the balance of user time vs system time used depending on the type of stressor being used.

       -a N, --all N, --parallel N
              start N instances of all stressors in parallel. If N is less than zero, then the number of CPUs online is used for the number of instances. If N is zero, then the number of configured CPUs in the system is used.

       -b N, --backoff N
              wait N microseconds between the start of each stress worker process. This allows one to ramp up the stress tests over time.

       --change-cpu
              this forces child processes of some stressors to change to a different CPU from the parent on startup. Note that during the execution of the stressor the scheduler may choose to move the parent onto the same CPU as the child. The stressors affected by this option are client/server style stressors, such as the network stressors (sock, sockmany, udp, etc.) or context switching stressors (switch, pipe, etc.).

       --class name
              specify the class of stressors to run. Stressors are classified into one or more of the following classes: cpu, cpu-cache, device, gpu, io, interrupt, filesystem, memory, network, os, pipe, scheduler and vm. Some stressors fall into just one class. For example the 'get' stressor is just in the 'os' class. Other stressors fall into more than one class; for example, the 'lsearch' stressor falls into the 'cpu', 'cpu-cache' and 'memory' classes as it exercises all three. Selecting a specific class will run all the stressors that fall into that class only when run with the --sequential option.

              Specifying a name followed by a question mark (for example --class vm?) will print out all the stressors in that specific class.
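
              An illustrative sketch of class selection (the exact stressor mix listed depends on the build):

              ```shell
              # List all stressors in the 'vm' class
              stress-ng --class vm?

              # Run every vm-class stressor one at a time, 2 instances
              # each, limited by the --timeout per-stressor duration
              stress-ng --class vm --sequential 2 --timeout 30s
              ```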

       --config
              print out the configuration used to build stress-ng.

       -n, --dry-run
              parse options, but do not run stress tests. A no-op.

       --ftrace
              enable kernel function call tracing (Linux only). This will use the kernel debugfs ftrace mechanism to record all the kernel functions used on the system while stress-ng is running. This is only as accurate as the kernel ftrace output, so there may be some variability in the data reported.

       -h, --help
              show help.

       --ignite-cpu
              alter kernel controls to try and maximize the CPU. This requires root privilege to alter various /sys interface controls. Currently this only works for Intel P-State enabled x86 systems on Linux.

       --interrupts
              check for any system management interrupts or error interrupts that occur, for example thermal overruns, machine check exceptions, etc. Note that the interrupts are accounted to all the concurrently running stressors, so the total count across all stressors is over-accounted.

       --ionice-class class
              specify ionice class (only on Linux). Can be idle (default), besteffort, be, realtime, rt.

       --ionice-level level
              specify ionice level (only on Linux). For idle, 0 is the only possible option. For besteffort or realtime, values range from 0 (highest priority) to 7 (lowest priority). See ionice(1) for more details.

       --iostat S
              every S seconds show I/O statistics on the device that stores the stress-ng temporary files. This is either the device of the current working directory or the --temp-path specified path. Currently a Linux only option. The fields output are:

              Column Heading  Explanation
              Inflight        number of I/O requests that have been issued to the device driver but have not yet completed
              Rd K/s          read rate in 1024 bytes per second
              Wr K/s          write rate in 1024 bytes per second
              Dscd K/s        discard rate in 1024 bytes per second
              Rd/s            reads per second
              Wr/s            writes per second
              Dscd/s          discards per second

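              A hedged usage sketch pairing an I/O-heavy stressor with periodic I/O statistics (the hdd stressor and 5-second interval are illustrative choices):

              ```shell
              # One hdd stressor for a minute, reporting I/O statistics
              # on the temporary-file device every 5 seconds (Linux only)
              stress-ng --hdd 1 --iostat 5 --timeout 1m
              ```
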
       --job jobfile
              run stressors using a jobfile. The jobfile is essentially a file containing stress-ng options (without the leading --) with one option per line. Lines may have comments with comment text preceded by the # character. A simple example is as follows:

                      run sequential   # run stressors sequentially
                      verbose          # verbose output
                      metrics-brief    # show metrics at end of run
                      timeout 60s      # stop each stressor after 60 seconds
                      #
                      # vm stressor options:
                      #
                      vm 2             # 2 vm stressors
                      vm-bytes 128M    # 128MB available memory
                      vm-keep          # keep vm mapping
                      vm-populate     # populate memory
                      #
                      # memcpy stressor options:
                      #
                      memcpy 5         # 5 memcpy stressors

              The job file introduces the run command that specifies how to run the stressors:

                      run sequential - run stressors sequentially
                      run parallel   - run stressors together in parallel

              Note that 'run parallel' is the default.
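
              A jobfile such as the one shown above could be saved and invoked as follows (the filename example.job is illustrative):

              ```shell
              # Run stress-ng using the options held in the jobfile
              stress-ng --job example.job
              ```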

       --keep-files
              do not remove files and directories created by the stressors. This can be useful for debugging purposes. Not generally recommended as it can fill up a file system.

       -k, --keep-name
              by default, stress-ng will attempt to change the name of the stress processes according to their functionality; this option disables this and keeps the process names the same as the name of the parent process, that is, stress-ng.

       --klog-check
              check the kernel log for kernel error and warning messages and report these as soon as they are detected. Linux only and requires root capability to read the kernel log.

       --ksm  enable kernel samepage merging (Linux only). This is a memory-saving de-duplication feature for merging anonymous (private) pages.

       --log-brief
              by default stress-ng will report the name of the program, the message type and the process id as a prefix to all output. The --log-brief option will output messages without these fields to produce a less verbose output.

       --log-file filename
              write messages to the specified log file.

       --log-lockless
              log messages use a lock to avoid intermingling of blocks of stressor messages; however, this may cause contention when emitting a high rate of logging messages in verbose mode with many stressors running, for example when testing CPU scaling with many processes on many CPUs. This option disables log message locking.

       --maximize
              overrides the default stressor settings and instead sets these to the maximum settings allowed. These defaults can always be overridden by the per stressor settings options if required.

       --max-fd N
              set the maximum limit on file descriptors (value or a % of the system allowed maximum). By default, stress-ng can use all the available file descriptors; this option sets the limit in the range from 10 up to the maximum limit of RLIMIT_NOFILE. One can use a % setting too, e.g. 50% is half the maximum allowed file descriptors. Note that stress-ng will use about 5 of the available file descriptors so take this into consideration when using this setting.

       --mbind list
              set strict NUMA memory allocation based on the list of NUMA nodes provided; page allocations will come from the node with sufficient free memory closest to the specified node(s) where the allocation takes place. This uses the Linux set_mempolicy(2) call using the MPOL_BIND mode. The NUMA nodes to be used are specified by a comma separated list of nodes (0 to N-1). One can specify a range of NUMA nodes using '-', for example: --mbind 0,2-3,6,7-11

       --metrics
              output the number of bogo operations in total performed by the stress processes. Note that these are not a reliable metric of performance or throughput and have not been designed to be used for benchmarking whatsoever. Some stressors have additional metrics that are more useful than bogo-ops, and these are generally more useful for observing how a system behaves when under various kinds of load.

              The following columns of information are output:

              Column Heading             Explanation
              bogo ops                   number of iterations of the stressor during the run. This is a metric of how much overall "work" has been achieved in bogo operations. Do not use this as a reliable measure of throughput for benchmarking.
              real time (secs)           average wall clock duration (in seconds) of the stressor. This is the total wall clock time of all the instances of that particular stressor divided by the number of these stressors being run.
              usr time (secs)            total user time (in seconds) consumed running all the instances of the stressor.
              sys time (secs)            total system time (in seconds) consumed running all the instances of the stressor.
              bogo ops/s (real time)     total bogo operations per second based on wall clock run time. The wall clock time reflects the apparent run time. The more processors one has on a system the more the work load can be distributed onto these and hence the wall clock time will reduce and the bogo ops rate will increase. This is essentially the "apparent" bogo ops rate of the system.
              bogo ops/s (usr+sys time)  total bogo operations per second based on cumulative user and system time. This is the real bogo ops rate of the system taking into consideration the actual execution time of the stressor across all the processors. Generally this will decrease as one adds more concurrent stressors due to contention on cache, memory, execution units, buses and I/O devices.
              CPU used per instance (%)  total percentage of CPU used divided by the number of stressor instances. 100% is 1 full CPU. Some stressors run multiple threads so it is possible to have a figure greater than 100%.
              RSS Max (KB)               resident set size (RSS), the portion of memory (measured in kilobytes) occupied by a process in main memory.

       --metrics-brief
              show a shorter list of stressor metrics (no CPU used per instance).
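
              A brief usage sketch (the cpu stressor and counts are illustrative choices):

              ```shell
              # Run 4 cpu stressors for 30 seconds and report the brief
              # metrics (bogo ops, real/usr/sys time, bogo ops rates)
              stress-ng --cpu 4 --timeout 30s --metrics-brief
              ```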

       --minimize
              overrides the default stressor settings and instead sets these to the minimum settings allowed. These defaults can always be overridden by the per stressor settings options if required.

       --no-madvise
              from version 0.02.26 stress-ng automatically calls madvise(2) with random advise options before each mmap and munmap to stress the vm subsystem a little harder. The --no-madvise option turns this default off.

       --no-oom-adjust
              disable any form of out-of-memory score adjustments, keep the system defaults. Normally stress-ng will adjust the out-of-memory scores on stressors to try to create more memory pressure. This option disables the adjustments.

       --no-rand-seed
              do not seed the stress-ng pseudo-random number generator with a quasi random start seed, but instead seed it with constant values. This forces tests to run each time using the same start conditions which can be useful when one requires reproducible stress tests.

       --oom-avoid
              attempt to avoid out-of-memory conditions that can lead to the Out-of-Memory (OOM) killer terminating stressors. This checks for low memory scenarios and swapping before making memory allocations and hence adds some overhead to the stressors and will slow down stressor allocation speeds.

       --oom-avoid-bytes N
              specify a low memory threshold below which no further memory allocations are made. The parameter can be specified as an absolute number of bytes (e.g. 2M for 2MB) or a percentage of the current free memory, e.g. 5% (the default is 2.5%). This option implicitly enables --oom-avoid. The option allows the system to keep enough free memory to try to avoid the out-of-memory killer terminating processes.

       --oomable
              do not respawn a stressor if it gets killed by the Out-of-Memory (OOM) killer. The default behaviour is to restart a new instance of a stressor if the kernel OOM killer terminates the process. This option disables this default behaviour.

       --page-in
              touch allocated pages that are not in core, forcing them to be paged back in. This is a useful option to force all the allocated pages to be paged in when using the bigheap, mmap and vm stressors. It will severely degrade performance when the memory in the system is less than the allocated buffer sizes. This uses mincore(2) to determine the pages that are not in core and hence need touching to page them back in.

       --pathological
              enable stressors that are known to hang systems. Some stressors can rapidly consume resources that may hang a system, or perform actions that can lock a system up or cause it to reboot. These stressors are not enabled by default; this option enables them, but you probably don't want to do this. You have been warned. This option applies to the stressors: bad-ioctl, bind-mount, cpu-online, mlockmany, oom-pipe, smi, sysinval and watchdog.

       --perf measure processor and system activity using perf events. Linux only and caveat emptor; according to perf_event_open(2): "Always double-check your results! Various generalized events have had wrong values." Note that with Linux 4.7 one needs to have CAP_SYS_ADMIN capabilities for this option to work, or adjust /proc/sys/kernel/perf_event_paranoid to below 2 to use this without CAP_SYS_ADMIN.
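
              A usage sketch; note that lowering perf_event_paranoid (shown here for illustration) requires root and relaxes a kernel security setting, so do this with care:

              ```shell
              # Allow perf event collection without CAP_SYS_ADMIN
              # (requires root; relaxes a security setting)
              echo 1 | sudo tee /proc/sys/kernel/perf_event_paranoid

              # Run 2 cpu stressors and report perf event counts
              stress-ng --cpu 2 --perf --timeout 10s
              ```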

       --permute N
              run all permutations of the selected stressors with N instances of the permuted stressors per run. If N is less than zero, then the number of CPUs online is used for the number of instances. If N is zero, then the number of configured CPUs in the system is used. This will perform multiple runs with all the permutations of the stressors. Use this in conjunction with the --with or --class option to specify the stressors to permute.
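
              For example (the stressor choices are illustrative), the following runs every permutation of three stressors, one instance of each:

              ```shell
              # Run all permutations of the cpu, vm and hdd stressors,
              # 1 instance per stressor, 10 seconds per permutation run
              stress-ng --permute 1 --with cpu,vm,hdd --timeout 10s
              ```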

       -q, --quiet
              do not show any output.

       -r N, --random N
              start N random stress workers. If N is 0, then the number of configured processors is used for N.

       --sched scheduler
              select the named scheduler (only on Linux). To see the list of available schedulers use: stress-ng --sched which

       --sched-prio prio
              select the scheduler priority level (only on Linux). If the scheduler does not support this then the default priority level of 0 is chosen.

       --sched-period period
              select the period parameter for the deadline scheduler (only on Linux). Default value is 0 (in nanoseconds).

       --sched-runtime runtime
              select the runtime parameter for the deadline scheduler (only on Linux). Default value is 99999 (in nanoseconds).

       --sched-deadline deadline
              select the deadline parameter for the deadline scheduler (only on Linux). Default value is 100000 (in nanoseconds).

       --sched-reclaim
              use the CPU bandwidth reclaim feature of the deadline scheduler (only on Linux).

       --seed N
              set the random number generator seed with a 64 bit value. Allows stressors to use the same random number generator sequences on each invocation.
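
              A sketch of a reproducible pair of runs; with the same seed both invocations use identical pseudo-random sequences (bogo-op totals may still differ slightly due to scheduling):

              ```shell
              # Repeatable stress runs: same seed, same pseudo-random decisions
              stress-ng --vm 2 --seed 1234 --timeout 30s --metrics-brief
              stress-ng --vm 2 --seed 1234 --timeout 30s --metrics-brief
              ```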

       --settings
              show the various option settings.

       --sequential N
              sequentially run all the stressors one by one for a default of 60 seconds. The number of instances of each of the individual stressors to be started is N. If N is less than zero, then the number of CPUs online is used for the number of instances. If N is zero, then the number of configured CPUs in the system is used. Use the --timeout option to specify the duration to run each stressor.
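
              For example, a sequential sweep of all stressors might look like this (the instance count and duration are illustrative):

              ```shell
              # Run all stressors one by one, 2 instances each,
              # limiting each stressor to 20 seconds
              stress-ng --sequential 2 --timeout 20s
              ```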

       --skip-silent
              silence messages that report that a stressor has been skipped because it requires features not supported by the system, such as unimplemented system calls, missing resources or processor specific features.

       --smart
              scan the block devices for changes in S.M.A.R.T. statistics (Linux only). This requires root privileges to read the Self-Monitoring, Analysis and Reporting Technology data from all block devices and will report any changes in the statistics. One caveat is that device manufacturers provide different sets of data, the exact meaning of the data can be vague and the data may be inaccurate.

       --sn   use scientific notation (e.g. 2.412e+01) for metrics.

       --status N
              report every N seconds the number of running, exiting, reaped and failed stressors, the number of stressors that received the SIGALRM termination signal as well as the current run duration.

       --stderr
              write messages to stderr. With version 0.15.08 output is written to stdout; previously, due to a historical oversight, output went to stderr. This option allows one to revert to the pre-0.15.08 behaviour.

       --stdout
              all output goes to stdout. This is the new default for version 0.15.08. Use the --stderr option for the original behaviour.

       --stressors
              output the names of the available stressors.

       --syslog
              log output (except for verbose -v messages) to the syslog.

       --taskset list
              set CPU affinity based on the list of CPUs provided; stress-ng is bound to just use these CPUs (Linux only). The CPUs to be used are specified by a comma separated list of CPUs (0 to N-1). One can specify a range of CPUs using '-', for example: --taskset 0,2-3,6,7-11

       --temp-path path
              specify a path for stress-ng temporary directories and temporary files; the default path is the current working directory. This path must have read and write access for the stress-ng stress processes.

       --thermalstat S
              every S seconds show CPU and thermal load statistics. This option shows the average CPU frequency in GHz (average of online CPUs), the minimum CPU frequency, the maximum CPU frequency, load averages (1 minute, 5 minutes and 15 minutes) and available thermal zone temperatures in degrees Centigrade.

       --thrash
              this can only be used when running on Linux and with root privilege. This option starts a background thrasher process that works through all the processes on a system and tries to page in as many pages in the processes as possible. It also periodically drops the page cache, frees reclaimable slab objects and pagecache. This will cause a considerable amount of thrashing of swap on an over-committed system.

       -t T, --timeout T
              run each stress test for at least T seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y. Each stressor will be sent a SIGALRM signal at the timeout time; however, if the stress test is swapped out, in an uninterruptible system call or performing clean up (such as removing hundreds of test files) it may take a while to finally terminate. A 0 timeout will run stress-ng forever with no timeout. The default timeout is 24 hours.
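
              The time-unit suffixes allow equivalent spellings of the same duration, for example:

              ```shell
              # Three equivalent ways of running a vm stressor for two minutes
              stress-ng --vm 1 --timeout 120
              stress-ng --vm 1 --timeout 120s
              stress-ng --vm 1 -t 2m
              ```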

       --times
              show the cumulative user and system times of all the child processes at the end of the stress run. The percentage of utilisation of available CPU time is also calculated from the number of on-line CPUs in the system.

       --timestamp
              add a timestamp in hours, minutes, seconds and hundredths of a second to the log output.

       --timer-slack N
              adjust the per process timer slack to N nanoseconds (Linux only). Increasing the timer slack allows the kernel to coalesce timer events by adding some fuzziness to timer expiration times and hence reduce wakeups. Conversely, decreasing the timer slack will increase wakeups. A value of 0 for the timer-slack will set the system default of 50,000 nanoseconds.

       --tz   collect temperatures from the available thermal zones on the machine (Linux only). Some devices may have one or more thermal zones, whereas others may have none.

       -v, --verbose
              show all debug, warning and normal information output.

       --verify
              verify results when a test is run. This is not available on all tests. This will sanity check the computations or memory contents from a test run and report to stderr any unexpected failures.

       --verifiable
              print the names of stressors that can be verified with the --verify option.

       -V, --version
              show the version of stress-ng, the version of the toolchain used to build stress-ng and system information.

       --vmstat S
              every S seconds show statistics about processes, memory, paging, block I/O, interrupts, context switches, disks and CPU activity. The output is similar to that of the vmstat(8) utility. Not fully supported on various UNIX systems.

       --with list
              specify stressors to run when using the --all, --seq or --permute options. For example, to run 5 instances of the cpu, hash, nop and vm stressors one after another (sequentially) for 1 minute per stressor use:

                      stress-ng --seq 5 --with cpu,hash,nop,vm --timeout 1m

       -x, --exclude list
              specify a list of one or more stressors to exclude (that is, do not run them). This is useful to exclude specific stressors when one selects many stressors to run using the --class, --sequential, --all and --random options. For example, to run the cpu class stressors concurrently and exclude the numa and search stressors:

                      stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch

       -Y, --yaml filename
              output gathered statistics to a YAML formatted file named 'filename'.

       Stressor specific options:

       Access stressor
           --access N
                  start N workers that work through various settings of file mode bits (read, write, execute) for the file owner and check that the user permissions of the file, as reported by access(2) and faccessat(2), are sane.

           --access-ops N
                  stop access workers after N bogo access sanity checks.

       Affinity stressor
           --affinity N
                  start N workers that run 16 processes that rapidly change CPU affinity (only on Linux). Rapidly switching CPU affinity can contribute to poor cache behaviour and a high context switch rate.

           --affinity-delay N
                  delay for N nanoseconds before changing affinity to the next CPU. The delay will spin on CPU scheduling yield operations for N nanoseconds before the process is moved to another CPU. The default is 0 nanoseconds.

           --affinity-ops N
                  stop affinity workers after N bogo affinity operations.

           --affinity-pin
                  pin all the 16 per stressor processes to a CPU. All 16 processes follow the CPU chosen by the main parent stressor, forcing heavy per CPU loading.

           --affinity-rand
                  switch CPU affinity randomly rather than the default of sequentially.

           --affinity-sleep N
                  sleep for N nanoseconds before changing affinity to the next CPU.

       Kernel crypto AF_ALG API stressor
           --af-alg N
                  start N workers that exercise the AF_ALG socket domain by hashing and encrypting various sized random messages. This exercises the available hashes, ciphers, rng and aead crypto engines in the Linux kernel.

           --af-alg-dump
                  dump the internal list representing cryptographic algorithms parsed from the /proc/crypto file to standard output (stdout).

           --af-alg-ops N
                  stop af-alg workers after N AF_ALG messages are hashed.

       Asynchronous I/O stressor (POSIX AIO)
           --aio N
                  start N workers that issue multiple small asynchronous I/O writes and reads on a relatively small temporary file using the POSIX aio interface. This will just hit the file system cache and soak up a lot of user and kernel time in issuing and handling I/O requests. By default, each worker process will handle 16 concurrent I/O requests.

           --aio-ops N
                  stop POSIX asynchronous I/O workers after N bogo asynchronous I/O requests.

           --aio-requests N
                  specify the number of POSIX asynchronous I/O requests each worker should issue; the default is 16; 1 to 4096 are allowed.
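
           A usage sketch (the worker and request counts are illustrative):

           ```shell
           # Four POSIX AIO workers, each keeping 64 asynchronous
           # I/O requests in flight, for 30 seconds
           stress-ng --aio 4 --aio-requests 64 --timeout 30s
           ```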

       Asynchronous I/O stressor (Linux AIO)
           --aiol N
                  start N workers that issue multiple 4K random asynchronous I/O writes using the Linux aio system calls io_setup(2), io_submit(2), io_getevents(2) and io_destroy(2). By default, each worker process will handle 16 concurrent I/O requests.

           --aiol-ops N
                  stop Linux asynchronous I/O workers after N bogo asynchronous I/O requests.

           --aiol-requests N
                  specify the number of Linux asynchronous I/O requests each worker should issue; the default is 16; 1 to 4096 are allowed.

       Alarm stressor
           --alarm N
                  start N workers that exercise alarm(2) with MAXINT, 0 and random alarm and sleep delays that get prematurely interrupted. Before each alarm is scheduled any previous pending alarms are cancelled with zero second alarm calls.

           --alarm-ops N
                  stop after N alarm bogo operations.

       AppArmor stressor
           --apparmor N
                  start N workers that exercise various parts of the AppArmor interface. Currently one needs root permission to run this particular test. Only available on Linux systems with AppArmor support and requires the CAP_MAC_ADMIN capability.

           --apparmor-ops N
                  stop the AppArmor workers after N bogo operations.

       Atomic stressor
           --atomic N
                  start N workers that exercise various GCC __atomic_*() built-in operations on 8, 16, 32 and 64 bit integers that are shared among the N workers. This stressor is only available for builds using GCC 4.7.4 or higher. The stressor forces many front end cache stalls and cache references.

           --atomic-ops N
                  stop the atomic workers after N bogo atomic operations.

       Bad alternative stack stressor
           --bad-altstack N
                  start N workers that create broken alternative signal stacks for SIGSEGV and SIGBUS handling that in turn create secondary SIGSEGV/SIGBUS errors. A variety of randomly selected nefarious methods are used to create the stacks:

                  • Unmapping the alternative signal stack before triggering the signal handling.
                  • Changing the alternative signal stack to just being read only, write only or execute only.
                  • Using a NULL alternative signal stack.
                  • Using the signal handler object as the alternative signal stack.
                  • Unmapping the alternative signal stack during execution of the signal handler.
                  • Using a read-only text segment for the alternative signal stack.
                  • Using an undersized alternative signal stack.
                  • Using the VDSO as an alternative signal stack.
                  • Using an alternative stack mapped onto /dev/zero.
                  • Using an alternative stack mapped to a zero sized temporary file to generate a SIGBUS error.

           --bad-altstack-ops N
                  stop the bad alternative stack stressors after N SIGSEGV bogo operations.

       Bad ioctl stressor
           --bad-ioctl N
                  start N workers that perform a range of illegal bad read ioctls (using _IOR) across the device drivers. This exercises page size, 64 bit, 32 bit, 16 bit and 8 bit reads as well as NULL addresses, non-readable pages and PROT_NONE mapped pages. Currently only for Linux and requires the --pathological option.

           --bad-ioctl-method [ inc | random | random-inc | stride ]
                  select the method of changing the ioctl command (number, type) tuple per iteration; the default is random-inc. Available bad-ioctl methods are described as follows:

                  Method      Description
                  inc         increment ioctl command by 1
                  random      use a random ioctl command
                  random-inc  increment ioctl command by a random value
                  stride      increment ioctl command number by 1 and decrement command type by 3

           --bad-ioctl-ops N
                  stop the bad ioctl stressors after N bogo ioctl operations.

       Big heap stressor
           -B N, --bigheap N
                  start N workers that grow their heaps by reallocating memory. If the out of memory killer (OOM) on Linux kills the worker or the allocation fails then the allocating process starts all over again. Note that the OOM adjustment for the worker is set so that the OOM killer will treat these workers as the first candidate processes to kill.

           --bigheap-bytes N
                  maximum heap growth as N bytes per bigheap worker. One can specify the size as % of total available memory or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.

           --bigheap-growth N
                  specify the amount of memory to grow the heap by per iteration. Size can be from 4K to 64MB. Default is 64K.

           --bigheap-mlock
                  attempt to mlock future allocated pages into memory causing more memory pressure. If mlock(MCL_FUTURE) is implemented then this will stop newly allocated pages from being swapped out.

           --bigheap-ops N
                  stop the big heap workers after N bogo allocation operations are completed.

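           A usage sketch (the worker count and growth step are illustrative):

           ```shell
           # Two bigheap workers, each growing its heap in 1MB steps;
           # stop after 1 minute
           stress-ng --bigheap 2 --bigheap-growth 1m --timeout 1m
           ```
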
787 Binderfs stressor
788 --binderfs N
789 start N workers that mount, exercise and unmount binderfs.
790 The binder control device is exercised with 256 sequential
791 BINDER_CTL_ADD ioctl calls per loop.
792
793 --binderfs-ops N
794 stop after N binderfs cycles.
795
796 Bind mount stressor
797 --bind-mount N
798 start N workers that repeatedly bind mount / to / inside a
799 user namespace. This can consume resources rapidly, forcing
800 out of memory situations. Do not use this stressor unless
801 you want to risk hanging your machine.
802
803 --bind-mount-ops N
804 stop after N bind mount bogo operations.
805
806 Branch stressor
807 --branch N
808 start N workers that randomly branch to 1024 randomly se‐
809 lected locations and hence exercise the CPU branch predic‐
810 tion logic.
811
812 --branch-ops N
 813              stop the branch stressors after N × 1024 branches.
814
815 Brk stressor
816 --brk N
817 start N workers that grow the data segment by one page at a
818 time using multiple brk(2) calls. Each successfully allo‐
819 cated new page is touched to ensure it is resident in mem‐
820 ory. If an out of memory condition occurs then the test
821 will reset the data segment to the point before it started
822 and repeat the data segment resizing over again. The
823 process adjusts the out of memory setting so that it may be
824 killed by the out of memory (OOM) killer before other pro‐
825 cesses. If it is killed by the OOM killer then it will be
826 automatically re-started by a monitoring parent process.
827
828 --brk-bytes N
829 maximum brk growth as N bytes per brk worker. One can spec‐
830 ify the size as % of total available memory or in units of
831 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
832 or g.
833
834 --brk-mlock
835 attempt to mlock future brk pages into memory causing more
836 memory pressure. If mlock(MCL_FUTURE) is implemented then
837 this will stop new brk pages from being swapped out.
838
839 --brk-notouch
840 do not touch each newly allocated data segment page. This
841 disables the default of touching each newly allocated page
 842              and hence avoids the kernel having to back the
843 page with physical memory.
844
845 --brk-ops N
846 stop the brk workers after N bogo brk operations.
847
848 Binary search stressor
849 --bsearch N
850 start N workers that binary search a sorted array of 32 bit
851 integers using bsearch(3). By default, there are 65536 ele‐
852 ments in the array. This is a useful method to exercise
853 random access of memory and processor cache.
854
855 --bsearch-ops N
856 stop the bsearch worker after N bogo bsearch operations are
857 completed.
858
859 --bsearch-size N
860 specify the size (number of 32 bit integers) in the array
861 to bsearch. Size can be from 1K to 4M.
862
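The access pattern is equivalent to the following Python sketch, with the bisect module standing in for the C bsearch(3) and the documented default of 65536 elements:

```python
from bisect import bisect_left

SIZE = 65536                       # default number of 32 bit integers

def bsearch(arr, key):
    # binary search a sorted list, like bsearch(3): index or None
    i = bisect_left(arr, key)
    return i if i < len(arr) and arr[i] == key else None

arr = list(range(0, SIZE * 2, 2))  # sorted even values, so odd keys miss
```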
863 Cache stressor
864 -C N, --cache N
865 start N workers that perform random wide spread memory read
866 and writes to thrash the CPU cache. The code does not in‐
867 telligently determine the CPU cache configuration and so it
868 may be sub-optimal in producing hit-miss read/write activ‐
869 ity for some processors.
870
871 --cache-cldemote
872 cache line demote (x86 only). This is a no-op for non-x86
873 architectures and older x86 processors that do not support
874 this feature.
875
876 --cache-clflushopt
877 use optimized cache line flush (x86 only). This is a no-op
878 for non-x86 architectures and older x86 processors that do
879 not support this feature.
880
881 --cache-clwb
882 cache line writeback (x86 only). This is a no-op for non-
883 x86 architectures and older x86 processors that do not sup‐
884 port this feature.
885
886 --cache-enable-all
887 where appropriate exercise the cache using cldemote,
888 clflushopt, fence, flush, sfence and prefetch.
889
890 --cache-fence
891 force write serialization on each store operation (x86
892 only). This is a no-op for non-x86 architectures.
893
894 --cache-flush
895 force flush cache on each store operation (x86 only). This
896 is a no-op for non-x86 architectures.
897
898 --cache-level N
899 specify level of cache to exercise (1=L1 cache, 2=L2 cache,
900 3=L3/LLC cache (the default)). If the cache hierarchy can‐
901 not be determined, built-in defaults will apply.
902
903 --cache-no-affinity
904 do not change processor affinity when --cache is in effect.
905
906 --cache-ops N
907 stop cache thrash workers after N bogo cache thrash opera‐
908 tions.
909
910 --cache-prefetch
911 force read prefetch on next read address on architectures
912 that support prefetching.
913
914 --cache-sfence
915 force write serialization on each store operation using the
916 sfence instruction (x86 only). This is a no-op for non-x86
917 architectures.
918
919 --cache-size N
920 override the default cache size setting to N bytes. One can
 921              specify the size in units of Bytes, KBytes, MBytes and GBytes
922 using the suffix b, k, m or g.
923
924 --cache-ways N
925 specify the number of cache ways to exercise. This allows a
926 subset of the overall cache size to be exercised.
927
928 Cache line stressor
929 --cacheline N
930 start N workers that exercise reading and writing individ‐
931 ual bytes in a shared buffer that is the size of a cache
932 line. Each stressor has 2 running processes that exercise
933 just two bytes that are next to each other. The intent is
934 to try and trigger cacheline corruption, stalls and misses
935 with shared memory accesses. For an N byte sized cacheline,
936 it is recommended to run N / 2 stressor instances.
937
938 --cacheline-affinity
939 frequently change CPU affinity, spread cacheline processes
940 evenly across all online CPUs to try and maximize lower-
941 level cache activity. Attempts to keep adjacent cachelines
942 being exercised by adjacent CPUs.
943
944 --cacheline-method method
945 specify a cacheline stress method. By default, all the
946 stress methods are exercised sequentially, however one can
947 specify just one method to be used if required. Available
948 cacheline stress methods are described as follows:
949
950 Method Description
 951              all            iterate over all the below cacheline stress methods.
952 adjacent increment a specific byte in a cacheline and
953 read the adjacent byte, check for corruption ev‐
954 ery 7 increments.
955 atomicinc atomically increment a specific byte in a cache‐
956 line and check for corruption every 7 incre‐
957 ments.
958 bits write and read back shifted bit patterns into
959 specific byte in a cacheline and check for cor‐
960 ruption.
961 copy copy an adjacent byte to a specific byte in a
962 cacheline.
963 inc increment and read back a specific byte in a
964 cacheline and check for corruption every 7 in‐
965 crements.
 966              mix            perform a mix of increment, left and right ro‐
 967                             tates on a specific byte in a cacheline and check
 968                             for corruption.
969 rdfwd64 increment a specific byte in a cacheline and
970 then read in forward direction an entire cache‐
971 line using 64 bit reads.
972 rdints increment a specific byte in a cacheline and
 973                             then read integer values of size 8, 16, 32, 64
 974                             and 128 bits from naturally aligned locations
 975                             at that byte.
978 rdrev64 increment a specific byte in a cacheline and
979 then read in reverse direction an entire cache‐
980 line using 64 bit reads.
981 rdwr read and write the same 8 bit value into a spe‐
982 cific byte in a cacheline and check for corrup‐
983 tion.
984
985 --cacheline-ops N
986 stop cacheline workers after N loops of the byte exercising
987 in a cacheline.
988
989 Process capabilities stressor
990 --cap N
991 start N workers that read per process capabilities via
992 calls to capget(2) (Linux only).
993
994 --cap-ops N
995 stop after N cap bogo operations.
996
997 Cgroup stressor
998 --cgroup N
999 start N workers that mount a cgroup, move a child to the
1000 cgroup, read, write and remove the child from the cgroup
1001 and umount the cgroup per bogo-op iteration. This uses
1002 cgroup v2 and is only available for Linux systems.
1003
1004 --cgroup-ops N
1005 stop after N cgroup bogo operations.
1006
1007 Chattr stressor
1008 --chattr N
1009 start N workers that attempt to exercise file attributes
1010              via the EXT2_IOC_SETFLAGS ioctl. This is intentionally
1011              racy and exercises a range of chattr attributes
1012 by enabling and disabling them on a file shared amongst the
1013 N chattr stressor processes. (Linux only).
1014
1015 --chattr-ops N
1016 stop after N chattr bogo operations.
1017
1018 Chdir stressor
1019 --chdir N
1020 start N workers that change directory between directories
1021 using chdir(2).
1022
1023 --chdir-dirs N
1024              exercise chdir on N directories. The default is 8192 direc‐
1025              tories; this option allows 64 to 65536 directories to be
1026              used instead.
1027
1028 --chdir-ops N
1029 stop after N chdir bogo operations.
1030
1031 Chmod stressor
1032 --chmod N
1033 start N workers that change the file mode bits via chmod(2)
1034              and fchmod(2) on the same file. The greater the value of N,
1035              the more contention on the single file. The stressor will
1036              work through all the combinations of mode bits.
1037
1038 --chmod-ops N
1039 stop after N chmod bogo operations.
1040
1041 Chown stressor
1042 --chown N
1043 start N workers that exercise chown(2) on the same file.
1044              The greater the value of N, the more contention on the
1045              single file.
1046
1047 --chown-ops N
1048 stop the chown workers after N bogo chown(2) operations.
1049
1050 Chroot stressor
1051 --chroot N
1052 start N workers that exercise chroot(2) on various valid
1053 and invalid chroot paths. Only available on Linux systems
1054 and requires the CAP_SYS_ADMIN capability.
1055
1056 --chroot-ops N
1057 stop the chroot workers after N bogo chroot(2) operations.
1058
1059 Clock stressor
1060 --clock N
1061 start N workers exercising clocks and POSIX timers. For all
1062 known clock types this will exercise clock_getres(2),
1063 clock_gettime(2) and clock_nanosleep(2). For all known
1064 timers it will create a random duration timer and busy poll
1065 this until it expires. This stressor will cause frequent
1066 context switching.
1067
1068 --clock-ops N
1069 stop clock stress workers after N bogo operations.
1070
1071 Clone stressor
1072 --clone N
1073 start N workers that create clones (via the clone(2) and
1074 clone3() system calls). This will rapidly try to create a
1075 default of 8192 clones that immediately die and wait in a
1076 zombie state until they are reaped. Once the maximum num‐
1077 ber of clones is reached (or clone fails because one has
1078 reached the maximum allowed) the oldest clone thread is
1079 reaped and a new clone is then created in a first-in first-
1080 out manner, and then repeated. A random clone flag is se‐
1081 lected for each clone to try to exercise different clone
1082 operations. The clone stressor is a Linux only option.
1083
1084 --clone-max N
1085 try to create as many as N clone threads. This may not be
1086 reached if the system limit is less than N.
1087
1088 --clone-ops N
1089 stop clone stress workers after N bogo clone operations.
1090
1091 Close stressor
1092 --close N
1093 start N workers that try to force race conditions on clos‐
1094 ing opened file descriptors. These file descriptors have
1095 been opened in various ways to try and exercise different
1096 kernel close handlers.
1097
1098 --close-ops N
1099 stop close workers after N bogo close operations.
1100
1101 Swapcontext stressor
1102 --context N
1103              start N workers that run three threads that use swapcon‐
1104              text(3) to implement thread-to-thread context switching.
1105              This exercises rapid process context saving and
1106 restoring and is bandwidth limited by register and memory
1107 save and restore rates.
1108
1109 --context-ops N
1110 stop context workers after N bogo context switches. In
1111 this stressor, 1 bogo op is equivalent to 1000 swapcontext
1112 calls.
1113
1114 Copy file stressor
1115 --copy-file N
1116 start N stressors that copy a file using the Linux
1117 copy_file_range(2) system call. 128 KB chunks of data are
1118              copied from random locations in one file to random loca‐
1119              tions in a destination file. By default, the files are 256
1120 MB in size. Data is sync'd to the filesystem after each
1121 copy_file_range(2) call.
1122
1123 --copy-file-bytes N
1124 copy file size, the default is 256 MB. One can specify the
1125 size as % of free space on the file system or in units of
1126 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
1127 or g.
1128
1129 --copy-file-ops N
1130 stop after N copy_file_range() calls.
1131
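The per-chunk operation can be sketched in Python with os.copy_file_range (Linux 4.5 and later, Python 3.8 and later), falling back to pread/pwrite elsewhere; copy_chunk is an illustrative helper, not stress-ng's code:

```python
import os

CHUNK = 128 * 1024   # the 128 KB chunk size the stressor copies

def copy_chunk(src_fd, dst_fd, src_off, dst_off, count=CHUNK):
    """Copy count bytes between two file descriptors at given offsets."""
    try:
        left, so, do = count, src_off, dst_off
        while left > 0:
            # copy_file_range(2): in-kernel copy between the two files
            n = os.copy_file_range(src_fd, dst_fd, left,
                                   offset_src=so, offset_dst=do)
            if n == 0:
                break            # hit end of the source file
            so += n
            do += n
            left -= n
    except (AttributeError, OSError):
        # portable fallback for systems without copy_file_range
        os.pwrite(dst_fd, os.pread(src_fd, count, src_off), dst_off)
```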
1132 CPU stressor
1133 -c N, --cpu N
1134 start N workers exercising the CPU by sequentially working
1135 through all the different CPU stress methods. Instead of
1136 exercising all the CPU stress methods, one can specify a
1137 specific CPU stress method with the --cpu-method option.
1138
1139 -l P, --cpu-load P
1140 load CPU with P percent loading for the CPU stress workers.
1141 0 is effectively a sleep (no load) and 100 is full loading.
1142 The loading loop is broken into compute time (load%) and
1143 sleep time (100% - load%). Accuracy depends on the overall
1144 load of the processor and the responsiveness of the sched‐
1145 uler, so the actual load may be different from the desired
1146 load. Note that the number of bogo CPU operations may not
1147 be linearly scaled with the load as some systems employ CPU
1148 frequency scaling and so heavier loads produce an increased
1149 CPU frequency and greater CPU bogo operations.
1150
1151 Note: This option only applies to the --cpu stressor option
1152 and not to all of the cpu class of stressors.
1153
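The compute/sleep split described above is simple arithmetic over one loading period; a sketch (load_split is an illustrative name, not a stress-ng function):

```python
def load_split(period_ms, load_pct):
    # split one loading period into compute time (load%) and
    # sleep time (100% - load%)
    busy = period_ms * load_pct / 100.0
    return busy, period_ms - busy
```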
1154 --cpu-load-slice S
1155 note - this option is only useful when --cpu-load is less
1156 than 100%. The CPU load is broken into multiple busy and
1157 idle cycles. Use this option to specify the duration of a
1158 busy time slice. A negative value for S specifies the num‐
1159 ber of iterations to run before idling the CPU (e.g. -30
1160 invokes 30 iterations of a CPU stress loop). A zero value
1161 selects a random busy time between 0 and 0.5 seconds. A
1162 positive value for S specifies the number of milliseconds
1163 to run before idling the CPU (e.g. 100 keeps the CPU busy
1164              for 0.1 seconds). Specifying small values for S leads to
1165 small time slices and smoother scheduling. Setting
1166 --cpu-load as a relatively low value and --cpu-load-slice
1167 to be large will cycle the CPU between long idle and busy
1168 cycles and exercise different CPU frequencies. The thermal
1169 range of the CPU is also cycled, so this is a good mecha‐
1170 nism to exercise the scheduler, frequency scaling and pas‐
1171 sive/active thermal cooling mechanisms.
1172
1173 Note: This option only applies to the --cpu stressor option
1174 and not to all of the cpu class of stressors.
1175
1176 --cpu-method method
1177 specify a cpu stress method. By default, all the stress
1178 methods are exercised sequentially, however one can specify
1179 just one method to be used if required. Available cpu
1180 stress methods are described as follows:
1181
1182 Method Description
1183 all iterate over all the below cpu stress
1184 methods
1185 ackermann Ackermann function: compute A(3, 7),
1186 where:
1187 A(m, n) = n + 1 if m = 0;
1188 A(m - 1, 1) if m > 0 and n = 0;
1189 A(m - 1, A(m, n - 1)) if m > 0 and n > 0
1190 apery calculate Apery's constant ζ(3); the sum
1191                                of 1/(n ↑ 3) to a precision of 1.0×10↑-14
1194 bitops various bit operations from bithack,
1195 namely: reverse bits, parity check, bit
1196 count, round to nearest power of 2
1197              callfunc         recursively call an 8 argument C function to
1198 a depth of 1024 calls and unwind
1199 cfloat 1000 iterations of a mix of floating
1200 point complex operations
1201 cdouble 1000 iterations of a mix of double float‐
1202 ing point complex operations
1203 clongdouble 1000 iterations of a mix of long double
1204 floating point complex operations
1205 collatz compute the 1348 steps in the collatz se‐
1206 quence starting from number 989345275647.
1207 Where f(n) = n / 2 (for even n) and f(n)
1208 = 3n + 1 (for odd n).
1209              correlate        perform an 8192 × 512 correlation of ran‐
1210 dom doubles
1211 crc16 compute 1024 rounds of CCITT CRC16 on
1212 random data
1213 decimal32 1000 iterations of a mix of 32 bit deci‐
1214 mal floating point operations (GCC only)
1215 decimal64 1000 iterations of a mix of 64 bit deci‐
1216 mal floating point operations (GCC only)
1217 decimal128 1000 iterations of a mix of 128 bit deci‐
1218 mal floating point operations (GCC only)
1219 dither Floyd-Steinberg dithering of a 1024 × 768
1220 random image from 8 bits down to 1 bit of
1221 depth
1222 div8 50,000 8 bit unsigned integer divisions
1223 div16 50,000 16 bit unsigned integer divisions
1224 div32 50,000 32 bit unsigned integer divisions
1225 div64 50,000 64 bit unsigned integer divisions
1226 div128 50,000 128 bit unsigned integer divisions
1227 double 1000 iterations of a mix of double preci‐
1228 sion floating point operations
1229 euler compute e using n = (1 + (1 ÷ n)) ↑ n
1230 explog iterate on n = exp(log(n) ÷ 1.00002)
1231 factorial find factorials from 1..150 using Stir‐
1232 ling's and Ramanujan's approximations
1233 fibonacci compute Fibonacci sequence of 0, 1, 1, 2,
1234                               3, 5, 8...
1235 fft 4096 sample Fast Fourier Transform
1236 fletcher16 1024 rounds of a naïve implementation of
1237 a 16 bit Fletcher's checksum
1238 float 1000 iterations of a mix of floating
1239 point operations
1240 float16 1000 iterations of a mix of 16 bit float‐
1241 ing point operations
1242 float32 1000 iterations of a mix of 32 bit float‐
1243 ing point operations
1244 float64 1000 iterations of a mix of 64 bit float‐
1245 ing point operations
1246 float80 1000 iterations of a mix of 80 bit float‐
1247 ing point operations
1248 float128 1000 iterations of a mix of 128 bit
1249 floating point operations
1250 floatconversion perform 65536 iterations of floating
1251 point conversions between float, double
1252 and long double floating point variables.
1253 gamma calculate the Euler-Mascheroni constant γ
1254 using the limiting difference between the
1255 harmonic series (1 + 1/2 + 1/3 + 1/4 +
1256 1/5 ... + 1/n) and the natural logarithm
1257 ln(n), for n = 80000.
1258 gcd compute GCD of integers
1259 gray calculate binary to gray code and gray
1260 code back to binary for integers from 0
1261 to 65535
1266 hamming compute Hamming H(8,4) codes on 262144
1267 lots of 4 bit data. This turns 4 bit data
1268 into 8 bit Hamming code containing 4 par‐
1269 ity bits. For data bits d1..d4, parity
1270 bits are computed as:
1271 p1 = d2 + d3 + d4
1272 p2 = d1 + d3 + d4
1273 p3 = d1 + d2 + d4
1274 p4 = d1 + d2 + d3
1275 hanoi solve a 21 disc Towers of Hanoi stack us‐
1276 ing the recursive solution
1277 hyperbolic compute sinh(θ) × cosh(θ) + sinh(2θ) +
1278 cosh(3θ) for float, double and long dou‐
1279 ble hyperbolic sine and cosine functions
1280 where θ = 0 to 2π in 1500 steps
1281 idct 8 × 8 IDCT (Inverse Discrete Cosine
1282 Transform).
1283 int8 1000 iterations of a mix of 8 bit integer
1284 operations.
1285 int16 1000 iterations of a mix of 16 bit inte‐
1286 ger operations.
1287 int32 1000 iterations of a mix of 32 bit inte‐
1288 ger operations.
1289 int64 1000 iterations of a mix of 64 bit inte‐
1290 ger operations.
1291 int128 1000 iterations of a mix of 128 bit inte‐
1292 ger operations (GCC only).
1293 int32float 1000 iterations of a mix of 32 bit inte‐
1294 ger and floating point operations.
1295 int32double 1000 iterations of a mix of 32 bit inte‐
1296 ger and double precision floating point
1297 operations.
1298 int32longdouble 1000 iterations of a mix of 32 bit inte‐
1299 ger and long double precision floating
1300 point operations.
1301 int64float 1000 iterations of a mix of 64 bit inte‐
1302 ger and floating point operations.
1303 int64double 1000 iterations of a mix of 64 bit inte‐
1304 ger and double precision floating point
1305 operations.
1306 int64longdouble 1000 iterations of a mix of 64 bit inte‐
1307 ger and long double precision floating
1308 point operations.
1309 int128float 1000 iterations of a mix of 128 bit inte‐
1310 ger and floating point operations (GCC
1311 only).
1312 int128double 1000 iterations of a mix of 128 bit inte‐
1313 ger and double precision floating point
1314 operations (GCC only).
1315 int128longdouble 1000 iterations of a mix of 128 bit inte‐
1316 ger and long double precision floating
1317 point operations (GCC only).
1318 int128decimal32 1000 iterations of a mix of 128 bit inte‐
1319 ger and 32 bit decimal floating point op‐
1320 erations (GCC only).
1321 int128decimal64 1000 iterations of a mix of 128 bit inte‐
1322 ger and 64 bit decimal floating point op‐
1323 erations (GCC only).
1324 int128decimal128 1000 iterations of a mix of 128 bit inte‐
1325 ger and 128 bit decimal floating point
1326 operations (GCC only).
1327 intconversion perform 65536 iterations of integer con‐
1328 versions between int16, int32 and int64
1329 variables.
1330 ipv4checksum compute 1024 rounds of the 16 bit ones'
1331 complement IPv4 checksum.
1332 jmp Simple unoptimised compare >, <, == and
1333 jmp branching.
1338 lfsr32 16384 iterations of a 32 bit Galois lin‐
1339 ear feedback shift register using the
1340 polynomial x↑32 + x↑31 + x↑29 + x + 1.
1341 This generates a ring of 2↑32 - 1 unique
1342 values (all 32 bit values except for 0).
1343 ln2 compute ln(2) based on series:
1344 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
1345 logmap 16384 iterations computing chaotic double
1346 precision values using the logistic map
1347 Χn+1 = r × Χn × (1 - Χn) where r > ≈
1348 3.56994567
1349 longdouble 1000 iterations of a mix of long double
1350 precision floating point operations.
1351 loop simple empty loop.
1352 matrixprod matrix product of two 128 × 128 matrices
1353 of double floats. Testing on 64 bit x86
1354                               hardware shows that this provides a
1355 good mix of memory, cache and floating
1356 point operations and is probably the best
1357 CPU method to use to make a CPU run hot.
1358 nsqrt compute sqrt() of long doubles using New‐
1359 ton-Raphson.
1360 omega compute the omega constant defined by
1361 Ωe↑Ω = 1 using efficient iteration of
1362 Ωn+1 = (1 + Ωn) / (1 + e↑Ωn).
1363 parity compute parity using various methods from
1364                               the Stanford Bit Twiddling Hacks. Meth‐
1365 ods employed are: the naïve way, the
1366                               naïve way with the Brian Kernighan bit
1367 counting optimisation, the multiply way,
1368 the parallel way, the lookup table ways
1369 (2 variations) and using the
1370 __builtin_parity function.
1371 phi compute the Golden Ratio ϕ using series.
1372 pi compute π using the Srinivasa Ramanujan
1373 fast convergence algorithm.
1374 prime find the first 10000 prime numbers using
1375 a slightly optimised brute force naïve
1376 trial division search.
1377 psi compute ψ (the reciprocal Fibonacci con‐
1378 stant) using the sum of the reciprocals
1379 of the Fibonacci numbers.
1380 queens compute all the solutions of the classic
1381 8 queens problem for board sizes 1..11.
1382 rand 16384 iterations of rand(), where rand is
1383 the MWC pseudo random number generator.
1384 The MWC random function concatenates two
1385 16 bit multiply-with-carry generators:
1386 x(n) = 36969 × x(n - 1) + carry,
1387 y(n) = 18000 × y(n - 1) + carry mod 2 ↑
1388 16
1389
1390 and has period of around 2 ↑ 60.
1391 rand48 16384 iterations of drand48(3) and
1392 lrand48(3).
1393 rgb convert RGB to YUV and back to RGB (CCIR
1394 601).
1395 sieve find the first 10000 prime numbers using
1396 the sieve of Eratosthenes.
1397 stats calculate minimum, maximum, arithmetic
1398                               mean, geometric mean, harmonic mean and
1399 standard deviation on 250 randomly gener‐
1400 ated positive double precision values.
1401 sqrt compute sqrt(rand()), where rand is the
1402 MWC pseudo random number generator.
1403 trig compute sin(θ) × cos(θ) + sin(2θ) +
1404 cos(3θ) for float, double and long double
1405 sine and cosine functions where θ = 0 to
1406 2π in 1500 steps.
1410 union perform integer arithmetic on a mix of
1411 bit fields in a C union. This exercises
1412 how well the compiler and CPU can perform
1413 integer bit field loads and stores.
1414 zeta compute the Riemann Zeta function ζ(s)
1415 for s = 2.0..10.0
1416
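As an illustration of the checksum methods in the table above, minimal Python versions of the naïve 16 bit Fletcher checksum and the IPv4 ones' complement checksum, checked against standard published test vectors; these are sketches, not stress-ng's code:

```python
def fletcher16(data):
    # naive 16 bit Fletcher checksum, modulo 255 per byte
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

def ipv4_checksum(words):
    # 16 bit ones' complement sum over the 16 bit header words
    total = sum(words)
    while total >> 16:                      # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```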
1417 Note that some of these methods try to exercise the CPU
1418 with computations found in some real world use cases. How‐
1419 ever, the code has not been optimised on a per-architecture
1420          basis, so may be sub-optimal compared to hand-optimised
1421 code used in some applications. They do try to represent
1422 the typical instruction mixes found in these use cases.
1423
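A few of the integer methods are small enough to sketch directly; the Python below mirrors the documented definitions (Collatz steps, gray code round trips, and the Galois LFSR for x↑32 + x↑31 + x↑29 + x + 1, whose conventional tap mask is 0xD0000001):

```python
def collatz_steps(n):
    # count applications of f(n) = n / 2 (even) or 3n + 1 (odd) until 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def gray(n):
    # binary to gray code
    return n ^ (n >> 1)

def gray_to_binary(g):
    # gray code back to binary: XOR of all right shifts
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def lfsr32(state):
    # one step of the Galois LFSR for x^32 + x^31 + x^29 + x + 1
    lsb = state & 1
    state >>= 1
    return state ^ 0xD0000001 if lsb else state
```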
1424 --cpu-old-metrics
1425 as of version V0.14.02 the cpu stressor now normalizes each
1426 of the cpu stressor method bogo-op counters to try and en‐
1427          sure a similar bogo-op rate for all the methods to stop
1428          the shorter running (and faster) methods from skewing the
1429          bogo-op rates when using the default "all" method. This is
1430          based on a reference Intel i5-8350U processor and hence the
1431          bogo-ops normalizing factors will be skewed somewhat on dif‐
1432          ferent CPUs, but not as significantly as the original bogo-op
1433          counter rates. To disable the normalization and fall back
1434 to the original metrics, use this option.
1435
1436 --cpu-ops N
1437 stop cpu stress workers after N bogo operations.
1438
1439 CPU onlining stressor
1440 --cpu-online N
1441 start N workers that put randomly selected CPUs offline and
1442 online. This Linux only stressor requires root privilege to
1443 perform this action. By default the first CPU (CPU 0) is
1444 never offlined as this has been found to be problematic on
1445 some systems and can result in a shutdown.
1446
1447 --cpu-online-affinity
1448 move the stressor worker to the CPU that will be next of‐
1449 flined.
1450
1451 --cpu-online-all
1452 The default is to never offline the first CPU. This option
1453 will offline and online all the CPUs including CPU 0. This
1454 may cause some systems to shutdown.
1455
1456 --cpu-online-ops N
1457          stop after N offline/online operations.
1458
1459 Crypt stressor
1460 --crypt N
1461 start N workers that encrypt a 16 character random password
1462 using crypt(3). The password is encrypted using MD5,
1463 SHA-256 and SHA-512 encryption methods.
1464
1465 --crypt-ops N
1466 stop after N bogo encryption operations.
1467
1468 Cyclic stressor
1469 --cyclic N
1470 start N workers that exercise the real time FIFO or Round
1471 Robin schedulers with cyclic nanosecond sleeps. Normally
1472 one would just use 1 worker instance with this stressor to
1473 get reliable statistics. By default this stressor measures
1474 the first 10 thousand latencies and calculates the mean,
1475 mode, minimum, maximum latencies along with various latency
1476          percentiles for just the first cyclic stressor in‐
1477 stance. One has to run this stressor with CAP_SYS_NICE ca‐
1478 pability to enable the real time scheduling policies. The
1479 FIFO scheduling policy is the default.
1480
1481 --cyclic-dist N
1482 calculate and print a latency distribution with the inter‐
1483 val of N nanoseconds. This is helpful to see where the la‐
1484 tencies are clustering.
1485
1486 --cyclic-method [ clock_ns | itimer | poll | posix_ns | pselect |
1487 usleep ]
1488 specify the cyclic method to be used, the default is
1489 clock_ns. The available cyclic methods are as follows:
1490
1491 Method Description
1492 clock_ns sleep for the specified time using the
1493 clock_nanosleep(2) high resolution nanosleep and
1494 the CLOCK_REALTIME real time clock.
1495 itimer wakeup a paused process with a CLOCK_REALTIME
1496 itimer signal.
1497 poll delay for the specified time using a poll delay
1498 loop that checks for time changes using
1499 clock_gettime(2) on the CLOCK_REALTIME clock.
1500 posix_ns sleep for the specified time using the POSIX
1501 nanosleep(2) high resolution nanosleep.
1502 pselect sleep for the specified time using pselect(2)
1503 with null file descriptors.
1504 usleep sleep to the nearest microsecond using usleep(2).
1505
1506 --cyclic-ops N
1507 stop after N sleeps.
1508
1509 --cyclic-policy [ fifo | rr ]
1510          specify the desired real time scheduling policy, fifo (first-
1511 in, first-out) or rr (round-robin).
1512
1513 --cyclic-prio P
1514 specify the scheduling priority P. Range from 1 (lowest) to
1515 100 (highest).
1516
1517 --cyclic-samples N
1518 measure N samples. Range from 1 to 100000000 samples.
1519
1520 --cyclic-sleep N
1521 sleep for N nanoseconds per test cycle using
1522 clock_nanosleep(2) with the CLOCK_REALTIME timer. Range
1523 from 1 to 1000000000 nanoseconds.
1524
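The kind of latency statistics this stressor reports can be reproduced from raw wakeup overshoot samples; a Python sketch using a nearest-rank percentile (the function names are illustrative, not stress-ng's):

```python
import time

def percentile(samples, p):
    # nearest-rank percentile of a list of latency samples
    s = sorted(samples)
    k = int(round(p / 100.0 * len(s))) - 1
    return s[max(0, min(len(s) - 1, k))]

def measure(n, sleep_ns=10000):
    # sleep n times and record the wakeup overshoot in nanoseconds,
    # roughly the quantity the clock_ns cyclic method measures
    lat = []
    for _ in range(n):
        t0 = time.monotonic_ns()
        time.sleep(sleep_ns / 1e9)
        lat.append(time.monotonic_ns() - t0 - sleep_ns)
    return lat
```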
1525 Daemon stressor
1526 --daemon N
1527 start N workers that each create a daemon that dies immedi‐
1528 ately after creating another daemon and so on. This effec‐
1529 tively works through the process table with short lived
1530 processes that do not have a parent and are waited for by
1531 init. This puts pressure on init to do rapid child reap‐
1532 ing. The daemon processes perform the usual mix of calls
1533 to turn into typical UNIX daemons, so this artificially
1534 mimics very heavy daemon system stress.
1535
1536 --daemon-ops N
1537 stop daemon workers after N daemons have been created.
1538
1539 --daemon-wait
1540 wait for daemon child processes rather than let init handle
1541 the waiting. Enabling this option will reduce the daemon
1542 fork rate because of the synchronous wait delays.
1543
1544 Datagram congestion control protocol (DCCP) stressor
1545 --dccp N
1546 start N workers that send and receive data using the Data‐
1547 gram Congestion Control Protocol (DCCP) (RFC4340). This in‐
1548 volves a pair of client/server processes performing rapid
1549          connects, sends, receives and disconnects on the local
1550 host.
1551
1552 --dccp-domain D
1553 specify the domain to use, the default is ipv4. Currently
1554 ipv4 and ipv6 are supported.
1555
1556 --dccp-if NAME
1557 use network interface NAME. If the interface NAME does not
1558 exist, is not up or does not support the domain then the
1559 loopback (lo) interface is used as the default.
1560
1561 --dccp-msgs N
1562 send N messages per connect, send/receive, disconnect iter‐
1563 ation. The default is 10000 messages. If N is too small
1564 then the rate is throttled back by the overhead of dccp
1565 socket connect and disconnects.
1566
1567 --dccp-port P
1568 start DCCP at port P. For N dccp worker processes, ports P
1569          to P + N - 1 are used.
1570
1571 --dccp-ops N
1572 stop dccp stress workers after N bogo operations.
1573
1574 --dccp-opts [ send | sendmsg | sendmmsg ]
1575 by default, messages are sent using send(2). This option
1576 allows one to specify the sending method using send(2),
1577 sendmsg(2) or sendmmsg(2). Note that sendmmsg is only
1578 available for Linux systems that support this system call.
1579
1580 Mutex using Dekker algorithm stressor
1581 --dekker N
1582          start N workers that exercise mutual exclusion between two
1583          processes using shared memory with the Dekker Algorithm.
1584          Where possible this uses memory fencing and falls back to
1585          using GCC __sync_synchronize if fencing is not available. The
1586 stressors contain simple mutex and memory coherency sanity
1587 checks.
1588
1589 --dekker-ops N
1590 stop dekker workers after N mutex operations.
1591
1592 Dentry stressor
1593 -D N, --dentry N
1594 start N workers that create and remove directory entries.
1595 This should create file system meta data activity. The di‐
1596 rectory entry names are suffixed by a gray-code encoded
1597 number to try to mix up the hashing of the namespace.
1598
1599 --dentry-ops N
1600          stop dentry thrash workers after N bogo dentry operations.
1601
1602 --dentry-order [ forward | reverse | stride | random ]
1603 specify unlink order of dentries, can be one of forward,
1604 reverse, stride or random. By default, dentries are un‐
1605 linked in random order. The forward order will unlink them
1606 from first to last, reverse order will unlink them from
1607 last to first, stride order will unlink them by stepping
1608 around order in a quasi-random pattern and random order
1609 will randomly select one of forward, reverse or stride or‐
1610 ders.
1611
1612 --dentries N
1613 create N dentries per dentry thrashing loop, default is
1614 2048.
1615
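       The gray-code suffix used for the dentry names can be sketched as
       follows; this is an illustrative binary-reflected Gray code, not
       necessarily the exact encoding stress-ng uses.

```python
def gray(n: int) -> int:
    # Binary-reflected Gray code: adjacent values differ in exactly one
    # bit, which scatters otherwise-sequential names across hash buckets.
    return n ^ (n >> 1)

# Hypothetical dentry-style names with gray-coded suffixes.
names = [f"stress-dentry-{gray(i):x}" for i in range(8)]
```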
1616 /dev stressor
1617 --dev N
1618 start N workers that exercise the /dev devices. Each worker
1619 runs 5 concurrent threads that perform open(2), fstat(2),
1620 lseek(2), poll(2), fcntl(2), mmap(2), munmap(2), fsync(2)
1621 and close(2) on each device. Note that watchdog devices
1622 are not exercised.
1623
1624 --dev-file filename
1625 specify the device file to exercise, for example,
1626 /dev/null. By default the stressor will work through all
1627 the device files it can find; however, this option allows a
1628 single device file to be exercised.
1629
1630 --dev-ops N
1631 stop dev workers after N bogo device exercising operations.
1632
1633 /dev/shm stressor
1634 --dev-shm N
1635 start N workers that fallocate large files in /dev/shm and
1636 then mmap these into memory and touch all the pages. This
1637 exercises pages being moved to/from the buffer cache. Linux
1638 only.
1639
1640 --dev-shm-ops N
1641 stop after N bogo allocation and mmap /dev/shm operations.
1642
1643 Directories stressor
1644 --dir N
1645 start N workers that create, rename and remove directories
1646 using mkdir, rename and rmdir.
1647
1648 --dir-dirs N
1649 exercise dir on N directories. The default is 8192 directo‐
1650 ries; this option allows 64 to 65536 directories to be used
1651 instead.
1652
1653 --dir-ops N
1654 stop directory thrash workers after N bogo directory opera‐
1655 tions.
1656
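       One pass of the mkdir/rename/rmdir cycle described above can be
       sketched as follows (illustrative Python, not the stressor's C loop):

```python
import os
import tempfile

# Illustrative single pass of the mkdir/rename/rmdir cycle; the stressor
# performs this on thousands of directories per bogo operation.
with tempfile.TemporaryDirectory() as base:
    for i in range(16):
        path = os.path.join(base, f"stress-dir-{i:05d}")
        os.mkdir(path)                    # create
        os.rename(path, path + "-r")      # rename
        os.rmdir(path + "-r")             # remove
    leftover = os.listdir(base)
```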
1657 Deep directories stressor
1658 --dirdeep N
1659 start N workers that create a depth-first tree of directo‐
1660 ries to a maximum depth as limited by PATH_MAX or ENAMETOO‐
1661 LONG (whichever occurs first). By default, each level of
1662 the tree contains one directory, but this can be increased
1663 to a maximum of 10 sub-trees using the --dirdeep-dirs op‐
1664 tion. To stress inode creation, a symlink and a hardlink
1665 to a file at the root of the tree are created at each level.
1666
1667 --dirdeep-bytes N
1668 allocated file size, the default is 0. One can specify the
1669 size as % of free space on the file system or in units of
1670 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
1671 or g. Used in conjunction with the --dirdeep-files option.
1672
1673 --dirdeep-dirs N
1674 create N directories at each tree level. The default is
1675 just 1 but can be increased to a maximum of 36 per level.
1676
1677 --dirdeep-files N
1678 create N files at each tree level. The default is 0 with
1679 the file size specified by the --dirdeep-bytes option.
1680
1681 --dirdeep-inodes N
1682 consume up to N inodes per dirdeep stressor while creating
1683 directories and links. The value N can be the number of in‐
1684 odes or a percentage of the total available free inodes on
1685 the filesystem being used.
1686
1687 --dirdeep-ops N
1688 stop directory depth workers after N bogo directory opera‐
1689 tions.
1690
1691 Maximum files creation in a directory stressor
1692 --dirmany N
1693 start N stressors that create as many files in a directory
1694 as possible and then remove them. The file creation phase
1695 stops when an error occurs (for example, out of inodes, too
1696 many files, quota reached, etc.) and then the files are re‐
1697 moved. This cycles until the run time is reached or the
1698 file creation count bogo-ops metric is reached. This is a
1699 much faster and more lightweight directory exercising
1700 stressor compared to the dentry stressor.
1701
1702 --dirmany-bytes N
1703 allocated file size, the default is 0. One can specify the
1704 size as % of free space on the file system or in units of
1705 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
1706 or g.
1707
1708 --dirmany-ops N
1709 stop dirmany stressors after N empty files have been cre‐
1710 ated.
1711
1712 Dnotify stressor
1713 --dnotify N
1714 start N workers performing file system activities such as
1715 making/deleting files/directories, renaming files, etc. to
1716 stress exercise the various dnotify events (Linux only).
1717
1718 --dnotify-ops N
1719 stop dnotify stress workers after N dnotify bogo opera‐
1720 tions.
1721
1722 Dup stressor
1723 --dup N
1724 start N workers that perform dup(2) and then close(2) oper‐
1725 ations on /dev/zero. The maximum opens at one time is sys‐
1726 tem defined, so the test will run up to this maximum, or
1727 65536 open file descriptors, whichever comes first.
1728
1729 --dup-ops N
1730 stop the dup stress workers after N bogo open operations.
1731
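       The inner loop described above can be sketched as follows; os.devnull
       is used here instead of /dev/zero for portability, and the loop is
       capped rather than run to the system limit.

```python
import os

# Sketch of the dup stressor's inner loop: duplicate one descriptor until
# a limit is hit, then close everything.
fd = os.open(os.devnull, os.O_RDONLY)
dups = []
try:
    for _ in range(128):             # stress-ng goes up to the system limit
        try:
            dups.append(os.dup(fd))  # dup(2)
        except OSError:              # EMFILE: descriptor table is full
            break
finally:
    for d in dups:
        os.close(d)                  # close(2)
    os.close(fd)
n_dups = len(dups)
```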
1732 Dynamic libraries loading stressor
1733 --dynlib N
1734 start N workers that dynamically load and unload various
1735 shared libraries. This exercises memory mapping and dynamic
1736 code loading and symbol lookups. See dlopen(3) for more de‐
1737 tails of this mechanism.
1738
1739 --dynlib-ops N
1740 stop workers after N bogo load/unload cycles.
1741
1742 Eigen C++ matrix library stressor
1743 --eigen N
1744 start N workers that exercise the Eigen C++ matrix library
1745 for 2D matrix addition, multiplication, determinant, in‐
1746 verse and transpose operations on long double, double and
1747 float matrices. This currently is only available for
1748 gcc/g++ builds.
1749
1750 --eigen-method method
1751 select the floating point method to use, available methods
1752 are:
1753
1754 Method Description
1755 all iterate over all the Eigen 2D ma‐
1756 trix operations
1757 add-longdouble addition of two matrices of long
1758 double floating point values
1759 add-double addition of two matrices of
1760 double floating point values
1761 add-float addition of two matrices of float‐
1762 ing point values
1764 determinant-longdouble determinant of matrix of long dou‐
1765 ble floating point values
1766 determinant-double determinant of matrix of double
1767 floating point values
1768 determinant-float determinant of matrix of floating
1769 point values
1770 inverse-longdouble inverse of matrix of long double
1771 floating point values
1772 inverse-double inverse of matrix of double float‐
1773 ing point values
1774 inverse-float inverse of matrix of floating
1775 point values
1776 multiply-longdouble multiplication of two matrices of
1777 long double floating point values
1778 multiply-double multiplication of two matrices of
1779 double floating point values
1780 multiply-float multiplication of two matrices of
1781 floating point values
1782 transpose-longdouble transpose of matrix of long double
1783 floating point values
1784 transpose-double transpose of matrix of double
1785 floating point values
1786 transpose-float transpose of matrix of floating
1787 point values
1788
1789 --eigen-ops N
1790 stop after N Eigen matrix computations
1791
1792 --eigen-size N
1793 specify the 2D matrix size N × N. The default is a 32 × 32
1794 matrix.
1795
1796 EFI variables stressor
1797 --efivar N
1798 start N workers that exercise the Linux
1799 /sys/firmware/efi/efivars and /sys/firmware/efi/vars inter‐
1800 faces by reading the EFI variables. This is a Linux only
1801 stress test for platforms that support the EFI vars inter‐
1802 face and may require the CAP_SYS_ADMIN capability.
1803
1804 --efivar-ops N
1805 stop the efivar stressors after N EFI variable read opera‐
1806 tions.
1807
1808 Non-functional system call (ENOSYS) stressor
1809 --enosys N
1810 start N workers that exercise non-functional system call
1811 numbers. This calls a wide range of system call numbers to
1812 see if it can break a system where these are not wired up
1813 correctly. It also keeps track of system calls that exist
1814 (ones that don't return ENOSYS) so that it can focus on
1815 purely finding and exercising non-functional system calls.
1816 This stressor exercises system calls from 0 to
1817 __NR_syscalls + 1024, random system calls constrained to
1818 the ranges of 0 to 2↑8, 2↑16, 2↑24, 2↑32, 2↑40,
1819 2↑48, 2↑56 and 2↑64, high system call numbers and var‐
1820 ious other bit patterns to try to get wide coverage.
1821 keep the environment clean, each system call being tested
1822 runs in a child process with reduced capabilities.
1823
1824 --enosys-ops N
1825 stop after N bogo enosys system call attempts
1826
1827 Environment variables stressor
1828 --env N
1829 start N workers that create numerous large environment
1830 variables to try to trigger out of memory conditions using
1831 setenv(3). If ENOMEM occurs then the environment is emp‐
1832 tied and another memory filling retry occurs. The process
1833 is restarted if it is killed by the Out Of Memory (OOM)
1834 killer.
1835
1836 --env-ops N
1837 stop after N bogo setenv/unsetenv attempts.
1838
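       The fill-until-ENOMEM loop described above can be sketched as
       follows; the loop here is capped at 64 variables so the sketch
       always terminates quickly rather than filling memory.

```python
import os

# Bounded sketch of the env stressor's fill loop: keep adding large
# variables (setenv(3) under the hood) until ENOMEM, then empty and retry.
added = []
try:
    for i in range(64):
        name = f"STRESS_ENV_{i}"
        os.environ[name] = "x" * 4096     # setenv(3)
        added.append(name)
except OSError:
    pass                                  # ENOMEM: would empty and retry
finally:
    for name in added:
        del os.environ[name]              # unsetenv(3)
env_clean = not any(k.startswith("STRESS_ENV_") for k in os.environ)
```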
1839 Epoll stressor
1840 --epoll N
1841 start N workers that perform various related socket stress
1842 activity using epoll_wait(2) to monitor and handle new con‐
1843 nections. This involves client/server processes performing
1844 rapid connect, send/receives and disconnects on the local
1845 host. Using epoll allows a large number of connections to
1846 be efficiently handled, however, this can lead to the con‐
1847 nection table filling up and blocking further socket con‐
1848 nections, hence impacting on the epoll bogo op stats. For
1849 ipv4 and ipv6 domains, multiple servers are spawned on mul‐
1850 tiple ports. The epoll stressor is for Linux only.
1851
1852 --epoll-domain D
1853 specify the domain to use, the default is unix (aka local).
1854 Currently ipv4, ipv6 and unix are supported.
1855
1856 --epoll-ops N
1857 stop epoll workers after N bogo operations.
1858
1859 --epoll-port P
1860 start at socket port P. For N epoll worker processes, ports
1861 P to P + (N * 4) - 1 are used for the ipv4 and ipv6 domains
1862 and ports P to P + N - 1 are used for the unix domain.
1863
1864 --epoll-sockets N
1865 specify the maximum number of concurrently open sockets al‐
1866 lowed in server. Setting a high value impacts on memory
1867 usage and may trigger out of memory conditions.
1868
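       The epoll_wait(2) pattern the stressor exercises can be sketched as
       follows; this Linux-only sketch uses a connected socket pair in
       place of real TCP clients and servers.

```python
import select
import socket

# Minimal sketch of the epoll pattern: register one end of a socket pair
# with a level-triggered epoll instance and wait for readability.
a, b = socket.socketpair()
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN)

a.sendall(b"ping")
events = ep.poll(1.0)                 # like epoll_wait(2) with a timeout
readable = any(mask & select.EPOLLIN for _, mask in events)
data = b.recv(4) if readable else b""

ep.close()
a.close()
b.close()
```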
1869 Event file descriptor (eventfd) stressor
1870 --eventfd N
1871 start N parent and child worker processes that read and
1872 write 8 byte event messages between them via the eventfd
1873 mechanism (Linux only).
1874
1875 --eventfd-nonblock
1876 enable EFD_NONBLOCK to allow non-blocking on the event file
1877 descriptor. This will cause reads and writes to return with
1878 EAGAIN rather than blocking and hence causing a high rate of
1879 polling I/O.
1880
1881 --eventfd-ops N
1882 stop eventfd workers after N bogo operations.
1883
1884 Exec processes stressor
1885 --exec N
1886 start N workers continually forking children that exec
1887 stress-ng and then exit almost immediately. If a system has
1888 pthread support then 1 in 4 of the exec's will be from in‐
1889 side a pthread to exercise exec'ing from inside a pthread
1890 context.
1891
1892 --exec-fork-method [ clone | fork | rfork | spawn | vfork ]
1893 select the process creation method using clone(2), fork(2),
1894 BSD rfork(2), posix_spawn(3) or vfork(2). Note that vfork
1895 will only exec programs using execve due to the constraints
1896 on the shared stack between the parent and the child
1897 process.
1898
1899 --exec-max P
1900 create P child processes that exec stress-ng and then wait
1901 for them to exit per iteration. The default is 4096; higher
1902 values may create many temporary zombie processes that are
1903 waiting to be reaped. One can potentially fill up the
1904 process table using high values for --exec-max and --exec.
1905
1906 --exec-method [ all | execve | execveat ]
1907 select the exec system call to use; all will perform a ran‐
1908 dom choice between execve(2) and execveat(2), execve will
1909 use execve(2) and execveat will use execveat(2) if it is
1910 available.
1911
1912 --exec-no-pthread
1913 do not use pthread_create(3).
1914
1915 --exec-ops N
1916 stop exec stress workers after N bogo operations.
1917
1918 Exiting pthread groups stressor
1919 --exit-group N
1920 start N workers that create 16 pthreads and terminate the
1921 pthreads and the controlling child process using
1922 exit_group(2). (Linux only stressor).
1923
1924 --exit-group-ops N
1925 stop after N iterations of pthread creation and deletion
1926 loops.
1927
1928 File space allocation (fallocate) stressor
1929 -F N, --fallocate N
1930 start N workers continually fallocating (preallocating file
1931 space) and ftruncating (file truncating) temporary files.
1932 If the file is larger than the free space, fallocate will
1933 produce an ENOSPC error which is ignored by this stressor.
1934
1935 --fallocate-bytes N
1936 allocated file size, the default is 1 GB. One can specify
1937 the size as % of free space on the file system or in units
1938 of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1939 m or g.
1940
1941 --fallocate-ops N
1942 stop fallocate stress workers after N bogo fallocate opera‐
1943 tions.
1944
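       One preallocate/truncate cycle of the kind described above can be
       sketched as follows; sizes are shrunk from the 1 GB default for
       illustration, and posix_fallocate(3) backs onto fallocate(2).

```python
import os
import tempfile

# Sketch of one fallocate/ftruncate cycle on a temporary file.
tmp = tempfile.NamedTemporaryFile(delete=False)
try:
    os.posix_fallocate(tmp.fileno(), 0, 64 * 1024)  # preallocate 64 KB
    size_allocated = os.fstat(tmp.fileno()).st_size
    os.ftruncate(tmp.fileno(), 0)                   # truncate back to empty
    size_truncated = os.fstat(tmp.fileno()).st_size
finally:
    tmp.close()
    os.unlink(tmp.name)
```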
1945 Filesystem notification (fanotify) stressor
1946 --fanotify N
1947 start N workers performing file system activities such as
1948 creating, opening, writing, reading and unlinking files to
1949 exercise the fanotify event monitoring interface (Linux
1950 only). Each stressor runs a child process to generate file
1951 events and a parent process to read file events using fan‐
1952 otify. Has to be run with CAP_SYS_ADMIN capability.
1953
1954 --fanotify-ops N
1955 stop fanotify stress workers after N bogo fanotify events.
1956
1957 CPU branching instruction cache stressor
1958 --far-branch N
1959 start N workers that exercise calls to tens of thousands of
1960 functions that are relatively far from the caller. All
1961 functions are 1 op instructions that return to the caller.
1962 The functions are placed in pages that are memory mapped
1963 with a wide spread of fixed virtual addresses. Function
1964 calls are pre-shuffled to create a randomized mix of ad‐
1965 dresses to call. This stresses the instruction cache and
1966 any instruction TLBs.
1967
1968 --far-branch-ops N
1969 stop after N far branch bogo-ops. One full cycle of calling
1970 all the tens of thousands of functions equates to one bogo-
1971 op.
1972
1973 --far-branch-pages N
1974 specify the number of pages to allocate for far branch
1975 functions. The number of functions per page depends on the
1976 processor architecture, for example, x86 will have 4096 x 1
1977 byte return instructions per 4K page, whereas SPARC64 will
1978 have only 512 x 8 byte return instructions per 4K page.
1979
1980 Page fault stressor
1981 --fault N
1982 start N workers that generate minor and major page faults.
1983
1984 --fault-ops N
1985 stop the page fault workers after N bogo page fault opera‐
1986 tions.
1987
1988 Fcntl stressor
1989 --fcntl N
1990 start N workers that perform fcntl(2) calls with various
1991 commands. The exercised commands (if available) are:
1992 F_DUPFD, F_DUPFD_CLOEXEC, F_GETFD, F_SETFD, F_GETFL,
1993 F_SETFL, F_GETOWN, F_SETOWN, F_GETOWN_EX, F_SETOWN_EX,
1994 F_GETSIG, F_SETSIG, F_GETLK, F_SETLK, F_SETLKW,
1995 F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW.
1996
1997 --fcntl-ops N
1998 stop the fcntl workers after N bogo fcntl operations.
1999
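       Two of the command families listed above can be sketched as follows:
       toggling O_NONBLOCK with F_GETFL/F_SETFL and duplicating a
       descriptor with F_DUPFD.

```python
import fcntl
import os
import tempfile

# Sketch of fcntl(2) command usage on a temporary file descriptor.
tmp = tempfile.TemporaryFile()
fd = tmp.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)                  # F_GETFL
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)   # F_SETFL
nonblocking = bool(fcntl.fcntl(fd, fcntl.F_GETFL) & os.O_NONBLOCK)
dup_fd = fcntl.fcntl(fd, fcntl.F_DUPFD, 0)              # F_DUPFD
os.close(dup_fd)
tmp.close()
```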
2000 File extent (fiemap) stressor
2001 --fiemap N
2002 start N workers that each create a file with many randomly
2003 changing extents and have 4 child processes per worker that
2004 gather the extent information using the FS_IOC_FIEMAP
2005 ioctl(2).
2006
2007 --fiemap-bytes N
2008 specify the size of the fiemap'd file in bytes. One can
2009 specify the size as % of free space on the file system or
2010 in units of Bytes, KBytes, MBytes and GBytes using the suf‐
2011 fix b, k, m or g. Larger files will contain more extents,
2012 causing more stress when gathering extent information.
2013
2014 --fiemap-ops N
2015 stop after N fiemap bogo operations.
2016
2017 FIFO named pipe stressor
2018 --fifo N
2019 start N workers that exercise a named pipe by transmitting
2020 64 bit integers.
2021
2022 --fifo-data-size N
2023 set the byte size of the fifo write/reads, default is 8,
2024 range 8..4096.
2025
2026 --fifo-ops N
2027 stop fifo workers after N bogo pipe write operations.
2028
2029 --fifo-readers N
2030 for each worker, create N fifo reader workers that read the
2031 named pipe using simple blocking reads. Default is 4, range
2032 1..64.
2033
2034 File I/O control (ioctl) stressor
2035 --file-ioctl N
2036 start N workers that exercise various file specific
2037 ioctl(2) calls. This will attempt to use the FIONBIO,
2038 FIOQSIZE, FIGETBSZ, FIOCLEX, FIONCLEX, FIOASYNC,
2039 FIFREEZE, FITHAW, FICLONE, FICLONERANGE, FIONREAD,
2040 FIONWRITE and FS_IOC_RESVSP ioctls if these are defined.
2041
2042 --file-ioctl-ops N
2043 stop file-ioctl workers after N file ioctl bogo operations.
2044
2045 Filename stressor
2046 --filename N
2047 start N workers that exercise file creation using various
2048 length filenames containing a range of allowed filename
2049 characters. This will try to see if it can exceed the file
2050 system allowed filename length as well as test various
2051 filename lengths between 1 and the maximum allowed by the
2052 file system.
2053
2054 --filename-ops N
2055 stop filename workers after N bogo filename tests.
2056
2057 --filename-opts opt
2058 use characters in the filename based on option 'opt'. Valid
2059 options are:
2060
2061 Option Description
2062 probe default option, probe the file system for valid
2063 allowed characters in a file name and use these
2064 posix use characters as specified by The Open Group Base
2065 Specifications Issue 7, POSIX.1-2008, 3.278 Porta‐
2066 ble Filename Character Set
2067 ext use characters allowed by the ext2, ext3, ext4
2068 file systems, namely any 8 bit character apart
2069 from NUL and /
2070
2071 File locking (flock) stressor
2072 --flock N
2073 start N workers locking on a single file.
2074
2075 --flock-ops N
2076 stop flock stress workers after N bogo flock operations.
2077
2078 Cache flushing stressor
2079 --flush-cache N
2080 start N workers that flush the data and instruction cache
2081 (where possible). Some architectures may not support cache
2082 flushing on either cache, in which case these become no-
2083 ops.
2084
2085 --flush-cache-ops N
2086 stop after N cache flush iterations.
2087
2088 Fused Multiply/Add floating point operations (fma) stressor
2089 --fma N
2090 start N workers that exercise single and double precision
2091 floating point multiplication and add operations on arrays
2092 of 512 floating point values. On more modern processors
2093 (Intel Haswell, AMD Bulldozer and Piledriver) with modern C
2094 compilers these will be performed using fused-multiply-add
2095 (fma3) opcodes. Operations used are:
2096
2097 a = a × b + c
2098 a = b × a + c
2099 a = b × c + a
2100
2101 --fma-ops N
2102 stop after N bogo-loops of the 3 above operations on 512
2103 single and double precision floating point numbers.
2104
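       The three operation forms above can be sketched in miniature; the
       stressor uses 512-element single and double precision C arrays with
       fma3 opcodes where the compiler emits them.

```python
# Apply the three fused-multiply-add forms to a small array of floats.
b, c = 1.5, 0.25
a = [float(i) for i in range(8)]
for i in range(len(a)):
    a[i] = a[i] * b + c   # a = a × b + c
    a[i] = b * a[i] + c   # a = b × a + c
    a[i] = b * c + a[i]   # a = b × c + a
```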
2105 Process forking stressor
2106 -f N, --fork N
2107 start N workers continually forking children that immedi‐
2108 ately exit.
2109
2110 --fork-max P
2111 create P child processes and then wait for them to exit per
2112 iteration. The default is just 1; higher values will create
2113 many temporary zombie processes that are waiting to be
2114 reaped. One can potentially fill up the process table using
2115 high values for --fork-max and --fork.
2116
2117 --fork-ops N
2118 stop fork stress workers after N bogo operations.
2119
2120 --fork-vm
2121 enable detrimental performance virtual memory advice using
2122 madvise on all pages of the forked process. Where possible
2123 this will try to set every page in the new process with us‐
2124 ing madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE
2125 and MADV_RANDOM flags. Linux only.
2126
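       One iteration of the fork-and-reap cycle described above can be
       sketched as follows, with a --fork-max equivalent of 4:

```python
import os

# Fork children that exit immediately, then reap them all.
pids = []
for _ in range(4):
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child: exit immediately
    pids.append(pid)         # parent: remember child to reap

statuses = [os.waitpid(p, 0)[1] for p in pids]
all_clean = all(os.waitstatus_to_exitcode(s) == 0 for s in statuses)
```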
2127 Heavy process forking stressor
2128 --forkheavy N
2129 start N workers that fork child processes from a parent
2130 that has thousands of allocated system resources. The fork
2131 becomes a heavyweight operation as it has to duplicate the
2132 resource references of the parent. Each stressor instance
2133 creates and reaps up to 4096 child processes that are cre‐
2134 ated and reaped in a first-in first-out manner.
2135
2136 --forkheavy-allocs N
2137 attempt N resource allocation loops per stressor instance.
2138 Resources include pipes, file descriptors, memory mappings,
2139 pthreads, timers, ptys, semaphores, message queues and tem‐
2140 porary files. These create heavyweight processes that are
2141 more time expensive to fork from. Default is 16384.
2142
2143 --forkheavy-mlock
2144 attempt to mlock future allocated pages into memory causing
2145 more memory pressure. If mlock(MCL_FUTURE) is implemented
2146 then this will stop new brk pages from being swapped out.
2147
2148 --forkheavy-ops N
2149 stop after N fork calls.
2150
2151 --forkheavy-procs N
2152 attempt to fork N processes per stressor. The default is
2153 4096 processes.
2154
2155 Floating point operations stressor
2156 --fp N start N workers that exercise addition, multiplication and
2157 division operations on a range of floating point types. For
2158 each type, 8 floating point values are operated upon 65536
2159 times in a loop per bogo op.
2160
2161 --fp-method method
2162 select the floating point method to use, available methods
2163 are:
2164
2165 Method Description
2166 all iterate over all the following floating point methods:
2167 float128add 128 bit floating point add
2168 float80add 80 bit floating point add
2169 float64add 64 bit floating point add
2170 float32add 32 bit binary32 floating point add
2171 floatadd floating point add
2172 doubleadd double precision floating point add
2173 ldoubleadd long double precision floating point add
2174 float128mul 128 bit floating point multiply
2175 float80mul 80 bit floating point multiply
2176 float64mul 64 bit floating point multiply
2177 float32mul 32 bit binary32 floating point multiply
2178 floatmul floating point multiply
2179 doublemul double precision floating point multiply
2180 ldoublemul long double precision floating point multiply
2181 float128div 128 bit floating point divide
2182 float80div 80 bit floating point divide
2183 float64div 64 bit floating point divide
2184 float32div 32 bit binary32 floating point divide
2185 floatdiv floating point divide
2186 doublediv double precision floating point divide
2187 ldoublediv long double precision floating point divide
2188
2189 Note that some of these floating point methods may not be
2190 available on some systems.
2191
2192 --fp-ops N
2193 stop after N floating point bogo ops. Note that bogo-ops
2194 are counted for just standard float, double and long double
2195 floating point types.
2196
2197 Floating point exception stressor
2198 --fp-error N
2199 start N workers that generate floating point exceptions.
2200 Computations are performed to force and check for the
2201 FE_DIVBYZERO, FE_INEXACT, FE_INVALID, FE_OVERFLOW and
2202 FE_UNDERFLOW exceptions. EDOM and ERANGE errors are also
2203 checked.
2204
2205 --fp-error-ops N
2206 stop after N bogo floating point exceptions.
2207
2208 File punch and hole filling stressor
2209 --fpunch N
2210 start N workers that punch and fill holes in a 16 MB file
2211 using five concurrent processes per stressor exercising on
2212 the same file. Where available, this uses fallocate(2) FAL‐
2213 LOC_FL_KEEP_SIZE, FALLOC_FL_PUNCH_HOLE, FAL‐
2214 LOC_FL_ZERO_RANGE, FALLOC_FL_COLLAPSE_RANGE and FAL‐
2215 LOC_FL_INSERT_RANGE to make and fill holes across the file,
2216 breaking it into multiple extents.
2217
2218 --fpunch-ops N
2219 stop fpunch workers after N punch and fill bogo operations.
2220
2221 File size limit stressor
2222 --fsize N
2223 start N workers that exercise file size limits (via setr‐
2224 limit RLIMIT_FSIZE) with file sizes that are fixed, random
2225 and powers of 2. The files are truncated and allocated to
2226 trigger SIGXFSZ signals.
2227
2228 --fsize-ops N
2229 stop after N bogo file size test iterations.
2230
2231 File stats (fstat) stressor
2232 --fstat N
2233 start N workers fstat'ing files in a directory (default is
2234 /dev).
2235
2236 --fstat-dir directory
2237 specify the directory to fstat to override the default of
2238 /dev. All the files in the directory will be fstat'd re‐
2239 peatedly.
2240
2241 --fstat-ops N
2242 stop fstat stress workers after N bogo fstat operations.
2243
2244 /dev/full stressor
2245 --full N
2246 start N workers that exercise /dev/full. This attempts to
2247 write to the device (which should always get error ENOSPC),
2248 to read from the device (which should always return a buf‐
2249 fer of zeros) and to seek randomly on the device (which
2250 should always succeed). (Linux only).
2251
2252 --full-ops N
2253 stop the stress full workers after N bogo I/O operations.
2254
2255 Function argument passing stressor
2256 --funccall N
2257 start N workers that call functions of 1 through to 9 argu‐
2258 ments. By default all functions with a range of argument
2259 types are called, however, this can be changed using the
2260 --funccall-method option. This exercises stack function ar‐
2261 gument passing and re-ordering on the stack and in regis‐
2262 ters.
2263
2264 --funccall-ops N
2265 stop the funccall workers after N bogo function call opera‐
2266 tions. Each bogo operation is 1000 calls of functions of 1
2267 through to 9 arguments of the chosen argument type.
2268
2269 --funccall-method method
2270 specify the method of funccall argument type to be used.
2271 The default is all the types but can be one of bool, uint8,
2272 uint16, uint32, uint64, uint128, float, double, longdouble,
2273 cfloat (complex float), cdouble (complex double), clongdou‐
2274 ble (complex long double), float16, float32, float64,
2275 float80, float128, decimal32, decimal64 and decimal128.
2276 Note that some of these types are only available with spe‐
2277 cific architectures and compiler versions.
2278
2279 Function return stressor
2280 --funcret N
2281 start N workers that pass and return by value various small
2282 to large data types.
2283
2284 --funcret-ops N
2285 stop the funcret workers after N bogo function call opera‐
2286 tions.
2287
2288 --funcret-method method
2289 specify the method of funcret argument type to be used. The
2290 default is uint64_t but can be one of uint8 uint16 uint32
2291 uint64 uint128 float double longdouble float80 float128
2292 decimal32 decimal64 decimal128 uint8x32 uint8x128
2293 uint64x128.
2294
2295 Fast mutex (futex) stressor
2296 --futex N
2297 start N workers that rapidly exercise the futex system
2298 call. Each worker has two processes, a futex waiter and a
2299 futex waker. The waiter waits with a very small timeout to
2300 stress the timeout and rapid polled futex waiting. This is
2301 a Linux specific stress option.
2302
2303 --futex-ops N
2304 stop futex workers after N bogo successful futex wait oper‐
2305 ations.
2306
2307 Fetching data from kernel stressor
2308 --get N
2309 start N workers that call system calls that fetch data from
2310 the kernel, currently these are: getpid, getppid, getcwd,
2311 getgid, getegid, getuid, getgroups, getpgrp, getpgid, get‐
2312 priority, getresgid, getresuid, getrlimit, prlimit,
2313 getrusage, getsid, gettid, getcpu, gettimeofday, uname, ad‐
2314 jtimex, sysfs. Some of these system calls are OS specific.
2315
2316 --get-ops N
2317 stop get workers after N bogo get operations.
2318
2319 Virtual filesystem directories stressor (Linux)
2320 --getdent N
2321 start N workers that recursively read directories /proc,
2322 /dev, /tmp, /sys and /run using getdents and getdents64
2323 (Linux only).
2324
2325 --getdent-ops N
2326 stop getdent workers after N getdent bogo operations.
2327
2328 Random data (getrandom) stressor
2329 --getrandom N
2330 start N workers that get 8192 random bytes from the
2331 /dev/urandom pool using the getrandom(2) system call
2332 (Linux) or getentropy(2) (OpenBSD).
2333
2334 --getrandom-ops N
2335 stop getrandom workers after N bogo get operations.
2336
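       One such fetch can be sketched as follows; os.getrandom wraps
       getrandom(2) on Linux (Python 3.6+), and the size is shrunk from
       8192 bytes for brevity.

```python
import os

# One getrandom bogo-op in miniature: fetch random bytes from the kernel.
buf1 = os.getrandom(32)
buf2 = os.getrandom(32)
distinct = buf1 != buf2   # overwhelmingly likely for 32 random bytes
```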
2337 CPU pipeline and branch prediction stressor
2338 --goto N
2339 start N workers that perform 1024 forward branches (to next
2340 instruction) or backward branches (to previous instruction)
2341 for each bogo operation loop. By default, every 1024
2342 branches the direction is randomly chosen to be forward or
2343 backward. This stressor exercises suboptimal pipelined ex‐
2344 ecution and branch prediction logic.
2345
2346 --goto-direction [ forward | backward | random ]
2347 select the branching direction in the stressor loop: for‐
2348 ward for forward only branching, backward for backward
2349 only branching, random for a random choice of forward or
2350 backward branching every 1024 branches.
2351
2352 --goto-ops N
2353 stop goto workers after N bogo loops of 1024 branch in‐
2354 structions.
2355
2356 2D GPU stressor
2357 --gpu N
2358 start N workers that exercise the GPU. This specifies a 2-D
2359 texture image that allows the elements of an image array to
2360 be read by shaders, and renders primitives using an OpenGL
2361 context.
2362
2363 --gpu-devnode DEVNAME
2364 specify the device node name of the GPU device, the default
2365 is /dev/dri/renderD128.
2366
2367 --gpu-frag N
2368 specify shader core usage per pixel, this sets N loops in
2369 the fragment shader.
2370
2371 --gpu-ops N
2372 stop gpu workers after N render loop operations.
2373
2374 --gpu-tex-size N
2375 specify upload texture N × N, by default this value is 4096
2376 × 4096.
2377
2378 --gpu-xsize X
2379 use a framebuffer size of X pixels. The default is 256 pix‐
2380 els.
2381
2382 --gpu-ysize Y
2383 use a framebuffer size of Y pixels. The default is 256 pix‐
2384 els.
2385
2386 --gpu-upload N
2387 specify upload texture N times per frame, the default value
2388 is 1.
2389
2390 Handle stressor
2391 --handle N
2392 start N workers that exercise the name_to_handle_at(2) and
2393 open_by_handle_at(2) system calls. (Linux only).
2394
2395 --handle-ops N
2396 stop after N handle bogo operations.
2397
2398 String hashing stressor
2399 --hash N
2400 start N workers that exercise various hashing functions.
2401 Random strings from 1 to 128 bytes are hashed and the hash‐
2402 ing rate and chi squared value are calculated from the
2403 number of hashes performed over a period of time. The chi
2404 squared value is a goodness-of-fit measure; it compares the
2405 actual distribution of items in hash buckets against the
2406 expected distribution. Typically a chi squared value of
2407 0.95..1.05 indicates a good hash distribution.
2408
2409 --hash-method method
2410 specify the hashing method to use, by default all the hash‐
2411 ing methods are cycled through. Methods available are:
2412
2413 Method Description
2414 all cycle through all the hashing methods
2415 adler32 Mark Adler checksum, a modification of the
2416 Fletcher checksum
2417 coffin xor and 5 bit rotate left hash
2418 coffin32 xor and 5 bit rotate left hash with 32 bit
2419 fetch optimization
2420 crc32c compute CRC32C (Castagnoli CRC32) integer hash
2421 djb2a Dan Bernstein hash using the xor variant
2422 fnv1a FNV-1a Fowler-Noll-Vo hash using the xor then
2423 multiply variant
2424 jenkin Jenkin's integer hash
2425 kandr Kernighan and Ritchie's multiply by 31 and add
2426 hash from "The C Programming Language", 2nd
2427 Edition
2428 knuth Donald E. Knuth's hash from "The Art Of Com‐
2429 puter Programming", Volume 3, chapter 6.4
2430 loselose Kernighan and Ritchie's simple hash from "The C
2431 Programming Language", 1st Edition
2432 mid5 xor shift hash of the middle 5 characters of
2433 the string. Designed by Colin Ian King
2434 muladd32 simple multiply and add hash using 32 bit math
2435 and xor folding of overflow
2436 muladd64 simple multiply and add hash using 64 bit math
2437 and xor folding of overflow
2438 mulxror32 32 bit multiply, xor and rotate right. Mangles
2439 32 bits where possible. Designed by Colin Ian
2440 King
2443 mulxror64 64 bit multiply, xor and rotate right. 64 Bit
2444 version of mulxror32
2445 murmur3_32 murmur3_32 hash, Austin Appleby's Murmur3 hash,
2446 32 bit variant
2447 nhash exim's nhash.
2448 pjw a non-cryptographic hash function created by
2449 Peter J. Weinberger of AT&T Bell Labs, used in
2450 UNIX ELF object files
2451 sdbm sdbm hash as used in the SDBM database and GNU
2452 awk
2453              sedgwick         simple hash from Robert Sedgewick's C program‐
2454                               ming book
2455 sobel Justin Sobel's bitwise shift hash
2456 x17 multiply by 17 and add. The multiplication can
2457 be optimized down to a fast right shift by 4
2458 and add on some architectures
2459 xor simple rotate shift and xor of values
2460 xorror32 32 bit exclusive-or with right rotate hash, a
2461 fast string hash, designed by Colin Ian King
2462 xorror64 64 bit version of xorror32
2463 xxhash the "Extremely fast" hash in non-streaming mode
2464
2465 --hash-ops N
2466 stop after N hashing rounds
2467
2468 File-system stressor
2469 -d N, --hdd N
2470 start N workers continually writing, reading and removing
2471 temporary files. The default mode is to stress test sequen‐
2472 tial writes and reads. With the --aggressive option en‐
2473 abled without any --hdd-opts options the hdd stressor will
2474                 work through all the --hdd-opts options one by one to cover
2475 a range of I/O options.
2476
2477 --hdd-bytes N
2478 write N bytes for each hdd process, the default is 1 GB.
2479 One can specify the size as % of free space on the file
2480 system or in units of Bytes, KBytes, MBytes and GBytes us‐
2481 ing the suffix b, k, m or g.
2482
2483 --hdd-opts list
2484 specify various stress test options as a comma separated
2485 list. Options are as follows:
2486
2487 Option Description
2488 direct try to minimize cache effects of the I/O.
2489 File I/O writes are performed directly from
2490 user space buffers and synchronous transfer
2491 is also attempted. To guarantee synchro‐
2492 nous I/O, also use the sync option.
2493 dsync ensure output has been transferred to un‐
2494 derlying hardware and file metadata has
2495 been updated (using the O_DSYNC open flag).
2496 This is equivalent to each write(2) being
2497 followed by a call to fdatasync(2). See
2498 also the fdatasync option.
2499 fadv-dontneed advise kernel to expect the data will not
2500 be accessed in the near future.
2501 fadv-noreuse advise kernel to expect the data to be ac‐
2502 cessed only once.
2503              fadv-normal      advise kernel there is no explicit access
2504                               pattern for the data. This is the default
2505 advice assumption.
2506 fadv-rnd advise kernel to expect random access pat‐
2507 terns for the data.
2508 fadv-seq advise kernel to expect sequential access
2509 patterns for the data.
2510 fadv-willneed advise kernel to expect the data to be ac‐
2511 cessed in the near future.
2512 fsync flush all modified in-core data after each
2513 write to the output device using an ex‐
2514 plicit fsync(2) call.
2515 fdatasync similar to fsync, but do not flush the mod‐
2516 ified metadata unless metadata is required
2517 for later data reads to be handled cor‐
2518 rectly. This uses an explicit fdatasync(2)
2519 call.
2520 iovec use readv/writev multiple buffer I/Os
2521 rather than read/write. Instead of 1
2522 read/write operation, the buffer is broken
2523 into an iovec of 16 buffers.
2526 noatime do not update the file last access time‐
2527 stamp, this can reduce metadata writes.
2528 sync ensure output has been transferred to un‐
2529 derlying hardware (using the O_SYNC open
2530                               flag). This is equivalent to each
2531 write(2) being followed by a call to
2532 fsync(2). See also the fsync option.
2533 rd-rnd read data randomly. By default, written
2534 data is not read back, however, this option
2535 will force it to be read back randomly.
2536 rd-seq read data sequentially. By default, written
2537 data is not read back, however, this option
2538 will force it to be read back sequentially.
2539 syncfs write all buffered modifications of file
2540 metadata and data on the filesystem that
2541 contains the hdd worker files.
2542 utimes force update of file timestamp which may
2543 increase metadata writes.
2544 wr-rnd write data randomly. The wr-seq option can‐
2545 not be used at the same time.
2546 wr-seq write data sequentially. This is the de‐
2547 fault if no write modes are specified.
2548
2549 Note that some of these options are mutually exclusive, for exam‐
2550 ple, there can be only one method of writing or reading. Also,
2551 fadvise flags may be mutually exclusive, for example fadv-willneed
2552 cannot be used with fadv-dontneed.
2553
2554 --hdd-ops N
2555 stop hdd stress workers after N bogo operations.
2556
2557 --hdd-write-size N
2558 specify size of each write in bytes. Size can be from 1
2559 byte to 4MB.
2560
2561 BSD heapsort stressor
2562 --heapsort N
2563 start N workers that sort 32 bit integers using the BSD
2564 heapsort.
2565
2566 --heapsort-ops N
2567 stop heapsort stress workers after N bogo heapsorts.
2568
2569 --heapsort-size N
2570 specify number of 32 bit integers to sort, default is
2571 262144 (256 × 1024).
2572
2573 High resolution timer stressor
2574 --hrtimers N
2575                 start N workers that exercise high resolution timers at a
2576 high frequency. Each stressor starts 32 processes that run
2577 with random timer intervals of 0..499999 nanoseconds. Run‐
2578 ning this stressor with appropriate privilege will run
2579 these with the SCHED_RR policy.
2580
2581 --hrtimers-adjust
2582 enable automatic timer rate adjustment to try to maximize
2583 the hrtimer frequency. The signal rate is measured every
2584 0.1 seconds and the hrtimer delay is adjusted to try and
2585 set the optimal hrtimer delay to generate the highest
2586 hrtimer rates.
2587
2588 --hrtimers-ops N
2589 stop hrtimers stressors after N timer event bogo operations
2590
2591 Hashtable searching (hsearch) stressor
2592 --hsearch N
2593                 start N workers that search an 80% full hash table using
2594 hsearch(3). By default, there are 8192 elements inserted
2595 into the hash table. This is a useful method to exercise
2596 access of memory and processor cache.
2597
2598 --hsearch-ops N
2599 stop the hsearch workers after N bogo hsearch operations
2600 are completed.
2601
2602 --hsearch-size N
2603 specify the number of hash entries to be inserted into the
2604 hash table. Size can be from 1K to 4M.
2605
2606 CPU instruction cache load stressor
2607 --icache N
2608 start N workers that stress the instruction cache by forc‐
2609 ing instruction cache reloads.
2610
2611 --icache-ops N
2612 stop the icache workers after N bogo icache operations are
2613 completed.
2614
2615 ICMP flooding stressor
2616 --icmp-flood N
2617                 start N workers that flood localhost with randomly sized
2618                 ICMP ping packets. This stressor requires the CAP_NET_RAW
2619                 capability.
2620
2621 --icmp-flood-ops N
2622 stop icmp flood workers after N ICMP ping packets have been
2623 sent.
2624
2625 Idle pages stressor (Linux)
2626 --idle-scan N
2627 start N workers that scan the idle page bitmap across a
2628 range of physical pages. This sets and checks for idle
2629 pages via the idle page tracking interface /sys/ker‐
2630 nel/mm/page_idle/bitmap. This is for Linux only.
2631
2632 --idle-scan-ops N
2633 stop after N bogo page scan operations. Currently one bogo
2634 page scan operation is equivalent to setting and checking
2635 64 physical pages.
2636
2637 --idle-page N
2638                 start N workers that walk through every page exercising
2639 the Linux /sys/kernel/mm/page_idle/bitmap interface. Re‐
2640 quires CAP_SYS_RESOURCE capability.
2641
2642 --idle-page-ops N
2643 stop after N bogo idle page operations.
2644
2645 Inode ioctl flags stressor
2646 --inode-flags N
2647 start N workers that exercise inode flags using the
2648 FS_IOC_GETFLAGS and FS_IOC_SETFLAGS ioctl(2). This attempts
2649 to apply all the available inode flags onto a directory and
2650 file even if the underlying file system may not support
2651 these flags (errors are just ignored). Each worker runs 4
2652 threads that exercise the flags on the same directory and
2653 file to try to force races. This is a Linux only stressor,
2654 see ioctl_iflags(2) for more details.
2655
2656 --inode-flags-ops N
2657 stop the inode-flags workers after N ioctl flag setting at‐
2658 tempts.
2659
2660 Inotify stressor
2661 --inotify N
2662 start N workers performing file system activities such as
2663 making/deleting files/directories, moving files, etc. to
2664 stress exercise the various inotify events (Linux only).
2665
2666 --inotify-ops N
2667 stop inotify stress workers after N inotify bogo opera‐
2668 tions.
2669
2670 Data synchronization (sync) stressor
2671 -i N, --io N
2672 start N workers continuously calling sync(2) to commit buf‐
2673 fer cache to disk. This can be used in conjunction with
2674 the --hdd stressor. This is a legacy stressor that is com‐
2675                 patible with the original stress tool.
2676
2677 --io-ops N
2678 stop io stress workers after N bogo operations.
2679
2680 IO mixing stressor
2681 --iomix N
2682 start N workers that perform a mix of sequential, random
2683 and memory mapped read/write operations as well as random
2684 copy file read/writes, forced sync'ing and (if run as root)
2685 cache dropping. Multiple child processes are spawned to
2686 all share a single file and perform different I/O opera‐
2687 tions on the same file.
2688
2689 --iomix-bytes N
2690 write N bytes for each iomix worker process, the default is
2691 1 GB. One can specify the size as % of free space on the
2692 file system or in units of Bytes, KBytes, MBytes and GBytes
2693 using the suffix b, k, m or g.
2694
2695 --iomix-ops N
2696 stop iomix stress workers after N bogo iomix I/O opera‐
2697 tions.
2698
2699 Ioport stressor (x86 Linux)
2700 --ioport N
2701                 start N workers that perform bursts of 16 reads and 16
2702 writes of ioport 0x80 (x86 Linux systems only). I/O per‐
2703 formed on x86 platforms on port 0x80 will cause delays on
2704 the CPU performing the I/O.
2705
2706 --ioport-ops N
2707 stop the ioport stressors after N bogo I/O operations
2708
2709 --ioport-opts [ in | out | inout ]
2710                 specify whether port reads (in), port writes (out) or
2711                 both reads and writes (inout) are to be performed. The
2712                 default is both in and out (inout).
2713
2714 IO scheduling class and priority stressor
2715 --ioprio N
2716 start N workers that exercise the ioprio_get(2) and io‐
2717 prio_set(2) system calls (Linux only).
2718
2719 --ioprio-ops N
2720 stop after N io priority bogo operations.
2721
2722 Io-uring stressor
2723 --io-uring N
2724 start N workers that perform iovec write and read I/O oper‐
2725 ations using the Linux io-uring interface. On each bogo-
2726                 loop 1024 × 512 byte writes and 1024 × 512 byte reads
2727                 are performed on a temporary file.
2728
2729            --io-uring-ops N
2730 stop after N rounds of write and reads.
2731
2732 Ipsec multi-buffer cryptographic stressor
2733 --ipsec-mb N
2734 start N workers that perform cryptographic processing using
2735 the highly optimized Intel Multi-Buffer Crypto for IPsec
2736                 library. Depending on the features available, SSE, AVX, AVX2
2737 and AVX512 CPU features will be used on data encrypted by
2738 SHA, DES, CMAC, CTR, HMAC MD5, HMAC SHA1 and HMAC SHA512
2739 cryptographic routines. This is only available for x86-64
2740 modern Intel CPUs.
2741
2742 --ipsec-mb-feature [ sse | avx | avx2 | avx512 ]
2743 Just use the specified processor CPU feature. By default,
2744 all the available features for the CPU are exercised.
2745
2746 --ipsec-mb-jobs N
2747 Process N multi-block rounds of cryptographic processing
2748 per iteration. The default is 256.
2749
2750 --ipsec-mb-method [ all | cmac | ctr | des | hmac-md5 | hmac-sha1
2751 | hmac-sha512 | sha ]
2752 Select the ipsec-mb crypto/integrity method.
2753
2754 --ipsec-mb-ops N
2755 stop after N rounds of processing of data using the crypto‐
2756 graphic routines.
2757
2758 System interval timer stressor
2759 --itimer N
2760 start N workers that exercise the system interval timers.
2761 This sets up an ITIMER_PROF itimer that generates a SIGPROF
2762 signal. The default frequency for the itimer is 1 MHz,
2763                 however, the Linux kernel will set this to be no more than
2764 the jiffy setting, hence high frequency SIGPROF signals are
2765 not normally possible. A busy loop spins on getitimer(2)
2766 calls to consume CPU and hence decrement the itimer based
2767 on amount of time spent in CPU and system time.
2768
2769 --itimer-freq F
2770 run itimer at F Hz; range from 1 to 1000000 Hz. Normally
2771 the highest frequency is limited by the number of jiffy
2772 ticks per second, so running above 1000 Hz is difficult to
2773 attain in practice.
2774
2775 --itimer-ops N
2776 stop itimer stress workers after N bogo itimer SIGPROF sig‐
2777 nals.
2778
2779 --itimer-rand
2780 select an interval timer frequency based around the inter‐
2781 val timer frequency +/- 12.5% random jitter. This tries to
2782 force more variability in the timer interval to make the
2783 scheduling less predictable.
2784
2785 Jpeg compression stressor
2786 --jpeg N
2787 start N workers that use jpeg compression on a machine gen‐
2788 erated plasma field image. The default image is a plasma
2789 field, however different image types may be selected. The
2790 starting raster line is changed on each compression itera‐
2791 tion to cycle around the data.
2792
2793 --jpeg-height H
2794 use a RGB sample image height of H pixels. The default is
2795 512 pixels.
2796
2797 --jpeg-image [ brown | flat | gradient | noise | plasma | xstripes
2798 ]
2799 select the source image type to be compressed. Available
2800 image types are:
2801
2802 Type Description
2803 brown brown noise, red and green values vary by a 3 bit
2804 value, blue values vary by a 2 bit value.
2805 flat a single random colour for the entire image.
2806 gradient linear gradient of the red, green and blue compo‐
2807 nents across the width and height of the image.
2808 noise random white noise for red, green, blue values.
2809 plasma plasma field with smooth colour transitions and
2810 hard boundary edges.
2811 xstripes a random colour for each horizontal line.
2812
2813 --jpeg-ops N
2814 stop after N jpeg compression operations.
2815
2816 --jpeg-quality Q
2817 use the compression quality Q. The range is 1..100 (1 low‐
2818 est, 100 highest), with a default of 95
2819
2820 --jpeg-width H
2821 use a RGB sample image width of H pixels. The default is
2822 512 pixels.
2823
2824 Judy array stressor
2825 --judy N
2826 start N workers that insert, search and delete 32 bit inte‐
2827 gers in a Judy array using a predictable yet sparse array
2828 index. By default, there are 131072 integers used in the
2829 Judy array. This is a useful method to exercise random ac‐
2830 cess of memory and processor cache.
2831
2832 --judy-ops N
2833 stop the judy workers after N bogo judy operations are com‐
2834 pleted.
2835
2836 --judy-size N
2837 specify the size (number of 32 bit integers) in the Judy
2838 array to exercise. Size can be from 1K to 4M 32 bit inte‐
2839 gers.
2840
2841 Kcmp stressor (Linux)
2842 --kcmp N
2843 start N workers that use kcmp(2) to compare parent and
2844 child processes to determine if they share kernel re‐
2845 sources. Supported only for Linux and requires
2846 CAP_SYS_PTRACE capability.
2847
2848 --kcmp-ops N
2849 stop kcmp workers after N bogo kcmp operations.
2850
2851 Kernel key management stressor
2852 --key N
2853 start N workers that create and manipulate keys using
2854                 add_key(2) and keyctl(2). As many keys are created as the
2855 per user limit allows and then the following keyctl com‐
2856 mands are exercised on each key: KEYCTL_SET_TIMEOUT,
2857 KEYCTL_DESCRIBE, KEYCTL_UPDATE, KEYCTL_READ, KEYCTL_CLEAR
2858 and KEYCTL_INVALIDATE.
2859
2860 --key-ops N
2861 stop key workers after N bogo key operations.
2862
2863 Process signals stressor
2864 --kill N
2865 start N workers sending SIGUSR1 kill signals to a SIG_IGN
2866 signal handler in the stressor and SIGUSR1 kill signal to a
2867 child stressor with a SIGUSR1 handler. Most of the process
2868 time will end up in kernel space.
2869
2870 --kill-ops N
2871 stop kill workers after N bogo kill operations.
2872
2873 Syslog stressor (Linux)
2874 --klog N
2875 start N workers exercising the kernel syslog(2) system
2876 call. This will attempt to read the kernel log with vari‐
2877 ous sized read buffers. Linux only.
2878
2879 --klog-ops N
2880 stop klog workers after N syslog operations.
2881
2882 KVM stressor
2883 --kvm N
2884 start N workers that create, run and destroy a minimal vir‐
2885 tual machine. The virtual machine reads, increments and
2886 writes to port 0x80 in a spin loop and the stressor handles
2887 the I/O transactions. Currently for x86 and Linux only.
2888
2889 --kvm-ops N
2890 stop kvm stressors after N virtual machines have been cre‐
2891 ated, run and destroyed.
2892
2893 CPU L1 cache stressor
2894 --l1cache N
2895 start N workers that exercise the CPU level 1 cache with
2896 reads and writes. A cache aligned buffer that is twice the
2897 level 1 cache size is read and then written in level 1
2898 cache set sized steps over each level 1 cache set. This is
2899 designed to exercise cache block evictions. The bogo-op
2900 count measures the number of million cache lines touched.
2901 Where possible, the level 1 cache geometry is determined
2902 from the kernel, however, this is not possible on some ar‐
2903 chitectures or kernels, so one may need to specify these
2904 manually. One can specify 3 out of the 4 cache geometric
2905 parameters, these are as follows:
2906
2907 --l1cache-line-size N
2908 specify the level 1 cache line size (in bytes)
2909
2910 --l1cache-method [ forward | random | reverse ]
2911 select the method of exercising a l1cache sized buffer. The
2912 default is a forward scan, random picks random bytes to ex‐
2913 ercise, reverse scans in reverse.
2914
2915 --l1cache-mlock
2916 attempt to mlock the l1cache size buffer into memory to
2917 prevent it from being swapped out.
2918
2919 --l1cache-ops N
2920 specify the number of cache read/write bogo-op loops to run
2921
2922 --l1cache-sets N
2923 specify the number of level 1 cache sets
2924
2925 --l1cache-size N
2926 specify the level 1 cache size (in bytes)
2927
2928 --l1cache-ways N
2929 specify the number of level 1 cache ways
2930
2931 Landlock stressor (Linux >= 5.13)
2932 --landlock N
2933 start N workers that exercise Linux 5.13 landlocking. A
2934 range of landlock_create_ruleset flags are exercised with a
2935 read only file rule to see if a directory can be accessed
2936 and a read-write file create can be blocked. Each ruleset
2937 attempt is exercised in a new child context and this is the
2938 limiting factor on the speed of the stressor.
2939
2940 --landlock-ops N
2941 stop the landlock stressors after N landlock ruleset bogo
2942 operations.
2943
2944 File lease stressor
2945 --lease N
2946 start N workers locking, unlocking and breaking leases via
2947 the fcntl(2) F_SETLEASE operation. The parent processes
2948 continually lock and unlock a lease on a file while a user
2949 selectable number of child processes open the file with a
2950 non-blocking open to generate SIGIO lease breaking notifi‐
2951 cations to the parent. This stressor is only available if
2952 F_SETLEASE, F_WRLCK and F_UNLCK support is provided by fc‐
2953 ntl(2).
2954
2955 --lease-breakers N
2956 start N lease breaker child processes per lease worker.
2957 Normally one child is plenty to force many SIGIO lease
2958 breaking notification signals to the parent, however, this
2959 option allows one to specify more child processes if re‐
2960 quired.
2961
2962 --lease-ops N
2963 stop lease workers after N bogo operations.
2964
2965 LED stressor (Linux)
2966 --led N
2967 start N workers that exercise the /sys/class/leds inter‐
2968 faces to set LED brightness levels and the various trigger
2969 settings. This needs to be run with root privilege to be
2970 able to write to these settings successfully. Non-root
2971 privilege will ignore failed writes.
2972
2973 --led-ops N
2974 stop after N interfaces are exercised.
2975
2976 Hardlink stressor
2977 --link N
2978 start N workers creating and removing hardlinks.
2979
2980 --link-ops N
2981 stop link stress workers after N bogo operations.
2982
2983 --link-sync
2984 sync dirty data and metadata to disk.
2985
2986 List data structures stressor
2987 --list N
2988 start N workers that exercise list data structures. The de‐
2989 fault is to add, find and remove 5,000 64 bit integers into
2990 circleq (doubly linked circle queue), list (doubly linked
2991 list), slist (singly linked list), slistt (singly linked
2992 list using tail), stailq (singly linked tail queue) and
2993 tailq (doubly linked tail queue) lists. The intention of
2994 this stressor is to exercise memory and cache with the var‐
2995 ious list operations.
2996
2997 --list-method [ all | circleq | list | slist | stailq | tailq ]
2998 specify the list to be used. By default, all the list meth‐
2999 ods are used (the 'all' option).
3000
3001 --list-ops N
3002 stop list stressors after N bogo ops. A bogo op covers the
3003 addition, finding and removing all the items into the
3004 list(s).
3005
3006 --list-size N
3007 specify the size of the list, where N is the number of 64
3008 bit integers to be added into the list.
3009
3010 Last level of cache stressor
3011 --llc-affinity N
3012 start N workers that exercise the last level of cache (LLC)
3013 by read/write activity across a LLC sized buffer and then
3014 changing CPU affinity after each round of read/writes. This
3015 can cause non-local memory stalls and LLC read/write
3016 misses.
3017
3018 --llc-affinity-mlock
3019 attempt to mlock the LLC sized buffer into memory to pre‐
3020 vent it from being swapped out.
3021
3022 --llc-affinity-ops N
3023 stop after N rounds of LLC read/writes.
3024
3025 Load average (loadavg) stressor
3026 --loadavg N
3027 start N workers that attempt to create thousands of
3028 pthreads that run at the lowest nice priority to force very
3029 high load averages. Linux systems will also perform some
3030 I/O writes as pending I/O is also factored into system load
3031 accounting.
3032
3033 --loadavg-max N
3034 set the maximum number of pthreads to create to N. N may be
3035                 reduced if there is a system limit on the number of
3036 pthreads that can be created.
3037
3038 --loadavg-ops N
3039 stop loadavg workers after N bogo scheduling yields by the
3040 pthreads have been reached.
3041
3042 Lock and increment memory stressor (x86 and ARM)
3043 --lockbus N
3044 start N workers that rapidly lock and increment 64 bytes of
3045 randomly chosen memory from a 16MB mmap'd region (Intel x86
3046 and ARM CPUs only). This will cause cacheline misses and
3047 stalling of CPUs.
3048
3049 --lockbus-nosplit
3050 disable split locks that lock across cache line boundaries.
3051
3052 --lockbus-ops N
3053 stop lockbus workers after N bogo operations.
3054
3055 POSIX lock (F_SETLK/F_GETLK) stressor
3056 --locka N
3057 start N workers that randomly lock and unlock regions of a
3058 file using the POSIX advisory locking mechanism (see fc‐
3059 ntl(2), F_SETLK, F_GETLK). Each worker creates a 1024 KB
3060 file and attempts to hold a maximum of 1024 concurrent
3061 locks with a child process that also tries to hold 1024
3062                 concurrent locks. Old locks are unlocked on a first-in,
3063 first-out basis.
3064
3065 --locka-ops N
3066 stop locka workers after N bogo locka operations.
3067
3068 POSIX lock (lockf) stressor
3069 --lockf N
3070 start N workers that randomly lock and unlock regions of a
3071 file using the POSIX lockf(3) locking mechanism. Each
3072 worker creates a 64 KB file and attempts to hold a maximum
3073 of 1024 concurrent locks with a child process that also
3074 tries to hold 1024 concurrent locks. Old locks are unlocked
3075                 on a first-in, first-out basis.
3076
3077 --lockf-nonblock
3078 instead of using blocking F_LOCK lockf(3) commands, use
3079 non-blocking F_TLOCK commands and re-try if the lock
3080 failed. This creates extra system call overhead and CPU
3081 utilisation as the number of lockf workers increases and
3082 should increase locking contention.
3083
3084 --lockf-ops N
3085 stop lockf workers after N bogo lockf operations.
3086
3087 POSIX lock (F_OFD_SETLK/F_OFD_GETLK) stressor
3088 --lockofd N
3089 start N workers that randomly lock and unlock regions of a
3090 file using the Linux open file description locks (see fc‐
3091 ntl(2), F_OFD_SETLK, F_OFD_GETLK). Each worker creates a
3092 1024 KB file and attempts to hold a maximum of 1024 concur‐
3093 rent locks with a child process that also tries to hold
3094                 1024 concurrent locks. Old locks are unlocked on a first-
3095 in, first-out basis.
3096
3097 --lockofd-ops N
3098 stop lockofd workers after N bogo lockofd operations.
3099
3100 Long jump (longjmp) stressor
3101 --longjmp N
3102 start N workers that exercise setjmp(3)/longjmp(3) by rapid
3103 looping on longjmp calls.
3104
3105 --longjmp-ops N
3106 stop longjmp stress workers after N bogo longjmp operations
3107 (1 bogo op is 1000 longjmp calls).
3108
3109 Loopback stressor (Linux)
3110 --loop N
3111 start N workers that exercise the loopback control device.
3112 This creates 2MB loopback devices, expands them to 4MB,
3113 performs some loopback status information get and set oper‐
3114                 ations and then destroys them. Linux only and requires
3115 CAP_SYS_ADMIN capability.
3116
3117 --loop-ops N
3118 stop after N bogo loopback creation/deletion operations.
3119
3120 Linear search stressor
3121 --lsearch N
3122                 start N workers that linearly search an unsorted array of 32
3123 bit integers using lsearch(3). By default, there are 8192
3124 elements in the array. This is a useful method to exercise
3125 sequential access of memory and processor cache.
3126
3127 --lsearch-ops N
3128 stop the lsearch workers after N bogo lsearch operations
3129 are completed.
3130
3131 --lsearch-size N
3132 specify the size (number of 32 bit integers) in the array
3133 to lsearch. Size can be from 1K to 4M.
3134
3135 Madvise stressor
3136 --madvise N
3137 start N workers that apply random madvise(2) advise set‐
3138 tings on pages of a 4MB file backed shared memory mapping.
3139
3140 --madvise-hwpoison
3141 enable MADV_HWPOISON page poisoning (if available, only
3142 when run as root). This will page poison a few pages and
3143 will cause kernel error messages to be reported.
3144
3145 --madvise-ops N
3146 stop madvise stressors after N bogo madvise operations.
3147
3148 Memory allocation stressor
3149 --malloc N
3150 start N workers continuously calling malloc(3), calloc(3),
3151 realloc(3), posix_memalign(3), aligned_alloc(3), mema‐
3152 lign(3) and free(3). By default, up to 65536 allocations
3153 can be active at any point, but this can be altered with
3154 the --malloc-max option. Allocation, reallocation and
3155              freeing are chosen at random; 50% of the time memory is
3156              allocated (via one of malloc, calloc, realloc, posix_mem‐
3157              align, aligned_alloc or memalign) and 50% of the time al‐
3158              locations are free'd. Allocation sizes are also random, with
3159 the maximum allocation size controlled by the --mal‐
3160 loc-bytes option, the default size being 64K. The worker
3161 is re-started if it is killed by the out of memory (OOM)
3162 killer.
3163
3164 --malloc-bytes N
3165 maximum per allocation/reallocation size. Allocations are
3166 randomly selected from 1 to N bytes. One can specify the
3167 size as % of total available memory or in units of Bytes,
3168 KBytes, MBytes and GBytes using the suffix b, k, m or g.
3169 Large allocation sizes cause the memory allocator to use
3170 mmap(2) rather than expanding the heap using brk(2).
3171
3172 --malloc-max N
3173 maximum number of active allocations allowed. Allocations
3174                 are chosen at random and placed in an allocation slot. Due
3175                 to the approximate 50%/50% split between allocation and freeing,
3176 typically half of the allocation slots are in use at any
3177 one time.
3178
3179 --malloc-mlock
3180 attempt to mlock the allocations into memory to prevent
3181 them from being swapped out.
3182
3183 --malloc-ops N
3184                 stop after N malloc bogo operations. One bogo operation
3185 relates to a successful malloc(3), calloc(3) or realloc(3).
3186
3187 --malloc-pthreads N
3188 specify number of malloc stressing concurrent pthreads to
3189 run. The default is 0 (just one main process, no pthreads).
3190 This option will do nothing if pthreads are not supported.
3191
3192 --malloc-thresh N
3193 specify the threshold where malloc uses mmap(2) instead of
3194 sbrk(2) to allocate more memory. This is only available on
3195 systems that provide the GNU C mallopt(3) tuning function.
3196
3197 --malloc-touch
3198 touch every allocated page to force pages to be populated
3199 in memory. This will increase the memory pressure and exer‐
3200 cise the virtual memory harder. By default the malloc
3201 stressor will madvise pages into memory or use mincore to
3202 check for non-resident memory pages and try to force them
3203 into memory; this option aggressively forces pages to be
3204 memory resident.
3205
3206 --malloc-zerofree
3207 zero allocated memory before free'ing. This can be useful
3208 in touching broken allocations and triggering failures.
3209 Also useful for forcing extra cache/memory writes.
3210
3211 2D Matrix stressor
3212 --matrix N
3213 start N workers that perform various matrix operations on
3214 floating point values. Testing on 64 bit x86 hardware shows
3215 that this provides a good mix of memory, cache and floating
3216 point operations and is an excellent way to make a CPU run
3217 hot.
3218
3219 By default, this will exercise all the matrix stress meth‐
3220 ods one by one. One can specify a specific matrix stress
3221 method with the --matrix-method option.
3222
3223 --matrix-method method
3224 specify a matrix stress method. Available matrix stress
3225 methods are described as follows:
3226
3227 Method Description
3228 all iterate over all the below matrix stress methods
3229 add add two N × N matrices
3230 copy copy one N × N matrix to another
3231 div divide an N × N matrix by a scalar
3232 frobenius Frobenius product of two N × N matrices
3233 hadamard Hadamard product of two N × N matrices
3234 identity create an N × N identity matrix
3235 mean arithmetic mean of two N × N matrices
3236 mult multiply an N × N matrix by a scalar
3237 negate negate an N × N matrix
3238 prod product of two N × N matrices
3239 sub subtract one N × N matrix from another N × N ma‐
3240 trix
3241 square multiply an N × N matrix by itself
3242 trans transpose an N × N matrix
3243 zero zero an N × N matrix
3244
3245 --matrix-ops N
3246 stop matrix stress workers after N bogo operations.
3247
3248 --matrix-size N
3249 specify the N × N size of the matrices. Smaller values re‐
3250 sult in a floating point compute throughput bound stressor,
3251                 whereas large values result in a cache and/or memory band‐
3252 width bound stressor.
3253
3254 --matrix-yx
3255 perform matrix operations in order y by x rather than the
3256 default x by y. This is suboptimal ordering compared to the
3257 default and will perform more data cache stalls.
3258
3259 3D Matrix stressor
3260 --matrix-3d N
3261 start N workers that perform various 3D matrix operations
3262 on floating point values. Testing on 64 bit x86 hardware
3263 shows that this provides a good mix of memory, cache and
3264 floating point operations and is an excellent way to make a
3265 CPU run hot.
3266
3267 By default, this will exercise all the 3D matrix stress
3268 methods one by one. One can specify a specific 3D matrix
3269 stress method with the --matrix-3d-method option.
3270
3271 --matrix-3d-method method
3272 specify a 3D matrix stress method. Available 3D matrix
3273 stress methods are described as follows:
3274
3275 Method Description
3276 all iterate over all the below matrix stress methods
3277 add add two N × N × N matrices
3278 copy copy one N × N × N matrix to another
3279 div divide an N × N × N matrix by a scalar
3280 frobenius Frobenius product of two N × N × N matrices
3281 hadamard Hadamard product of two N × N × N matrices
3282 identity create an N × N × N identity matrix
3283 mean arithmetic mean of two N × N × N matrices
3284 mult multiply an N × N × N matrix by a scalar
3285 negate negate an N × N × N matrix
3286 sub subtract one N × N × N matrix from another N × N
3287 × N matrix
3288 trans transpose an N × N × N matrix
3289 zero zero an N × N × N matrix
3290
3291 --matrix-3d-ops N
3292 stop the 3D matrix stress workers after N bogo operations.
3293
3294 --matrix-3d-size N
3295 specify the N × N × N size of the matrices. Smaller values
3296 result in a floating point compute throughput bound stres‐
3297             sor, whereas large values result in a cache and/or memory
3298 bandwidth bound stressor.
3299
3300 --matrix-3d-zyx
3301 perform matrix operations in order z by y by x rather than
3302 the default x by y by z. This is suboptimal ordering com‐
3303 pared to the default and will perform more data cache
3304 stalls.
3305
3306 Memory contention stressor
3307 --mcontend N
3308 start N workers that produce memory contention read/write
3309 patterns. Each stressor runs with 5 threads that read and
3310 write to two different mappings of the same underlying
3311 physical page. Various caching operations are also exer‐
3312 cised to cause sub-optimal memory access patterns. The
3313 threads also randomly change CPU affinity to exercise CPU
3314 and memory migration stress.
3315
3316 --mcontend-ops N
3317 stop mcontend stressors after N bogo read/write operations.
3318
3319 Memory barrier stressor (Linux)
3320 --membarrier N
3321 start N workers that exercise the membarrier system call
3322 (Linux only).
3323
3324 --membarrier-ops N
3325 stop membarrier stress workers after N bogo membarrier op‐
3326 erations.
3327
3328 Memory copy (memcpy) stressor
3329 --memcpy N
3330             start N workers that copy data to and from a buffer using
3331 memcpy(3) and then move the data in the buffer with mem‐
3332 move(3) with 3 different alignments. This will exercise the
3333 data cache and memory copying.
3334
3335 --memcpy-method [ all | libc | builtin | naive | naive_o0 ..
3336 naive_o3 ]
3337 specify a memcpy copying method. Available memcpy methods
3338 are described as follows:
3339
3340 Method Description
3341 all use libc, builtin and naïve methods
3342 libc use libc memcpy and memmove functions, this is
3343 the default
3344 builtin use the compiler built in optimized memcpy and
3345 memmove functions
3346             naive      use naïve byte by byte copying and memory moving
3347                        built with default compiler optimization flags
3348             naive_o0   use unoptimized naïve byte by byte copying and
3349                        memory moving built with -O0
3350             naive_o1   use naïve byte by byte copying and memory moving
3351                        built with -O1 optimization
3352             naive_o2   use optimized naïve byte by byte copying and mem‐
3353                        ory moving built with -O2 optimization and where
3354                        possible use CPU specific optimizations
3355             naive_o3   use optimized naïve byte by byte copying and mem‐
3356                        ory moving built with -O3 optimization and where
3357                        possible use CPU specific optimizations
3358
3359 --memcpy-ops N
3360 stop memcpy stress workers after N bogo memcpy operations.
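The naive versus libc distinction can be sketched in Python (an illustration only; `naive_copy` is a hypothetical helper, and stress-ng's methods are compiled C):

```python
import ctypes

def naive_copy(dst, src, n):
    # byte by byte copy, in the spirit of the "naive" memcpy method
    for i in range(n):
        dst[i] = src[i]

src = bytearray(b"0123456789abcdef")
dst1 = bytearray(16)
dst2 = bytearray(16)
naive_copy(dst1, src, 16)
# libc-backed copy; ctypes.memmove behaves like memmove(3)
ctypes.memmove((ctypes.c_char * 16).from_buffer(dst2), bytes(src), 16)
assert dst1 == dst2 == src
```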
3361
3362 Anonymous file (memfd) stressor
3363 --memfd N
3364 start N workers that create allocations of 1024 pages using
3365 memfd_create(2) and ftruncate(2) for allocation and mmap(2)
3366 to map the allocation into the process address space.
3367 (Linux only).
3368
3369 --memfd-bytes N
3370 allocate N bytes per memfd stress worker, the default is
3371             256MB. One can specify the size as % of total available
3372 memory or in units of Bytes, KBytes, MBytes and GBytes us‐
3373 ing the suffix b, k, m or g.
3374
3375 --memfd-fds N
3376 create N memfd file descriptors, the default is 256. One
3377             can select 8 to 4096 memfd file descriptors with this op‐
3378 tion.
3379
3380 --memfd-mlock
3381 attempt to mlock mmap'd pages into memory causing more mem‐
3382             ory pressure by preventing pages from being swapped out.
3383
3384 --memfd-ops N
3385             stop after N memfd_create(2) bogo operations.
3386
3387 Memory hotplug stressor (Linux)
3388 --memhotplug N
3389 start N workers that offline and online memory hotplug re‐
3390 gions. Linux only and requires CAP_SYS_ADMIN capabilities.
3391
3392 --memhotplug-ops N
3393 stop memhotplug stressors after N memory offline and online
3394 bogo operations.
3395
3396 Memory read/write stressor
3397 --memrate N
3398 start N workers that exercise a buffer with 1024, 512, 256,
3399 128, 64, 32, 16 and 8 bit reads and writes. 1024, 512 and
3400 256 reads and writes are available with compilers that sup‐
3401             port integer vectors. x86-64 CPUs that support uncached
3402             (non-temporal "nt") writes also exercise 128, 64 and 32 bit
3403             writes providing higher write rates than the normal cached
3404 writes. x86-64 also exercises repeated string stores using
3405 64, 32, 16 and 8 bit writes. CPUs that support prefetching
3406 reads also exercise 64 prefetched "pf" reads. This memory
3407 stressor allows one to also specify the maximum read and
3408 write rates. The stressors will run at maximum speed if no
3409 read or write rates are specified.
3410
3411 --memrate-bytes N
3412 specify the size of the memory buffer being exercised. The
3413 default size is 256MB. One can specify the size in units of
3414 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
3415             or g, or cache sizes with L1, L2, L3 or LLC (last level
3416             cache size).
3417
3418 --memrate-flush
3419 flush cache between each memory exercising test to remove
3420 caching benefits in memory rate metrics.
3421
3422 --memrate-ops N
3423 stop after N bogo memrate operations.
3424
3425 --memrate-rd-mbs N
3426 specify the maximum allowed read rate in MB/sec. The actual
3427 read rate is dependent on scheduling jitter and memory ac‐
3428 cesses from other running processes.
3429
3430 --memrate-wr-mbs N
3431             specify the maximum allowed write rate in MB/sec. The actual
3432 write rate is dependent on scheduling jitter and memory ac‐
3433 cesses from other running processes.
3434
3435   Memory thrash stressor
3436 --memthrash N
3437 start N workers that thrash and exercise a 16MB buffer in
3438 various ways to try and trip thermal overrun. Each stres‐
3439 sor will start 1 or more threads. The number of threads is
3440 chosen so that there will be at least 1 thread per CPU.
3441 Note that the optimal choice for N is a value that divides
3442 into the number of CPUs.
3443
3444 --memthrash-method method
3445 specify a memthrash stress method. Available memthrash
3446 stress methods are described as follows:
3447
3448 Method Description
3449 all iterate over all the below memthrash methods
3450 chunk1 memset 1 byte chunks of random data into ran‐
3451 dom locations
3452 chunk8 memset 8 byte chunks of random data into ran‐
3453 dom locations
3454 chunk64 memset 64 byte chunks of random data into ran‐
3455 dom locations
3456 chunk256 memset 256 byte chunks of random data into
3457 random locations
3458 chunkpage memset page size chunks of random data into
3459 random locations
3460 copy128 copy 128 byte chunks from chunk N + 1 to chunk
3461 N with streaming reads and writes with 128 bit
3462 memory accesses where possible.
3463 flip flip (invert) all bits in random locations
3464 flush flush cache line in random locations
3465             lock        lock randomly chosen locations (Intel x86
3466 and ARM CPUs only)
3467 matrix treat memory as a 2 × 2 matrix and swap random
3468 elements
3469 memmove copy all the data in buffer to the next memory
3470 location
3471 memset memset the memory with random data
3472 memset64 memset the memory with a random 64 bit value
3473 in 64 byte chunks using non-temporal stores if
3474 possible or normal stores as a fallback
3475
3476 memsetstosd memset the memory using x86 32 bit rep stosd
3477 instruction (x86 only)
3478 mfence stores with write serialization
3479 numa memory bind pages across numa nodes
3480 prefetch prefetch data at random memory locations
3481 random randomly run any of the memthrash methods ex‐
3482 cept for 'random' and 'all'
3483 spinread spin loop read the same random location 2↑19
3484 times
3485 spinwrite spin loop write the same random location 2↑19
3486 times
3487 swap step through memory swapping bytes in steps of
3488 65 and 129 byte strides
3489 swap64 work through memory swapping adjacent 64 byte
3490 chunks
3491 swapfwdrev swap 64 bit values from start to end and work
3492 towards the middle and then from end to start
3493 and work towards the middle.
3494             tlb         work through memory in sub-optimal strides of
3495 prime multiples of the cache line size with
3496 reads and then writes to cause Translation
3497 Lookaside Buffer (TLB) misses.
3498
3499 --memthrash-ops N
3500 stop after N memthrash bogo operations.
3501
3502 BSD mergesort stressor
3503 --mergesort N
3504 start N workers that sort 32 bit integers using the BSD
3505 mergesort.
3506
3507 --mergesort-ops N
3508 stop mergesort stress workers after N bogo mergesorts.
3509
3510 --mergesort-size N
3511 specify number of 32 bit integers to sort, default is
3512 262144 (256 × 1024).
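The underlying algorithm can be sketched in Python (an illustrative sketch; the stressor uses the BSD libc mergesort(3) on 32 bit integers):

```python
def mergesort(a):
    # classic top-down merge sort: split, sort halves, merge
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

import random
data = [random.getrandbits(32) for _ in range(1000)]   # 32 bit integers
assert mergesort(data) == sorted(data)
```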
3513
3514 File metadata mix
3515 --metamix N
3516 start N workers that generate a file metadata mix of opera‐
3517 tions. Each stressor runs 16 concurrent processes that each
3518 exercise a file's metadata with sequences of open, 256
3519 lseeks and writes, fdatasync, close, fsync and then stat,
3520 open, 256 lseeks, reads, occasional file memory mapping,
3521 close, unlink and lstat.
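A single pass of this kind of metadata sequence can be sketched with Python's os module (a sketch only; the stressor drives many more calls concurrently from C):

```python
import os, tempfile

fd, path = tempfile.mkstemp()
os.lseek(fd, 4096, os.SEEK_SET)   # seek well past the start of the file
os.write(fd, b"x")                # the write extends the file
os.fdatasync(fd)                  # flush file data to storage
st = os.fstat(fd)                 # fetch the updated metadata
os.close(fd)
os.unlink(path)
```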
3522
3523 --metamix-bytes N
3524 set the size of metamix files, the default is 1 MB. One can
3525 specify the size as % of free space on the file system or
3526 in units of Bytes, KBytes, MBytes and GBytes using the suf‐
3527 fix b, k, m or g.
3528
3529 --metamix-ops N
3530 stop the metamix stressor after N bogo metafile operations.
3531
3532 Resident memory (mincore) stressor
3533 --mincore N
3534 start N workers that walk through all of memory 1 page at a
3535             time checking if the page is mapped and resident in
3536 memory using mincore(2). It also maps and unmaps a page to
3537 check if the page is mapped or not using mincore(2).
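A residency check with mincore(2) can be sketched via ctypes (Linux assumed; the stressor makes the same call from C):

```python
import ctypes, mmap

libc = ctypes.CDLL(None, use_errno=True)
page = mmap.PAGESIZE
buf = mmap.mmap(-1, page)                     # one anonymous page
buf[0] = 1                                    # touch it: now resident
vec = (ctypes.c_ubyte * 1)()
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
# mincore(addr, length, vec): bit 0 of each vec byte = page resident
ret = libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(page), vec)
resident = ret == 0 and bool(vec[0] & 1)
```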
3538
3539 --mincore-ops N
3540 stop after N mincore bogo operations. One mincore bogo op
3541             is equivalent to 300 mincore(2) calls.
3542
3543      --mincore-random
3544             instead of walking through pages sequentially, select pages
3545             at random. The chosen address is iterated over by shifting it right one place and checked by mincore until the address is less than or equal to the page size.
3546
3547 Misaligned read/write stressor
3548 --misaligned N
3549             start N workers that perform misaligned reads and writes.
3550             By default, this will exercise 128 bit misaligned reads and
3551             writes in 8 × 16 bits, 4 × 32 bits, 2 × 64 bits and 1 × 128
3552             bits at the start of a page boundary, at the end of a page
3553             boundary and over a cache boundary. Misaligned reads and
3554             writes operate at a 1 byte offset from the natural alignment
3555             of the data type. On some architectures this can cause SIG‐
3556             BUS, SIGILL or SIGSEGV; these are handled and the mis‐
3557             aligned stressor method causing the error is disabled.
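A misaligned access of this shape can be sketched safely in Python, where struct.unpack_from performs a read at an arbitrary byte offset (an illustration only; the stressor issues real misaligned loads and stores from C):

```python
import struct

buf = bytes(range(16))
# a 32 bit read at a 1 byte offset from natural alignment
val = struct.unpack_from("<I", buf, 1)[0]
assert val == 0x04030201    # bytes 1..4, little endian
```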
3558
3559 --misaligned-method method
3560 Available misaligned stress methods are described as fol‐
3561 lows:
3562
3563 Method Description
3564
3565 all iterate over all the following misaligned methods
3566 int16rd 8 × 16 bit integer reads
3567 int16wr 8 × 16 bit integer writes
3568 int16inc 8 × 16 bit integer increments
3569 int16atomic 8 × 16 bit atomic integer increments
3570 int32rd 4 × 32 bit integer reads
3571 int32wr 4 × 32 bit integer writes
3572 int32wtnt 4 × 32 bit non-temporal stores (x86 only)
3573 int32inc 4 × 32 bit integer increments
3574 int32atomic 4 × 32 bit atomic integer increments
3575 int64rd 2 × 64 bit integer reads
3576 int64wr 2 × 64 bit integer writes
3577 int64wtnt 4 × 64 bit non-temporal stores (x86 only)
3578 int64inc 2 × 64 bit integer increments
3579 int64atomic 2 × 64 bit atomic integer increments
3580 int128rd 1 × 128 bit integer reads
3581 int128wr 1 × 128 bit integer writes
3582 int128inc 1 × 128 bit integer increments
3583 int128atomic 1 × 128 bit atomic integer increments
3584
3585 Note that some of these options (128 bit integer and/or atomic op‐
3586 erations) may not be available on some systems.
3587
3588 --misaligned-ops N
3589             stop after N misaligned bogo operations. A misaligned bogo
3590             op is equivalent to 65536 × 128 bit reads or writes.
3592
3593 Mknod/unlink stressor
3594 --mknod N
3595 start N workers that create and remove fifos, empty files
3596 and named sockets using mknod and unlink.
3597
3598 --mknod-ops N
3599             stop mknod stress workers after N bogo mknod opera‐
3600 tions.
3601
3602 Mapped memory pages lock/unlock stressor
3603 --mlock N
3604 start N workers that lock and unlock memory mapped pages
3605 using mlock(2), munlock(2), mlockall(2) and munlockall(2).
3606 This is achieved by the mapping of three contiguous pages
3607 and then locking the second page, hence ensuring non-con‐
3608             tiguous pages are locked. This is then repeated until the
3609 maximum allowed mlocks or a maximum of 262144 mappings are
3610 made. Next, all future mappings are mlocked and the worker
3611 attempts to map 262144 pages, then all pages are munlocked
3612 and the pages are unmapped.
3613
3614 --mlock-ops N
3615 stop after N mlock bogo operations.
3616
3617 --mlockmany N
3618 start N workers that fork off a default of 1024 child pro‐
3619 cesses in total; each child will attempt to anonymously
3620 mmap and mlock the maximum allowed mlockable memory size.
3621 The stress test attempts to avoid swapping by tracking low
3622 memory and swap allocations (but some swapping may occur).
3623             Once either the maximum number of child processes is reached
3624 or all mlockable in-core memory is locked then child pro‐
3625 cesses are killed and the stress test is repeated.
3626
3627 --mlockmany-ops N
3628 stop after N mlockmany (mmap and mlock) operations.
3629
3630 --mlockmany-procs N
3631 set the number of child processes to create per stressor.
3632 The default is to start a maximum of 1024 child processes
3633 in total across all the stressors. This option allows the
3634 setting of N child processes per stressor.
3635
3636 Memory mapping (mmap/munmap) stressor
3637 --mmap N
3638 start N workers continuously calling mmap(2)/munmap(2).
3639 The initial mapping is a large chunk (size specified by
3640 --mmap-bytes) followed by pseudo-random 4K unmappings, then
3641 pseudo-random 4K mappings, and then linear 4K unmappings.
3642             Note that this can trip the kernel OOM killer on Linux
3643             systems if insufficient physical memory and swap are
3644             available. The MAP_POPULATE option is used to
3645 populate pages into memory on systems that support this.
3646 By default, anonymous mappings are used, however, the
3647 --mmap-file and --mmap-async options allow one to perform
3648 file based mappings if desired.
3649
3650 --mmap-async
3651 enable file based memory mapping and use asynchronous
3652 msync'ing on each page, see --mmap-file.
3653
3654 --mmap-bytes N
3655 allocate N bytes per mmap stress worker, the default is
3656 256MB. One can specify the size as % of total available
3657 memory or in units of Bytes, KBytes, MBytes and GBytes us‐
3658 ing the suffix b, k, m or g.
3659
3660 --mmap-file
3661 enable file based memory mapping and by default use syn‐
3662 chronous msync'ing on each page.
3663
3664 --mmap-mlock
3665 attempt to mlock mmap'd pages into memory causing more mem‐
3666             ory pressure by preventing pages from being swapped out.
3667
3668 --mmap-mmap2
3669 use mmap2 for 4K page aligned offsets if mmap2 is avail‐
3670 able, otherwise fall back to mmap.
3671
3672 --mmap-mprotect
3673 change protection settings on each page of memory. Each
3674 time a page or a group of pages are mapped or remapped then
3675 this option will make the pages read-only, write-only,
3676 exec-only, and read-write.
3677
3678 --mmap-odirect
3679 enable file based memory mapping and use O_DIRECT direct
3680 I/O.
3681
3682 --mmap-ops N
3683 stop mmap stress workers after N bogo operations.
3684
3685 --mmap-osync
3686             enable file based memory mapping and use O_SYNC synchro‐
3687 nous I/O integrity completion.
3688
3689 Random memory map/unmap stressor
3690 --mmapaddr N
3691 start N workers that memory map pages at a random memory
3692 location that is not already mapped. On 64 bit machines
3693             the random address is a randomly chosen 32 bit or 64 bit ad‐
3694 dress. If the mapping works a second page is memory mapped
3695 from the first mapped address. The stressor exercises
3696 mmap/munmap, mincore and segfault handling.
3697
3698 --mmapaddr-mlock
3699 attempt to mlock mmap'd pages into memory causing more mem‐
3700             ory pressure by preventing pages from being swapped out.
3701
3702 --mmapaddr-ops N
3703 stop after N random address mmap bogo operations.
3704
3705 Forked memory map stressor
3706 --mmapfork N
3707             start N workers that each fork off 32 child processes, each
3708             of which tries to allocate some of the free memory left in
3709             the system (while trying to avoid any swapping). The child
3710             processes then hint that the allocation will be needed with
3711             madvise(2), memset it to zero and hint that it is
3712             no longer needed with madvise before exiting. This pro‐
3713             duces significant amounts of VM activity, a lot of cache
3714             misses and minimal swapping.
3715
3716 --mmapfork-ops N
3717 stop after N mmapfork bogo operations.
3718
3719 Fixed address memory map stressor
3720 --mmapfixed N
3721 start N workers that perform fixed address allocations from
3722 the top virtual address down to 128K. The allocated sizes
3723 are from 1 page to 8 pages and various random mmap flags
3724 are used MAP_SHARED/MAP_PRIVATE, MAP_LOCKED, MAP_NORESERVE,
3725 MAP_POPULATE. If successfully map'd then the allocation is
3726 remap'd first to a large range of addresses based on a ran‐
3727 dom start and finally an address that is several pages
3728 higher in memory. Mappings and remappings are madvised with
3729 random madvise options to further exercise the mappings.
3730
3731 --mmapfixed-mlock
3732 attempt to mlock mmap'd pages into memory causing more mem‐
3733             ory pressure by preventing pages from being swapped out.
3734
3735 --mmapfixed-ops N
3736 stop after N mmapfixed memory mapping bogo operations.
3737
3738 Huge page memory mapping stressor
3739 --mmaphuge N
3740 start N workers that attempt to mmap a set of huge pages
3741 and large huge page sized mappings. Successful mappings are
3742 madvised with MADV_NOHUGEPAGE and MADV_HUGEPAGE settings
3743 and then 1/64th of the normal small page size pages are
3744 touched. Finally, an attempt to unmap a small page size
3745 page at the end of the mapping is made (these may fail on
3746 huge pages) before the set of pages are unmapped. By de‐
3747 fault 8192 mappings are attempted per round of mappings or
3748 until swapping is detected.
3749
3750 --mmaphuge-file
3751             attempt to mmap a 16MB temporary file at random 4K off‐
3752 sets. If this fails, anonymous mappings are used instead.
3753
3754 --mmaphuge-mlock
3755 attempt to mlock mmap'd huge pages into memory causing more
3756             memory pressure by preventing pages from being swapped out.
3757
3758 --mmaphuge-mmaps N
3759 set the number of huge page mappings to attempt in each
3760 round of mappings. The default is 8192 mappings.
3761
3762 --mmaphuge-ops N
3763             stop after N mmaphuge bogo operations.
3764
3765 Maximum memory mapping per process stressor
3766 --mmapmany N
3767 start N workers that attempt to create the maximum allowed
3768 per-process memory mappings. This is achieved by mapping 3
3769 contiguous pages and then unmapping the middle page hence
3770 splitting the mapping into two. This is then repeated until
3771 the maximum allowed mappings or a maximum of 262144 map‐
3772 pings are made.
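The map-three-pages-then-unmap-the-middle trick can be sketched via ctypes (Linux assumed; one round only, whereas the stressor repeats this to the mapping limit):

```python
import ctypes, mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.munmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

page = mmap.PAGESIZE
addr = libc.mmap(None, 3 * page, mmap.PROT_READ | mmap.PROT_WRITE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
assert addr not in (None, ctypes.c_void_p(-1).value)   # not MAP_FAILED
# unmapping the middle page splits one mapping into two
split_ok = libc.munmap(ctypes.c_void_p(addr + page), page) == 0
```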
3773
3774 --mmapmany-mlock
3775 attempt to mlock mmap'd huge pages into memory causing more
3776             memory pressure by preventing pages from being swapped out.
3777
3778 --mmapmany-ops N
3779             stop after N mmapmany bogo operations.
3780
3781 Kernel module loading stressor (Linux)
3782 --module N
3783             start N workers that use finit_module() to load the module
3784             specified or the hello test module, if it is available. There
3785             are different ways to test loading modules: using modprobe
3786             calls in a loop, using the kernel module autoloader,
3787             and this stress-ng module stressor. To stress test mod‐
3788             probe we can simply run the userspace modprobe program in a
3789             loop. To stress test the kernel module autoloader we can
3790             run tests using the upstream kernel tools/testing/self‐
3791             tests/kmod/kmod.sh. This ends up calling modprobe in the
3792             end, and it has its own caps built-in to self protect the
3793             kernel from too many requests at the same time. The
3794             userspace modprobe call will also prevent calls if the same
3795             module is already loaded. The stress-ng module stressor is
3796             designed to help stress test the finit_module() system call
3797             even if the module is already loaded, testing races that
3798             are otherwise hard to reproduce.
3799
3800 --module-name NAME
3801 NAME of the module to use, for example: test_module, xfs,
3802 ext4. By default test_module is used so CONFIG_TEST_LKM
3803 must be enabled in the kernel. The module dependencies
3804 must be loaded prior to running these stressor tests, as
3805 this stresses running finit_module() not using modprobe.
3806
3807 --module-no-modver
3808 ignore module modversions when using finit_module().
3809
3810 --module-no-vermag
3811             ignore the module version magic (vermagic) when using finit_module().
3812
3813 --module-no-unload
3814 do not unload the module right after loading it with
3815 finit_module().
3816
3817 --module-ops N
3818             stop after N module load/unload cycles.
3819
3820 Multi-precision floating operations (mpfr) stressor
3821 --mpfr N
3822 start N workers that exercise multi-precision floating
3823 point operations using the GNU Multi-Precision Floating
3824 Point Reliable library (mpfr). Operations computed are as
3825 follows:
3826
3827 Method Description
3828 apery calculate Apery's constant ζ(3); the sum of 1/(n ↑
3829 3).
3830 cosine compute cos(θ) for θ = 0 to 2π in 100 steps.
3831
3832             euler     compute e using (1 + (1 ÷ n)) ↑ n for large n.
3833 exp compute 1000 exponentials.
3834             log       compute 1000 natural logarithms.
3835 omega compute the omega constant defined by Ωe↑Ω = 1 us‐
3836 ing efficient iteration of Ωn+1 = (1 + Ωn) / (1 +
3837 e↑Ωn).
3838             phi       compute the Golden Ratio ϕ using a series.
3839 sine compute sin(θ) for θ = 0 to 2π in 100 steps.
3840 nsqrt compute square root using Newton-Raphson.
3841
3842 --mpfr-ops N
3843 stop workers after N iterations of various multi-precision
3844 floating point operations.
3845
3846 --mpfr-precision N
3847 specify the precision in binary digits of the floating
3848 point operations. The default is 1000 bits, the allowed
3849 range is 32 to 1000000 (very slow).
3850
3851 Memory protection stressor
3852 --mprotect N
3853 start N workers that exercise changing page protection set‐
3854 tings and access memory after each change. 8 processes per
3855             worker contend with each other changing page protection set‐
3856 tings on a shared memory region of just a few pages to
3857 cause TLB flushes. A read and write to the pages can cause
3858 segmentation faults and these are handled by the stressor.
3859 All combinations of page protection settings are exercised
3860 including invalid combinations.
3861
3862 --mprotect-ops N
3863 stop after N mprotect calls.
3864
3865 POSIX message queue stressor (Linux)
3866 --mq N start N sender and receiver processes that continually send
3867 and receive messages using POSIX message queues. (Linux
3868 only).
3869
3870 --mq-ops N
3871 stop after N bogo POSIX message send operations completed.
3872
3873 --mq-size N
3874 specify size of POSIX message queue. The default size is 10
3875             messages and on most Linux systems this is the maximum allowed
3876 size for normal users. If the given size is greater than
3877 the allowed message queue size then a warning is issued and
3878 the maximum allowed size is used instead.
3879
3880 Memory remap stressor (Linux)
3881 --mremap N
3882 start N workers continuously calling mmap(2), mremap(2) and
3883 munmap(2). The initial anonymous mapping is a large chunk
3884 (size specified by --mremap-bytes) and then iteratively
3885 halved in size by remapping all the way down to a page size
3886 and then back up to the original size. This worker is only
3887 available for Linux.
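The shrink-by-halving remap pattern can be sketched via ctypes (Linux assumed; MREMAP_MAYMOVE is the Linux <sys/mman.h> value, and only the downward half of the cycle is shown):

```python
import ctypes, mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mremap.restype = ctypes.c_void_p
libc.mremap.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                        ctypes.c_size_t, ctypes.c_int]

MREMAP_MAYMOVE = 1            # from <sys/mman.h> on Linux
page = mmap.PAGESIZE
size = 16 * page
addr = libc.mmap(None, size, mmap.PROT_READ | mmap.PROT_WRITE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
# iteratively halve the mapping all the way down to a single page
while size > page:
    addr = libc.mremap(ctypes.c_void_p(addr), size, size // 2,
                       MREMAP_MAYMOVE)
    assert addr not in (None, ctypes.c_void_p(-1).value)
    size //= 2
```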
3888
3889 --mremap-bytes N
3890 initially allocate N bytes per remap stress worker, the de‐
3891 fault is 256MB. One can specify the size in units of Bytes,
3892 KBytes, MBytes and GBytes using the suffix b, k, m or g.
3893
3894 --mremap-mlock
3895 attempt to mlock remap'd pages into memory causing more
3896             memory pressure by preventing pages from being swapped out.
3897
3898 --mremap-ops N
3899 stop mremap stress workers after N bogo operations.
3900
3901 System V message IPC stressor
3902 --msg N
3903 start N sender and receiver processes that continually send
3904 and receive messages using System V message IPC.
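One send/receive round trip over a System V queue can be sketched via ctypes (Linux IPC constants assumed; stress-ng drives msgsnd(2)/msgrcv(2) from C):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
IPC_PRIVATE, IPC_CREAT, IPC_RMID = 0, 0o1000, 0   # Linux values

class Msg(ctypes.Structure):
    # struct msgbuf: a long mtype followed by the message text
    _fields_ = [("mtype", ctypes.c_long), ("mtext", ctypes.c_char * 4)]

libc.msgsnd.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t,
                        ctypes.c_int]
libc.msgrcv.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t,
                        ctypes.c_long, ctypes.c_int]

qid = libc.msgget(IPC_PRIVATE, IPC_CREAT | 0o600)
assert qid >= 0
snd = Msg(mtype=1, mtext=b"ping")
assert libc.msgsnd(qid, ctypes.byref(snd), 4, 0) == 0
rcv = Msg()
assert libc.msgrcv(qid, ctypes.byref(rcv), 4, 1, 0) == 4  # 4 bytes back
got = rcv.mtext
libc.msgctl(qid, IPC_RMID, None)     # remove the queue
```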
3905
3906 --msg-bytes N
3907 specify the size of the message being sent and received.
3908 Range 4 to 8192 bytes, default is 4 bytes.
3909
3910 --msg-ops N
3911 stop after N bogo message send operations completed.
3912
3913 --msg-types N
3914             select the number of message types (mtype) to use. By de‐
3915             fault, msgsnd sends messages with a mtype of 1, this option
3916             allows one to send message types in the range 1..N to ex‐
3917 ercise the message queue receive ordering. This will also
3918 impact throughput performance.
3919
3920 Synchronize file with memory map (msync) stressor
3921 --msync N
3922 start N stressors that msync data from a file backed memory
3923 mapping from memory back to the file and msync modified
3924 data from the file back to the mapped memory. This exer‐
3925 cises the msync(2) MS_SYNC and MS_INVALIDATE sync opera‐
3926 tions.
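The memory-to-file direction of this synchronization can be sketched with Python's mmap module, whose flush() calls msync(2) (a sketch only; the stressor also exercises MS_INVALIDATE):

```python
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
os.ftruncate(fd, mmap.PAGESIZE)
m = mmap.mmap(fd, mmap.PAGESIZE)   # file backed, shared mapping
m[:4] = b"sync"                    # dirty the page via the mapping
m.flush()                          # msync(2) the page back to the file
readback = os.pread(fd, 4, 0)      # read through the file descriptor
m.close()
os.close(fd)
os.unlink(path)
```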
3927
3928 --msync-bytes N
3929 allocate N bytes for the memory mapped file, the default is
3930 256MB. One can specify the size as % of total available
3931 memory or in units of Bytes, KBytes, MBytes and GBytes us‐
3932 ing the suffix b, k, m or g.
3933
3934 --msync-ops N
3935 stop after N msync bogo operations completed.
3936
3937 Synchronize file with memory map (msync) coherency stressor
3938 --msyncmany N
3939 start N stressors that memory map up to 32768 pages on the
3940 same page of a temporary file, change the first 32 bits in
3941 a page and msync the data back to the file. The other
3942 32767 pages are examined to see if the 32 bit check value
3943 is msync'd back to these pages.
3944
3945 --msyncmany-ops N
3946 stop after N msync calls in the msyncmany stressors are
3947 completed.
3948
3949 Unmapping shared non-executable memory stressor (Linux)
3950 --munmap N
3951 start N stressors that exercise unmapping of shared non-ex‐
3952 ecutable mapped regions of child processes (Linux only).
3953             The shared memory regions are unmapped page by page with
3954             a prime sized stride that creates many temporary mapping
3955             holes. Once the unmappings are complete the child will exit
3956 and a new one is started. Note that this may trigger seg‐
3957 mentation faults in the child process, these are handled
3958 where possible by forcing the child process to call
3959 _exit(2).
3960
3961 --munmap-ops N
3962 stop after N page unmappings.
3963
3964 Pthread mutex stressor
3965 --mutex N
3966 start N stressors that exercise pthread mutex locking and
3967 unlocking. If run with enough privilege then the FIFO
3968 scheduler is used and a random priority between 0 and 80%
3969 of the maximum FIFO priority level is selected for the
3970 locking operation. The minimum FIFO priority level is se‐
3971 lected for the critical mutex section and unlocking opera‐
3972 tion to exercise random inverted priority scheduling.
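The lock/unlock pattern can be sketched with Python threads (an illustration only; stress-ng uses raw pthread mutexes and the scheduling behaviour described above does not apply here):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iters):
    global counter
    for _ in range(iters):
        with lock:          # lock ... critical section ... unlock
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the mutex, no increments are lost despite the contention
assert counter == 40000
```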
3973
3974 --mutex-affinity
3975 enable random CPU affinity changing between mutex lock and
3976 unlock.
3977
3978 --mutex-ops N
3979 stop after N bogo mutex lock/unlock operations.
3980
3981 --mutex-procs N
3982 By default 2 threads are used for locking/unlocking on a
3983             single mutex. This option allows the number of threads to
3984             be changed, from 2 to 64 concurrent threads.
3985
3986 High resolution and scheduler stressor via nanosleep calls
3987 --nanosleep N
3988 start N workers that each run pthreads that call nanosleep
3989 with random delays from 1 to 2↑18 nanoseconds. This should
3990 exercise the high resolution timers and scheduler.
3991
3992 --nanosleep-method [ all | cstate | random | ns | us | ms ]
3993 select the nanosleep sleep duration method. By default,
3994 cstate residency durations (if they exist) and random dura‐
3995 tions are used. This option allows one to select one of
3996 the three methods:
3997
3998 Method Description
3999 all use cstate and random nanosecond durations.
4000 cstate use cstate nanosecond durations. It is recommended
4001                    to also use --nanosleep-threads 1 to exercise fewer
4002                    concurrent nanosleeps to allow CPUs to drop into
4003 deep C states.
4004             random use random nanosecond durations between 1 and 2↑18
4005 nanoseconds.
4006 ns use 1ns (nanosecond) nanosleeps
4007 us use 1us (microsecond) nanosleeps
4008 ms use 1ms (millisecond) nanosleeps
4009
4010 --nanosleep-ops N
4011 stop the nanosleep stressor after N bogo nanosleep opera‐
4012 tions.
4013
4014 --nanosleep-threads N
4015 specify the number of concurrent pthreads to run per stres‐
4016 sor. The default is 8 and the allowed range is 1 to 1024.
4017
4018 Network device ioctl stressor
4019 --netdev N
4020 start N workers that exercise various netdevice ioctl com‐
4021 mands across all the available network devices. The ioctls
4022 exercised by this stressor are as follows: SIOCGIFCONF,
4023 SIOCGIFINDEX, SIOCGIFNAME, SIOCGIFFLAGS, SIOCGIFADDR,
4024 SIOCGIFNETMASK, SIOCGIFMETRIC, SIOCGIFMTU, SIOCGIFHWADDR,
4025 SIOCGIFMAP and SIOCGIFTXQLEN. See netdevice(7) for more de‐
4026 tails of these ioctl commands.
4027
4028 --netdev-ops N
4029 stop after N netdev bogo operations completed.
4030
4031 Netlink stressor (Linux)
4032 --netlink-proc N
4033 start N workers that spawn child processes and monitor
4034 fork/exec/exit process events via the proc netlink connec‐
4035 tor. Each event received is counted as a bogo op. This
4036 stressor can only be run on Linux and requires CAP_NET_AD‐
4037 MIN capability.
4038
4039 --netlink-proc-ops N
4040 stop the proc netlink connector stressors after N bogo ops.
4041
4042 --netlink-task N
4043 start N workers that collect task statistics via the
4044 netlink taskstats interface. This stressor can only be run
4045 on Linux and requires CAP_NET_ADMIN capability.
4046
4047 --netlink-task-ops N
4048 stop the taskstats netlink connector stressors after N bogo
4049 ops.
4050
4051 Nice stressor
4052 --nice N
4053 start N cpu consuming workers that exercise the available
4054 nice levels. Each iteration forks off a child process that
4055             runs through all the nice levels running a busy loop
4056 for 0.1 seconds per level and then exits.
4057
4058 --nice-ops N
4059             stop after N bogo nice loops.
4060
4061 NO-OP CPU instruction stressor
4062 --nop N
4063 start N workers that consume cpu cycles issuing no-op in‐
4064 structions. This stressor is available if the assembler
4065 supports the "nop" instruction.
4066
4067 --nop-instr INSTR
4068 use alternative nop instruction INSTR. For x86 CPUs INSTR
4069 can be one of nop, pause, nop2 (2 byte nop) through to
4070 nop15 (15 byte nop). For ARM CPUs, INSTR can be one of nop
4071 or yield. For PPC64 CPUs, INSTR can be one of nop, mdoio,
4072 mdoom or yield. For S390 CPUs, INSTR can be one of nop or
4073 nopr. For other processors, INSTR is only nop. The random
4074              INSTR option selects a random mix of the available nop in‐
4075              structions. If the chosen INSTR generates a SIGILL signal,
4076 then the stressor falls back to the vanilla nop instruc‐
4077 tion.
4078
4079 --nop-ops N
4080 stop nop workers after N no-op bogo operations. Each bogo-
4081 operation is equivalent to 256 loops of 256 no-op instruc‐
4082 tions.
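       One bogo-op as defined above (256 loops of 256 no-op instructions)
       can be sketched with inline assembly; the "nop" mnemonic is assumed
       to be accepted by the assembler for the build target:

       ```c
       #include <stdio.h>

       /* One nop bogo-op: 256 loops of 256 no-op instructions. */
       static void nop_bogo_op(void)
       {
               int i, j;

               for (i = 0; i < 256; i++)
                       for (j = 0; j < 256; j++)
                               __asm__ __volatile__("nop");
       }

       int main(void)
       {
               int op;

               for (op = 0; op < 1000; op++)
                       nop_bogo_op();
               printf("nop bogo-ops: 1000\n");
               return 0;
       }
       ```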
4083
4084 /dev/null stressor
4085 --null N
4086 start N workers that exercise /dev/null with writes, lseek,
4087 ioctl, fcntl, fallocate and fdatasync. For just /dev/null
4088 write benchmarking use the --null-write option.
4089
4090 --null-ops N
4091 stop null stress workers after N /dev/null bogo operations.
4092
4093 --null-write
4094 just write to /dev/null with 4K writes with no additional
4095 exercising on /dev/null.
4096
4097 Migrate memory pages over NUMA nodes stressor
4098 --numa N
4099 start N workers that migrate stressors and a 4MB memory
4100 mapped buffer around all the available NUMA nodes. This
4101 uses migrate_pages(2) to move the stressors and mbind(2)
4102 and move_pages(2) to move the pages of the mapped buffer.
4103 After each move, the buffer is written to force activity
4104              over the bus which results in cache misses. This test will
4105 only run on hardware with NUMA enabled and more than 1 NUMA
4106 node.
4107
4108 --numa-ops N
4109 stop NUMA stress workers after N bogo NUMA operations.
4110
4111 Large Pipe stressor
4112 --oom-pipe N
4113 start N workers that create as many pipes as allowed and
4114 exercise expanding and shrinking the pipes from the largest
4115 pipe size down to a page size. Data is written into the
4116 pipes and read out again to fill the pipe buffers. With the
4117 --aggressive mode enabled the data is not read out when the
4118 pipes are shrunk, causing the kernel to OOM processes ag‐
4119 gressively. Running many instances of this stressor will
4120              force the kernel to OOM processes due to the many large
4121              pipe buffer allocations.
4122
4123 --oom-pipe-ops N
4124 stop after N bogo pipe expand/shrink operations.
4125
4126 Illegal instructions stressors
4127 --opcode N
4128 start N workers that fork off children that execute ran‐
4129 domly generated executable code. This will generate issues
4130 such as illegal instructions, bus errors, segmentation
4131 faults, traps, floating point errors that are handled
4132 gracefully by the stressor.
4133
4134 --opcode-method [ inc | mixed | random | text ]
4135 select the opcode generation method. By default, random
4136 bytes are used to generate the executable code. This option
4137              allows one to select one of the following methods:
4138
4139 Method Description
4140 inc use incrementing 32 bit opcode patterns from
4141                      0x00000000 to 0xffffffff inclusive.
4142 mixed use a mix of incrementing 32 bit opcode patterns
4143 and random 32 bit opcode patterns that are also in‐
4144 verted, encoded with gray encoding and bit re‐
4145 versed.
4146 random generate opcodes using random bytes from a mwc ran‐
4147 dom generator.
4148 text copies random chunks of code from the stress-ng
4149 text segment and randomly flips single bits in a
4150 random choice of 1/8th of the code.
4151
4152 --opcode-ops N
4153 stop after N attempts to execute illegal code.
4154
4155 Opening file (open) stressor
4156 -o N, --open N
4157 start N workers that perform open(2) and then close(2) op‐
4158 erations on /dev/zero. The maximum opens at one time is
4159 system defined, so the test will run up to this maximum, or
4160              65536 open file descriptors, whichever comes first.
4161
4162 --open-fd
4163 run a child process that scans /proc/$PID/fd and attempts
4164 to open the files that the stressor has opened. This exer‐
4165 cises racing open/close operations on the proc interface.
4166
4167 --open-max N
4168 try to open a maximum of N files (or up to the maximum per-
4169 process open file system limit). The value can be the num‐
4170 ber of files or a percentage of the maximum per-process
4171 open file system limit.
4172
4173 --open-ops N
4174 stop the open stress workers after N bogo open operations.
4175
4176 Page table and TLB stressor
4177 --pagemove N
4178 start N workers that mmap a memory region (default 4 MB)
4179 and then shuffle pages to the virtual address of the previ‐
4180 ous page. Each page shuffle uses 3 mremap operations to
4181 move a page. This exercises page tables and Translation
4182 Lookaside Buffer (TLB) flushing.
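       A page shuffle of this kind can be sketched with 3 mremap(2) opera‐
       tions that swap two pages through a spare landing address; the exact
       shuffle order used by the stressor may differ:

       ```c
       #define _GNU_SOURCE
       #include <assert.h>
       #include <stdio.h>
       #include <string.h>
       #include <sys/mman.h>
       #include <unistd.h>

       int main(void)
       {
               size_t pg = (size_t)sysconf(_SC_PAGESIZE);
               /* Two data pages plus one spare landing page. */
               char *buf = mmap(NULL, pg * 2, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
               char *spare = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

               assert(buf != MAP_FAILED && spare != MAP_FAILED);
               memset(buf, 'A', pg);           /* page 0 */
               memset(buf + pg, 'B', pg);      /* page 1 */

               /* Swap the two pages using 3 mremap operations:
                  page 0 -> spare, page 1 -> page 0, spare -> page 1. */
               assert(mremap(buf, pg, pg,
                             MREMAP_MAYMOVE | MREMAP_FIXED, spare) == spare);
               assert(mremap(buf + pg, pg, pg,
                             MREMAP_MAYMOVE | MREMAP_FIXED, buf) == buf);
               assert(mremap(spare, pg, pg,
                             MREMAP_MAYMOVE | MREMAP_FIXED, buf + pg) == buf + pg);

               assert(buf[0] == 'B' && buf[pg] == 'A');
               printf("pages swapped\n");
               return 0;
       }
       ```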
4183
4184       --pagemove-bytes N
4185 specify the size of the memory mapped region to be exer‐
4186 cised. One can specify the size as % of total available
4187 memory or in units of Bytes, KBytes, MBytes and GBytes us‐
4188 ing the suffix b, k, m or g.
4189
4190 --pagemove-mlock
4191 attempt to mlock mmap'd and mremap'd pages into memory
4192 causing more memory pressure by preventing pages from
4193              being swapped out.
4194
4195 --pagemove-ops N
4196              stop after N pagemove shuffling operations, where shuffling
4197 all the pages in the mmap'd region is equivalent to 1 bogo-
4198 operation.
4199
4200 Memory page swapping stressor
4201 --pageswap N
4202 start N workers that exercise page swap in and swap out.
4203 Pages are allocated and paged out using madvise MADV_PAGE‐
4204              OUT. Once the maximum per-process number of mmaps is
4205              reached or 65536 pages are allocated, the pages are read to
4206              page them back in and unmapped in reverse mapping order.
4207
4208 --pageswap-ops N
4209 stop after N page allocation bogo operations.
4210
4211 PCI sysfs stressor (Linux)
4212 --pci N
4213 exercise PCI sysfs by running N workers that read data (and
4214 mmap/unmap PCI config or PCI resource files). Linux only.
4215 Running as root will allow config and resource mmappings to
4216 be read and exercises PCI I/O mapping.
4217
4218 --pci-ops N
4219 stop pci stress workers after N PCI subdirectory exercising
4220 operations.
4221
4222 Personality stressor
4223 --personality N
4224 start N workers that attempt to set personality and get all
4225 the available personality types (process execution domain
4226 types) via the personality(2) system call. (Linux only).
4227
4228 --personality-ops N
4229 stop personality stress workers after N bogo personality
4230 operations.
4231
4232 Mutex using Peterson algorithm stressor
4233 --peterson N
4234              start N workers that exercise mutual exclusion between two
4235 processes using shared memory with the Peterson Algorithm.
4236 Where possible this uses memory fencing and falls back to
4237 using GCC __sync_synchronize if they are not available. The
4238 stressors contain simple mutex and memory coherency sanity
4239 checks.
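       The Peterson algorithm between two contenders can be sketched as
       follows. This sketch uses two threads and C11 sequentially consis‐
       tent atomics in place of the explicit memory fencing described
       above (the stressor itself uses two processes over shared memory):

       ```c
       #include <assert.h>
       #include <pthread.h>
       #include <stdatomic.h>
       #include <stdio.h>

       static atomic_int flag[2];      /* flag[i]: thread i wants the lock */
       static atomic_int turn;         /* whose turn it is to wait */
       static long counter;            /* protected by the Peterson lock */

       static void lock(int me)
       {
               int other = 1 - me;

               atomic_store(&flag[me], 1);
               atomic_store(&turn, other);
               /* Spin while the other thread wants in and it is its turn;
                  seq_cst atomics provide the required fencing. */
               while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
                       ;
       }

       static void unlock(int me)
       {
               atomic_store(&flag[me], 0);
       }

       static void *worker(void *arg)
       {
               int me = (int)(long)arg;
               int i;

               for (i = 0; i < 100000; i++) {
                       lock(me);
                       counter++;              /* critical section */
                       unlock(me);
               }
               return NULL;
       }

       int main(void)
       {
               pthread_t t0, t1;

               pthread_create(&t0, NULL, worker, (void *)0L);
               pthread_create(&t1, NULL, worker, (void *)1L);
               pthread_join(t0, NULL);
               pthread_join(t1, NULL);
               assert(counter == 200000);      /* mutual exclusion held */
               printf("counter = %ld\n", counter);
               return 0;
       }
       ```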
4240
4241 --peterson-ops N
4242 stop peterson workers after N mutex operations.
4243
4244 Page map stressor
4245 --physpage N
4246 start N workers that use /proc/self/pagemap and
4247 /proc/kpagecount to determine the physical page and page
4248 count of a virtual mapped page and a page that is shared
4249 among all the stressors. Linux only and requires the
4250 CAP_SYS_ADMIN capabilities.
4251
4252 --physpage-mtrr
4253              enable setting various memory type range register (MTRR)
4254 types on physical pages (Linux and x86 only).
4255
4256 --physpage-ops N
4257 stop physpage stress workers after N bogo physical address
4258 lookups.
4259
4260 Process signals (pidfd_send_signal) stressor
4261 --pidfd N
4262 start N workers that exercise signal sending via the
4263 pidfd_send_signal system call. This stressor creates child
4264 processes and checks if they exist and can be stopped,
4265 restarted and killed using the pidfd_send_signal system
4266 call.
4267
4268 --pidfd-ops N
4269 stop pidfd stress workers after N child processes have been
4270 created, tested and killed with pidfd_send_signal.
4271
4272 Localhost ICMP (ping) stressor
4273 --ping-sock N
4274 start N workers that send small randomized ICMP messages to
4275 the localhost across a range of ports (1024..65535) using a
4276 "ping" socket with an AF_INET domain, a SOCK_DGRAM socket
4277 type and an IPPROTO_ICMP protocol.
4278
4279 --ping-sock-ops N
4280 stop the ping-sock stress workers after N ICMP messages are
4281 sent.
4282
4283 Large pipe stressor
4284 -p N, --pipe N
4285 start N workers that perform large pipe writes and reads to
4286 exercise pipe I/O. This exercises memory write and reads
4287 as well as context switching. Each worker has two pro‐
4288 cesses, a reader and a writer.
4289
4290 --pipe-data-size N
4291 specifies the size in bytes of each write to the pipe
4292 (range from 4 bytes to 4096 bytes). Setting a small data
4293 size will cause more writes to be buffered in the pipe,
4294 hence reducing the context switch rate between the pipe
4295 writer and pipe reader processes. Default size is the page
4296 size.
4297
4298 --pipe-ops N
4299 stop pipe stress workers after N bogo pipe write opera‐
4300 tions.
4301
4302 --pipe-vmsplice
4303              use vmsplice(2) to splice data pages to/from the pipe. Re‐
4304              quires pipe packet mode (O_DIRECT) and a buffer twice the
4305              size of the pipe to verify the data sequences.
4306
4307 --pipe-size N
4308 specifies the size of the pipe in bytes (for systems that
4309 support the F_SETPIPE_SZ fcntl() command). Setting a small
4310 pipe size will cause the pipe to fill and block more fre‐
4311 quently, hence increasing the context switch rate between
4312 the pipe writer and the pipe reader processes. As of ver‐
4313 sion 0.15.11 the default size is 4096 bytes.
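       Resizing a pipe with the F_SETPIPE_SZ fcntl can be sketched as fol‐
       lows; the 64K request is illustrative, and the kernel rounds re‐
       quests up to a power-of-two multiple of the page size:

       ```c
       #define _GNU_SOURCE
       #include <assert.h>
       #include <fcntl.h>
       #include <stdio.h>
       #include <unistd.h>

       int main(void)
       {
               int fds[2];
               long sz;

               assert(pipe(fds) == 0);

               /* Ask for a 64K pipe buffer; on success fcntl returns the
                  actual size set, which is at least the request. */
               assert(fcntl(fds[1], F_SETPIPE_SZ, 65536) >= 65536);

               sz = fcntl(fds[1], F_GETPIPE_SZ);
               assert(sz >= 65536);
               printf("pipe size: %ld\n", sz);

               close(fds[0]);
               close(fds[1]);
               return 0;
       }
       ```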
4314
4315 Shared pipe stressor
4316 --pipeherd N
4317 start N workers that pass a 64 bit token counter to/from
4318 100 child processes over a shared pipe. This forces a high
4319 context switch rate and can trigger a "thundering herd" of
4320 wakeups on processes that are blocked on pipe waits.
4321
4322 --pipeherd-ops N
4323 stop pipe stress workers after N bogo pipe write opera‐
4324 tions.
4325
4326 --pipeherd-yield
4327 force a scheduling yield after each write, this increases
4328 the context switch rate.
4329
4330 Memory protection key mechanism stressor (Linux)
4331 --pkey N
4332 start N workers that change memory protection using a pro‐
4333 tection key (pkey) and the pkey_mprotect call (Linux only).
4334 This will try to allocate a pkey and use this for the page
4335 protection, however, if this fails then the special pkey -1
4336 will be used (and the kernel will use the normal mprotect
4337 mechanism instead). Various page protection mixes of
4338 read/write/exec/none will be cycled through on randomly
4339 chosen pre-allocated pages.
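       The allocate-or-fall-back behaviour described above can be sketched
       with the glibc pkey wrappers (glibc 2.27 or later, Linux only); on
       hardware without protection key support pkey_alloc() fails and the
       special key -1 gives plain mprotect semantics:

       ```c
       #define _GNU_SOURCE
       #include <assert.h>
       #include <stdio.h>
       #include <sys/mman.h>
       #include <unistd.h>

       int main(void)
       {
               size_t pg = (size_t)sysconf(_SC_PAGESIZE);
               char *page = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
               int pkey;

               assert(page != MAP_FAILED);

               /* Try to allocate a protection key; on CPUs or kernels
                  without pkey support this fails and we fall back to the
                  special key -1, which behaves like plain mprotect. */
               pkey = pkey_alloc(0, 0);
               if (pkey < 0)
                       pkey = -1;

               assert(pkey_mprotect(page, pg, PROT_READ, pkey) == 0);
               printf("page protected with pkey %d\n", pkey);

               if (pkey != -1)
                       pkey_free(pkey);
               munmap(page, pg);
               return 0;
       }
       ```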
4340
4341 --pkey-ops N
4342 stop after N pkey_mprotect page protection cycles.
4343
4344 Stress-ng plugin stressor
4345 --plugin N
4346 start N workers that run user provided stressor functions
4347 loaded from a shared library. The shared library can con‐
4348 tain one or more stressor functions prefixed with stress_
4349 in their name. By default the plugin stressor will find all
4350 functions prefixed with stress_ in their name and exercise
4351 these one by one in a round-robin loop, but a specific
4352 stressor can be selected using the --plugin-method option.
4353 The stressor function takes no parameters and returns 0 for
4354 success and non-zero for failure (and will terminate the
4355 plugin stressor). Each time a stressor function is executed
4356 the bogo-op counter is incremented by one. The following
4357 example performs 10,000 nop instructions per bogo-op:
4358
4359 int stress_example(void)
4360 {
4361 int i;
4362
4363 for (i = 0; i < 10000; i++) {
4364                          __asm__ __volatile__("nop");
4365 }
4366 return 0; /* Success */
4367 }
4368
4369 and compile the source into a shared library as, for exam‐
4370 ple:
4371
4372 gcc -fpic -shared -o example.so example.c
4373
4374 and run as using:
4375
4376 stress-ng --plugin 1 --plugin-so ./example.so
4377
4378 --plugin-method function
4379 run a specific stressor function, specify the name without
4380 the leading stress_ prefix.
4381
4382 --plugin-ops N
4383 stop after N iterations of the user provided stressor func‐
4384 tion(s).
4385
4386 --plugin-so name
4387 specify the shared library containing the user provided
4388 stressor function(s).
4389
4390 Polling stressor
4391 -P N, --poll N
4392 start N workers that perform zero timeout polling via the
4393 poll(2), ppoll(2), select(2), pselect(2) and sleep(3)
4394 calls. This wastes system and user time doing nothing.
4395
4396 --poll-fds N
4397 specify the number of file descriptors to poll/ppoll/se‐
4398 lect/pselect on. The maximum number for select/pselect is
4399 limited by FD_SETSIZE and the upper maximum is also limited
4400 by the maximum number of pipe open descriptors allowed.
4401
4402 --poll-ops N
4403 stop poll stress workers after N bogo poll operations.
4404
4405 Prctl stressor
4406 --prctl N
4407 start N workers that exercise the majority of the prctl(2)
4408 system call options. Each batch of prctl calls is performed
4409 inside a new child process to ensure the limit of prctl is
4410 contained inside a new process every time. Some prctl op‐
4411 tions are architecture specific, however, this stressor
4412 will exercise these even if they are not implemented.
4413
4414 --prctl-ops N
4415 stop prctl workers after N batches of prctl calls
4416
4417 L3 cache prefetching stressor
4418 --prefetch N
4419 start N workers that benchmark prefetch and non-prefetch
4420              reads of an L3 cache sized buffer. The buffer is read with
4421 loops of 8 × 64 bit reads per iteration. In the prefetch
4422 cases, data is prefetched ahead of the current read posi‐
4423 tion by various sized offsets, from 64 bytes to 8K to find
4424 the best memory read throughput. The stressor reports the
4425 non-prefetch read rate and the best prefetched read rate.
4426 It also reports the prefetch offset and an estimate of the
4427 amount of time between the prefetch issue and the actual
4428 memory read operation. These statistics will vary from run-
4429 to-run due to system noise and CPU frequency scaling.
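       The prefetch-ahead read pattern can be sketched with the default
       __builtin_prefetch method; the buffer size and prefetch offset here
       are illustrative rather than the stressor's tuned values:

       ```c
       #include <assert.h>
       #include <stdint.h>
       #include <stdio.h>
       #include <stdlib.h>

       #define BUF_WORDS (1024 * 1024)         /* 8 MB of 64 bit words */
       #define PREFETCH_OFFSET 64              /* words ahead, illustrative */

       int main(void)
       {
               uint64_t *buf = malloc(BUF_WORDS * sizeof(*buf));
               uint64_t sum = 0;
               size_t i;

               assert(buf != NULL);
               for (i = 0; i < BUF_WORDS; i++)
                       buf[i] = (uint64_t)i;

               /* Read 8 x 64 bit words per loop iteration, prefetching a
                  fixed distance ahead of the current read position. A
                  prefetch hint never faults, so running past the end of
                  the buffer is harmless. */
               for (i = 0; i + 8 <= BUF_WORDS; i += 8) {
                       __builtin_prefetch(&buf[i + PREFETCH_OFFSET], 0, 3);
                       sum += buf[i]     + buf[i + 1] + buf[i + 2] + buf[i + 3] +
                              buf[i + 4] + buf[i + 5] + buf[i + 6] + buf[i + 7];
               }

               /* Sum of 0..BUF_WORDS-1 */
               assert(sum == (uint64_t)BUF_WORDS * (BUF_WORDS - 1) / 2);
               printf("checksum ok\n");
               free(buf);
               return 0;
       }
       ```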
4430
4431 --prefetch-l3-size N
4432              specify the size of the L3 cache.
4433
4434 --prefetch-method N
4435 select the prefetching method. Available methods are:
4436
4437 Method Description
4438 builtin Use the __builtin_prefetch(3) function for
4439 prefetching. This is the default.
4440 builtinl0 Use the __builtin_prefetch(3) function for
4441 prefetching, with a locality 0 hint.
4442 builtinl3 Use the __builtin_prefetch(3) function for
4443 prefetching, with a locality 3 hint.
4444 dcbt Use the ppc64 dcbt instruction to fetch data
4445 into the L1 cache (ppc64 only).
4446 dcbtst Use the ppc64 dcbtst instruction to fetch data
4447 into the L1 cache (ppc64 only).
4448 prefetcht0 Use the x86 prefetcht0 instruction to prefetch
4449 data into all levels of the cache hierarchy
4450 (x86 only).
4451 prefetcht1 Use the x86 prefetcht1 instruction (temporal
4452 data with respect to first level cache) to
4453 prefetch data into level 2 cache and higher
4454 (x86 only).
4455 prefetcht2 Use the x86 prefetcht2 instruction (temporal
4456 data with respect to second level cache) to
4457 prefetch data into level 2 cache and higher
4458 (x86 only).
4459 prefetchnta Use the x86 prefetchnta instruction (non-tem‐
4460 poral data with respect to all cache levels)
4461 into a location close to the processor, mini‐
4462 mizing cache pollution (x86 only).
4463
4464 --prefetch-ops N
4465 stop prefetch stressors after N benchmark operations
4466
4467 Privileged CPU instructions stressor
4468 --priv-instr N
4469 start N workers that exercise various architecture specific
4470 privileged instructions that cannot be executed by
4471 userspace programs. These instructions will be trapped and
4472 processed by SIGSEGV or SIGILL signal handlers.
4473
4474 --priv-instr-ops N
4475 stop priv-instr stressors after N rounds of executing priv‐
4476 ileged instructions.
4477
4478 /proc stressor
4479 --procfs N
4480 start N workers that read files from /proc and recursively
4481 read files from /proc/self (Linux only).
4482
4483 --procfs-ops N
4484 stop procfs reading after N bogo read operations. Note,
4485 since the number of entries may vary between kernels, this
4486 bogo ops metric is probably very misleading.
4487
4488 Pthread stressor
4489 --pthread N
4490              start N workers that iteratively create and terminate
4491 multiple pthreads (the default is 1024 pthreads per
4492 worker). In each iteration, each newly created pthread
4493 waits until the worker has created all the pthreads and
4494 then they all terminate together.
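       The create-wait-terminate cycle can be sketched with a pthread bar‐
       rier; 64 threads are used here rather than the stressor's default
       of 1024 per worker:

       ```c
       #include <assert.h>
       #include <pthread.h>
       #include <stdio.h>

       #define NUM_THREADS 64  /* the stressor defaults to 1024 per worker */

       static pthread_barrier_t barrier;

       static void *thread_fn(void *arg)
       {
               (void)arg;
               /* Wait until every pthread has been created, then all
                  terminate together, as each stressor iteration does. */
               pthread_barrier_wait(&barrier);
               return NULL;
       }

       int main(void)
       {
               pthread_t threads[NUM_THREADS];
               int i;

               pthread_barrier_init(&barrier, NULL, NUM_THREADS + 1);
               for (i = 0; i < NUM_THREADS; i++)
                       assert(pthread_create(&threads[i], NULL,
                                             thread_fn, NULL) == 0);

               pthread_barrier_wait(&barrier);         /* release them all */
               for (i = 0; i < NUM_THREADS; i++)
                       pthread_join(threads[i], NULL);

               printf("created and joined %d pthreads\n", NUM_THREADS);
               return 0;
       }
       ```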
4495
4496 --pthread-max N
4497 create N pthreads per worker. If the product of the number
4498 of pthreads by the number of workers is greater than the
4499 soft limit of allowed pthreads then the maximum is re-ad‐
4500 justed down to the maximum allowed.
4501
4502 --pthread-ops N
4503 stop pthread workers after N bogo pthread create opera‐
4504 tions.
4505
4506 Ptrace stressor
4507 --ptrace N
4508 start N workers that fork and trace system calls of a child
4509 process using ptrace(2).
4510
4511 --ptrace-ops N
4512 stop ptracer workers after N bogo system calls are traced.
4513
4514 Pseudo-terminals (pty) stressor
4515 --pty N
4516 start N workers that repeatedly attempt to open pseudoter‐
4517 minals and perform various pty ioctls upon the ptys before
4518 closing them.
4519
4520 --pty-max N
4521 try to open a maximum of N pseudoterminals, the default is
4522 65536. The allowed range of this setting is 8..65536.
4523
4524 --pty-ops N
4525 stop pty workers after N pty bogo operations.
4526
4527 Qsort stressor
4528 -Q, --qsort N
4529 start N workers that sort 32 bit integers using qsort.
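       Sorting 32 bit integers with qsort(3) can be sketched as follows,
       using the stressor's default sort size of 262144 integers:

       ```c
       #include <assert.h>
       #include <stdint.h>
       #include <stdio.h>
       #include <stdlib.h>

       /* Comparator for 32 bit integers; returns the sign of the
          comparison without risking subtraction overflow. */
       static int cmp_int32(const void *a, const void *b)
       {
               int32_t x = *(const int32_t *)a;
               int32_t y = *(const int32_t *)b;

               return (x > y) - (x < y);
       }

       int main(void)
       {
               enum { N = 262144 };    /* the stressor's default qsort size */
               int32_t *data = malloc(N * sizeof(*data));
               size_t i;

               assert(data != NULL);
               srand(12345);
               for (i = 0; i < N; i++)
                       data[i] = (int32_t)rand();

               qsort(data, N, sizeof(*data), cmp_int32);

               for (i = 1; i < N; i++)
                       assert(data[i - 1] <= data[i]);
               printf("sorted %d integers\n", N);
               free(data);
               return 0;
       }
       ```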
4530
4531 --qsort-method [ qsort-libc | qsort-bm ]
4532 select either the libc implementation of qsort or the J. L.
4533 Bentley and M. D. McIlroy implementation of qsort. The de‐
4534 fault is the libc implementation.
4535
4536 --qsort-ops N
4537 stop qsort stress workers after N bogo qsorts.
4538
4539 --qsort-size N
4540 specify number of 32 bit integers to sort, default is
4541 262144 (256 × 1024).
4542
4543 Quota stressor
4544 --quota N
4545 start N workers that exercise the Q_GETQUOTA, Q_GETFMT,
4546 Q_GETINFO, Q_GETSTATS and Q_SYNC quotactl(2) commands on
4547 all the available mounted block based file systems. Re‐
4548 quires CAP_SYS_ADMIN capability to run.
4549
4550 --quota-ops N
4551 stop quota stress workers after N bogo quotactl operations.
4552
4553 Process scheduler stressor
4554 --race-sched N
4555 start N workers that exercise rapid changing CPU affinity
4556 child processes both from the controlling stressor and by
4557 the child processes. Child processes are created and termi‐
4558 nated rapidly with the aim to create race conditions where
4559 affinity changing occurs during process run states.
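       An affinity change like the "next" method below can be sketched
       with sched_setaffinity(2) (Linux only); if the process's allowed
       CPU set is restricted (e.g. by cpusets) the sketch falls back to
       pinning to the current CPU:

       ```c
       #define _GNU_SOURCE
       #include <assert.h>
       #include <sched.h>
       #include <stdio.h>
       #include <unistd.h>

       int main(void)
       {
               long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
               int cpu, next;
               cpu_set_t set;

               assert(ncpus > 0);

               /* The "next" method: move to the next CPU, wrapping to
                  zero when the maximum CPU is reached. */
               cpu = sched_getcpu();
               assert(cpu >= 0);
               next = (cpu + 1) % (int)ncpus;

               CPU_ZERO(&set);
               CPU_SET(next, &set);
               if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                       /* Affinity may be restricted; fall back to
                          pinning to the current CPU instead. */
                       CPU_ZERO(&set);
                       CPU_SET(cpu, &set);
                       assert(sched_setaffinity(0, sizeof(set), &set) == 0);
                       next = cpu;
               }

               printf("pinned to CPU %d\n", next);
               return 0;
       }
       ```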
4560
4561 --race-sched-method [ all | next | prev | rand | randinc | sync‐
4562 next | syncprev ]
4563              Select the method of moving a process to a specific CPU.
4564 Available methods are described as follows:
4565
4566 Method Description
4567 all iterate over all the race-sched methods as listed
4568 below:
4569 next move a process to the next CPU, wrap around to
4570 zero when maximum CPU is reached.
4571 prev move a process to the previous CPU, wrap around
4572 to the maximum CPU when the first CPU is reached.
4573 rand move a process to any randomly chosen CPU.
4574 randinc move a process to the current CPU + a randomly
4575 chosen value 1..4, modulo the number of CPUs.
4576 syncnext move synchronously all the race-sched stressor
4577 processes to the next CPU every second; this
4578 loads just 1 CPU at a time in a round-robin
4579 method.
4580 syncprev move synchronously all the race-sched stressor
4581 processes to the previous CPU every second; this
4582 loads just 1 CPU at a time in a round-robin
4583 method.
4584
4585 --race-sched-ops N
4586 stop after N process creation bogo-operations.
4587
4588 Radixsort stressor
4589 --radixsort N
4590 start N workers that sort random 8 byte strings using
4591 radixsort.
4592
4593 --radixsort-ops N
4594 stop radixsort stress workers after N bogo radixsorts.
4595
4596 --radixsort-size N
4597 specify number of strings to sort, default is 262144 (256 ×
4598 1024).
4599
4600 Memory filesystem stressor
4601 --ramfs N
4602 start N workers mounting a memory based file system using
4603 ramfs and tmpfs (Linux only). This alternates between
4604 mounting and umounting a ramfs or tmpfs file system using
4605              the traditional mount(2) and umount(2) system calls as well
4606 as the newer Linux 5.2 fsopen(2), fsmount(2), fsconfig(2)
4607 and move_mount(2) system calls if they are available. The
4608 default ram file system size is 2MB.
4609
4610 --ramfs-fill
4611 fill ramfs with zero'd data using fallocate(2) if it is
4612 available or multiple calls to write(2) if not.
4613
4614 --ramfs-ops N
4615 stop after N ramfs mount operations.
4616
4617 --ramfs-size N
4618 set the ramfs size (must be multiples of the page size).
4619
4620 Raw device stressor
4621 --rawdev N
4622 start N workers that read the underlying raw drive device
4623 using direct IO reads. The device (with minor number 0)
4624 that stores the current working directory is the raw device
4625 to be read by the stressor. The read size is exactly the
4626              underlying device block size. By default, this stressor
4627              will exercise all of the rawdev methods (see
4628 the --rawdev-method option). This is a Linux only stressor
4629 and requires root privilege to be able to read the raw de‐
4630 vice.
4631
4632 --rawdev-method method
4633 Available rawdev stress methods are described as follows:
4634
4635 Method Description
4636 all iterate over all the rawdev stress methods as
4637 listed below:
4638 sweep repeatedly read across the raw device from the 0th
4639 block to the end block in steps of the number of
4640 blocks on the device / 128 and back to the start
4641 again.
4642              wiggle repeatedly read across the raw device in 128 evenly
4643                     spaced steps with each step reading 1024 blocks
4644                     backwards from each step.
4645 ends repeatedly read the first and last 128 start and
4646 end blocks of the raw device alternating from start
4647 of the device to the end of the device.
4648              random repeatedly read 256 random blocks.
4649 burst repeatedly read 256 sequential blocks starting from
4650 a random block on the raw device.
4651
4652 --rawdev-ops N
4653 stop the rawdev stress workers after N raw device read bogo
4654 operations.
4655
4656 Random list stressor
4657 --randlist N
4658              start N workers that create a list of objects in random‐
4659 ized memory order and traverses the list setting and read‐
4660              ing the objects. This is designed to exercise memory and
4661 cache thrashing. Normally the objects are allocated on the
4662 heap, however for objects of page size or larger there is a
4663 1 in 16 chance of objects being allocated using shared
4664 anonymous memory mapping to mix up the address spaces of
4665 the allocations to create more TLB thrashing.
4666
4667       --randlist-compact
4668 Allocate all the list objects using one large heap alloca‐
4669 tion and divide this up for all the list objects. This re‐
4670 moves the overhead of the heap keeping track of each list
4671 object, hence uses less memory.
4672
4673 --randlist-items N
4674 Allocate N items on the list. By default, 100,000 items are
4675 allocated.
4676
4677 --randlist-ops N
4678 stop randlist workers after N list traversals
4679
4680 --randlist-size N
4681 Allocate each item to be N bytes in size. By default, the
4682 size is 64 bytes of data payload plus the list handling
4683 pointer overhead.
4684
4685 Localhost raw socket stressor
4686 --rawsock N
4687 start N workers that send and receive packet data using raw
4688 sockets on the localhost. Requires CAP_NET_RAW to run.
4689
4690 --rawsock-ops N
4691 stop rawsock workers after N packets are received.
4692
4693 --rawsock-port P
4694 start at socket port P. For N rawsock worker processes,
4695              ports P to P + N - 1 are used.
4696
4697 Localhost ethernet raw packets stressor
4698 --rawpkt N
4699              start N workers that send and receive ethernet packets
4700 using raw packets on the localhost via the loopback device.
4701 Requires CAP_NET_RAW to run.
4702
4703 --rawpkt-ops N
4704 stop rawpkt workers after N packets from the sender process
4705 are received.
4706
4707       --rawpkt-port P
4708 start at port P. For N rawpkt worker processes, ports P to
4709 (P * 4) - 1 are used. The default starting port is port
4710 14000.
4711
4712 --rawpkt-rxring N
4713 setup raw packets with RX ring with N number of blocks,
4714              this selects TPACKET_V3. N must be one of 1, 2, 4, 8 or 16.
4715
4716 Localhost raw UDP packet stressor
4717 --rawudp N
4718 start N workers that send and receive UDP packets using raw
4719 sockets on the localhost. Requires CAP_NET_RAW to run.
4720
4721 --rawudp-if NAME
4722 use network interface NAME. If the interface NAME does not
4723 exist, is not up or does not support the domain then the
4724 loopback (lo) interface is used as the default.
4725
4726 --rawudp-ops N
4727 stop rawudp workers after N packets are received.
4728
4729       --rawudp-port P
4730 start at port P. For N rawudp worker processes, ports P to
4731 (P * 4) - 1 are used. The default starting port is port
4732 13000.
4733
4734 Random number generator stressor
4735 --rdrand N
4736 start N workers that read a random number from an on-chip
4737              random number generator. This uses the rdrand instruction on
4738 Intel x86 processors or the darn instruction on Power9 pro‐
4739 cessors.
4740
4741 --rdrand-ops N
4742 stop rdrand stress workers after N bogo rdrand operations
4743 (1 bogo op = 2048 random bits successfully read).
4744
4745 --rdrand-seed
4746 use rdseed instead of rdrand (x86 only).
4747
4748 Read-ahead stressor
4749 --readahead N
4750 start N workers that randomly seek and perform 4096 byte
4751 read/write I/O operations on a file with readahead. The de‐
4752 fault file size is 64 MB. Readaheads and reads are batched
4753 into 16 readaheads and then 16 reads.
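       The batching described above can be sketched with the glibc
       readahead(2) wrapper (Linux only, _GNU_SOURCE); readahead failures
       are ignored here since some filesystems do not support it, and the
       file size is scaled down for illustration:

       ```c
       #define _GNU_SOURCE
       #include <assert.h>
       #include <fcntl.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <string.h>
       #include <unistd.h>

       #define FILE_SIZE (1024 * 1024)
       #define IO_SIZE 4096
       #define BATCH 16

       int main(void)
       {
               char template[] = "/tmp/readahead-XXXXXX";
               int fd = mkstemp(template);
               char buf[IO_SIZE];
               int i;

               assert(fd >= 0);
               unlink(template);               /* delete on close */

               memset(buf, 'x', sizeof(buf));
               for (i = 0; i < FILE_SIZE / IO_SIZE; i++)
                       assert(write(fd, buf, sizeof(buf)) == sizeof(buf));

               /* Batch 16 readaheads, then 16 reads, as the stressor
                  does; readahead is advisory so errors are ignored. */
               for (i = 0; i < BATCH; i++) {
                       off_t off = (off_t)(rand() % (FILE_SIZE / IO_SIZE)) *
                                   IO_SIZE;
                       (void)readahead(fd, off, IO_SIZE);
               }
               for (i = 0; i < BATCH; i++) {
                       off_t off = (off_t)(rand() % (FILE_SIZE / IO_SIZE)) *
                                   IO_SIZE;
                       assert(pread(fd, buf, IO_SIZE, off) == IO_SIZE);
               }

               printf("read %d blocks after readahead\n", BATCH);
               close(fd);
               return 0;
       }
       ```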
4754
4755 --readahead-bytes N
4756 set the size of readahead file, the default is 1 GB. One
4757 can specify the size as % of free space on the file system
4758 or in units of Bytes, KBytes, MBytes and GBytes using the
4759 suffix b, k, m or g.
4760
4761 --readahead-ops N
4762 stop readahead stress workers after N bogo read operations.
4763
4764 Reboot stressor
4765 --reboot N
4766 start N workers that exercise the reboot(2) system call.
4767 When possible, it will create a process in a PID namespace
4768              and perform a reboot power off command that should shut down
4769 the process. Also, the stressor exercises invalid reboot
4770 magic values and invalid reboots when there are insuffi‐
4771 cient privileges that will not actually reboot the system.
4772
4773 --reboot-ops N
4774 stop the reboot stress workers after N bogo reboot cycles.
4775
4776 CPU registers stressor
4777 --regs N
4778 start N workers that shuffle data around the CPU registers
4779 exercising register move instructions. Each bogo-op repre‐
4780 sents 1000 calls of a shuffling function that shuffles the
4781 registers 32 times. Only implemented for the GCC compiler
4782 since this requires register annotations and optimization
4783 level 0 to compile appropriately.
4784
4785 --regs-ops N
4786 stop regs stressors after N bogo operations.
4787
4788 Memory page reordering stressor
4789 --remap N
4790 start N workers that map 512 pages and re-order these pages
4791 using the deprecated system call remap_file_pages(2). Sev‐
4792 eral page re-orderings are exercised: forward, reverse,
4793 random and many pages to 1 page.
4794
4795 --remap-mlock
4796 attempt to mlock mmap'd huge pages into memory causing more
4797              memory pressure by preventing pages from being swapped out.
4798
4799 --remap-ops N
4800 stop after N remapping bogo operations.
4801
4802 --remap-pages N
4803 specify number of pages to remap, must be a power of 2, de‐
4804 fault is 512 pages.
4805
4806 Renaming file stressor
4807 -R N, --rename N
4808 start N workers that each create a file and then repeatedly
4809 rename it.
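       The create-then-rename loop can be sketched with rename(2); the
       file names and iteration count here are illustrative:

       ```c
       #include <assert.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <unistd.h>

       int main(void)
       {
               char name_a[] = "/tmp/rename-a-XXXXXX";
               char name_b[64];
               int fd = mkstemp(name_a);
               int i;

               assert(fd >= 0);
               close(fd);
               snprintf(name_b, sizeof(name_b), "%s-renamed", name_a);

               /* Repeatedly rename the file back and forth. */
               for (i = 0; i < 1000; i++) {
                       assert(rename(name_a, name_b) == 0);
                       assert(rename(name_b, name_a) == 0);
               }

               assert(access(name_a, F_OK) == 0);
               printf("renames: %d\n", 2 * 1000);
               unlink(name_a);
               return 0;
       }
       ```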
4810
4811 --rename-ops N
4812 stop rename stress workers after N bogo rename operations.
4813
4814 Process rescheduling stressor
4815 --resched N
4816 start N workers that exercise process rescheduling. Each
4817 stressor spawns a child process for each of the positive
4818 nice levels and iterates over the nice levels from 0 to the
4819 lowest priority level (highest nice value). For each of the
4820 nice levels 1024 iterations over 3 non-real time scheduling
4821              policies SCHED_OTHER, SCHED_BATCH and SCHED_IDLE are set and
4822 a sched_yield occurs to force heavy rescheduling activity.
4823 When the -v verbose option is used the distribution of the
4824 number of yields across the nice levels is printed for the
4825 first stressor out of the N stressors.
4826
4827 --resched-ops N
4828 stop after N rescheduling sched_yield calls.
4829
4830 System resources stressor
4831 --resources N
4832 start N workers that consume various system resources. Each
4833 worker will spawn 1024 child processes that iterate 1024
4834 times consuming shared memory, heap, stack, temporary files
4835 and various file descriptors (eventfds, memoryfds, user‐
4836 faultfds, pipes and sockets).
4837
4838 --resources-mlock
4839 attempt to mlock mmap'd pages into memory causing more mem‐
4840              ory pressure by preventing pages from being swapped out.
4841
4842 --resources-ops N
4843 stop after N resource child forks.
4844
4845 Writing temporary files in reverse position stressor
4846 --revio N
4847 start N workers continually writing in reverse position or‐
4848 der to temporary files. The default mode is to stress test
4849 reverse position ordered writes with randomly sized sparse
4850 holes between each write. With the --aggressive option en‐
4851 abled without any --revio-opts options the revio stressor
4852              will work through all the --revio-opts options one by one to
4853 cover a range of I/O options.
4854
4855 --revio-bytes N
4856 write N bytes for each revio process, the default is 1 GB.
4857 One can specify the size as % of free space on the file
4858 system or in units of Bytes, KBytes, MBytes and GBytes us‐
4859 ing the suffix b, k, m or g.
4860
4861 --revio-opts list
4862 specify various stress test options as a comma separated
4863 list. Options are the same as --hdd-opts but without the
4864 iovec option.
4865
4866 --revio-ops N
4867 stop revio stress workers after N bogo operations.
4868
4869 --revio-write-size N
4870 specify size of each write in bytes. Size can be from 1
4871 byte to 4MB.
4872
4873 Ring pipes stressor
4874 --ring-pipe N
4875 start N workers that move data around a ring of pipes using
4876 poll to detect when data is ready to copy. By default, 256
4877 pipes are used with two 4096 byte items of data being
4878 copied around the ring of pipes. Data is copied using read
4879 and write system calls. If the splice system call is avail‐
4880 able then one can use splice to use more efficient in-ker‐
4881 nel data passing instead of buffer copying.
4882
4883 --ring-pipe-num N
4884 specify the number of pipes to use. Ranges from 4 to
4885 262144, default is 256.
4886
4887 --ring-pipe-ops N
4888 stop after N pipe data transfers.
4889
4890 --ring-pipe-size N
4891 specify the size of data being copied in bytes. Ranges from
4892 1 to 4096, default is 4096.
4893
4894 --ring-pipe-splice
4895 enable splice to move data between pipes (only if splice()
4896 is available).
4897
4898 Rlimit stressor
4899 --rlimit N
4900              start N workers that exceed CPU and file size resource lim‐
4901              its, generating SIGXCPU and SIGXFSZ signals.
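       Provoking SIGXFSZ via RLIMIT_FSIZE can be sketched as follows; with
       a handler installed, the offending write() fails with EFBIG instead
       of terminating the process (the limit value is illustrative):

       ```c
       #include <assert.h>
       #include <signal.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <sys/resource.h>
       #include <unistd.h>

       static volatile sig_atomic_t caught_sigxfsz;

       static void handler(int sig)
       {
               (void)sig;
               caught_sigxfsz = 1;
       }

       int main(void)
       {
               char template[] = "/tmp/rlimit-XXXXXX";
               struct rlimit rlim = { .rlim_cur = 4096, .rlim_max = 4096 };
               char buf[1] = { 'x' };
               int fd;

               /* Catch SIGXFSZ instead of dying; with a handler the
                  offending write() returns -1 with errno EFBIG. */
               signal(SIGXFSZ, handler);
               assert(setrlimit(RLIMIT_FSIZE, &rlim) == 0);

               fd = mkstemp(template);
               assert(fd >= 0);
               unlink(template);

               /* Attempt a write beyond the 4096 byte file size limit. */
               assert(lseek(fd, 8192, SEEK_SET) == 8192);
               assert(write(fd, buf, 1) == -1);
               assert(caught_sigxfsz);

               printf("caught SIGXFSZ\n");
               close(fd);
               return 0;
       }
       ```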
4902
4903 --rlimit-ops N
4904 stop after N bogo resource limited SIGXCPU and SIGXFSZ sig‐
4905 nals have been caught.
4906
4907 VM reverse-mapping stressor
4908 --rmap N
4909 start N workers that exercise the VM reverse-mapping. This
4910 creates 16 processes per worker that write/read multiple
4911 file-backed memory mappings. There are 64 lots of 4 page
4912 mappings made onto the file, with each mapping overlapping
4913 the previous by 3 pages and at least 1 page of non-mapped
4914 memory between each of the mappings. Data is synchronously
4915 msync'd to the file 1 in every 256 iterations in a random
4916 manner.
4917
4918 --rmap-ops N
4919 stop after N bogo rmap memory writes/reads.
4920
4921 1 bit rotation stressor
4922 --rotate N
4923 start N workers that exercise 1 bit rotates left and right
4924 of unsigned integer variables. The default will rotate
4925 four 8, 16, 32, 64 (and if supported 128) bit values 10000
4926 times in a loop per bogo-op.
4927
4928 --rotate-method method
4929 specify the method of rotation to use. The 'all' method
4930 uses all the methods and is the default.
4931
4932 Method Description
4933 all exercise with all the rotate stressor methods (see below):
4934 rol8 8 bit unsigned rotate left by 1 bit
4935 ror8 8 bit unsigned rotate right by 1 bit
4936 rol16 16 bit unsigned rotate left by 1 bit
4937 ror16 16 bit unsigned rotate right by 1 bit
4938 rol32 32 bit unsigned rotate left by 1 bit
4939 ror32 32 bit unsigned rotate right by 1 bit
4940 rol64 64 bit unsigned rotate left by 1 bit
4941 ror64 64 bit unsigned rotate right by 1 bit
4942 rol128 128 bit unsigned rotate left by 1 bit
4943 ror128 128 bit unsigned rotate right by 1 bit
4944
4945 --rotate-ops N
4946 stop after N bogo rotate operations.
4947
4948 Restartable sequences (rseq) stressor (Linux)
4949 --rseq N
4950 start N workers that exercise restartable sequences via the
4951 rseq(2) system call. This loops over a long duration crit‐
4952 ical section that is likely to be interrupted. A rseq
4953 abort handler keeps count of the number of interruptions
4954 and a SIGSEGV handler also tracks any failed rseq aborts
4955 that can occur if there is a mismatch in a rseq check sig‐
4956 nature. Linux only.
4957
4958 --rseq-ops N
4959 stop after N bogo rseq operations. Each bogo rseq operation
4960 is equivalent to 10000 iterations over a long duration rseq
4961 handled critical section.
4962
4963 Real-time clock stressor
4964 --rtc N
4965 start N workers that exercise the real time clock (RTC) in‐
4966 terfaces via /dev/rtc and /sys/class/rtc/rtc0. No destruc‐
4967 tive writes (modifications) are performed on the RTC. This
4968 is a Linux only stressor.
4969
4970 --rtc-ops N
4971 stop after N bogo RTC interface accesses.
4972
4973 Fast process rescheduling stressor
4974 --schedmix N
4975 start N workers that each start child processes that re‐
4976 peatedly select a random scheduling policy and then exe‐
4977 cute a short duration, randomly chosen time consuming ac‐
4978 tivity. This exercises rapid re-scheduling of processes and
4979 generates a large amount of scheduling timer interrupts.
4980
4981 --schedmix-ops N
4982 stop after N scheduling mixed operations.
4983
4984 --schedmix-procs N
4985 specify the number of child processes to run for each stres‐
4986 sor instance, range from 1 to 64, default is 16.
4987
4988 Scheduling policy stressor
4989 --schedpolicy N
4990 start N workers that set the worker to various available
4991 scheduling policies out of SCHED_OTHER, SCHED_BATCH,
4992 SCHED_IDLE, SCHED_FIFO, SCHED_RR and SCHED_DEADLINE. For
4993 the real time scheduling policies a random sched priority
4994 is selected between the minimum and maximum scheduling pri‐
4995 ority settings.
4996
4997 --schedpolicy-ops N
4998 stop after N bogo scheduling policy changes.
4999
5000 --schedpolicy-rand
5001 Select scheduling policy randomly so that the new policy is
5002 always different to the previous policy. The default is to
5003 work through the scheduling policies sequentially.
5004
5005 Stream control transmission protocol (SCTP) stressor
5006 --sctp N
5007 start N workers that perform network sctp stress activity
5008 using the Stream Control Transmission Protocol (SCTP).
5009 This involves client/server processes performing rapid con‐
5010 nect, send/receives and disconnects on the local host.
5011
5012 --sctp-domain D
5013 specify the domain to use, the default is ipv4. Currently
5014 ipv4 and ipv6 are supported.
5015
5016 --sctp-if NAME
5017 use network interface NAME. If the interface NAME does not
5018 exist, is not up or does not support the domain then the
5019 loopback (lo) interface is used as the default.
5020
5021 --sctp-ops N
5022 stop sctp workers after N bogo operations.
5023
5024 --sctp-port P
5025 start at sctp port P. For N sctp worker processes, ports P
5026 to (P * 4) - 1 are used for ipv4, ipv6 domains and ports P
5027 to P + N - 1 are used for the unix domain.
5028
5029 --sctp-sched [ fcfs | prio | rr ]
5030 specify SCTP scheduler, one of fcfs (default), prio (prior‐
5031 ity) or rr (round-robin).
5032
5033 File sealing (SEAL) stressor (Linux)
5034 --seal N
5035 start N workers that exercise the fcntl(2) SEAL commands on
5036 a small anonymous file created using memfd_create(2). Af‐
5037 ter each SEAL command is issued the stressor also sanity
5038 checks if the seal operation has sealed the file correctly.
5039 (Linux only).
5040
5041 --seal-ops N
5042 stop after N bogo seal operations.
5043
5044 Secure computing stressor
5045 --seccomp N
5046 start N workers that exercise Secure Computing system call
5047 filtering. Each worker creates child processes that write a
5048 short message to /dev/null and then exits. 2% of the child
5049 processes have a seccomp filter that disallows the write
5050 system call and hence are killed by seccomp with a
5051 SIGSYS. Note that this stressor can generate many audit
5052 log messages each time the child is killed. Requires
5053 CAP_SYS_ADMIN to run.
5054
5055 --seccomp-ops N
5056 stop seccomp stress workers after N seccomp filter tests.
5057
5058 Secret memory stressor (Linux >= 5.11)
5059 --secretmem N
5060 start N workers that mmap pages using file mapping off a
5061 memfd_secret file descriptor. Each stress loop iteration
5062 will expand the mappable region by 3 pages using ftruncate
5063 and mmap and touch the pages. The pages are then frag‐
5064 mented by unmapping the middle page and then unmapping the
5065 first and last pages. This tries to force page fragmenta‐
5066 tion and also trigger out of memory (OOM) kills of the
5067 stressor when the secret memory is exhausted. Note this is
5068 a Linux 5.11+ only stressor and the kernel needs to be
5069 booted with "secretmem=" option to allocate a secret memory
5070 reservation.
5071
5072 --secretmem-ops N
5073 stop secretmem stress workers after N stress loop itera‐
5074 tions.
5075
5076 IO seek stressor
5077 --seek N
5078 start N workers that randomly seek and perform 512 byte
5079 read/write I/O operations on a file. The default file size
5080 is 16 GB.
5081
5082 --seek-ops N
5083 stop seek stress workers after N bogo seek operations.
5084
5085 --seek-punch
5086 punch randomly located 8K holes into the file to create
5087 more extents, forcing a more demanding seek workload (Linux
5088 only).
5089
5090 --seek-size N
5091 specify the size of the file in bytes. Small file sizes al‐
5092 low the I/O to occur in the cache, causing greater CPU
5093 load. Large file sizes force more I/O operations to the
5094 drive, causing more wait time and more drive I/O. One can
5095 specify the size in units of Bytes, KBytes, MBytes and
5096 GBytes using the suffix b, k, m or g.
5097
5098 POSIX semaphore stressor
5099 --sem N
5100 start N workers that perform POSIX semaphore wait and post
5101 operations. By default, a parent and 4 children are started
5102 per worker to provide some contention on the semaphore.
5103 This stresses fast semaphore operations and produces rapid
5104 context switching.
5105
5106 --sem-ops N
5107 stop semaphore stress workers after N bogo semaphore opera‐
5108 tions.
5109
5110 --sem-procs N
5111 start N child workers per worker to provide contention on
5112 the semaphore, the default is 4 and a maximum of 64 are al‐
5113 lowed.
5114
5115 --sem-sysv N
5116 start N workers that perform System V semaphore wait and
5117 post operations. By default, a parent and 4 children are
5118 started per worker to provide some contention on the sema‐
5119 phore. This stresses fast semaphore operations and produces
5120 rapid context switching.
5121
5122 --sem-sysv-ops N
5123 stop semaphore stress workers after N bogo System V sema‐
5124 phore operations.
5125
5126 --sem-sysv-procs N
5127 start N child processes per worker to provide contention on
5128 the System V semaphore, the default is 4 and a maximum of
5129 64 are allowed.
5130
5131 Sendfile stressor
5132 --sendfile N
5133 start N workers that send an empty file to /dev/null. This
5134 operation spends nearly all the time in the kernel. The
5135 default sendfile size is 4MB. The sendfile options are for
5136 Linux only.
5137
5138 --sendfile-ops N
5139 stop sendfile workers after N sendfile bogo operations.
5140
5141 --sendfile-size S
5142 specify the size to be copied with each sendfile call. The
5143 default size is 4MB. One can specify the size in units of
5144 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
5145 or g.
5146
5147 Sessions stressor
5148 --session N
5149 start N workers that create child and grandchild processes
5150 that set and get their session ids. 25% of the grandchild
5151 processes are not waited for by the child to create or‐
5152 phaned sessions that need to be reaped by init.
5153
5154 --session-ops N
5155 stop session workers after N child processes are spawned
5156 and reaped.
5157
5158 Setting data in the Kernel stressor
5159 --set N
5160 start N workers that call system calls that try to set data
5161 in the kernel, currently these are: setgid, sethostname,
5162 setpgid, setpgrp, setuid, setgroups, setreuid, setregid,
5163 setresuid, setresgid and setrlimit. Some of these system
5164 calls are OS specific.
5165
5166 --set-ops N
5167 stop set workers after N bogo set operations.
5168
5169 Shellsort stressor
5170 --shellsort N
5171 start N workers that sort 32 bit integers using shellsort.
5172
5173 --shellsort-ops N
5174 stop shellsort stress workers after N bogo shellsorts.
5175
5176 --shellsort-size N
5177 specify number of 32 bit integers to sort, default is
5178 262144 (256 × 1024).
5179
5180 POSIX shared memory stressor
5181 --shm N
5182 start N workers that open and allocate shared memory ob‐
5183 jects using the POSIX shared memory interfaces. By de‐
5184 fault, the test will repeatedly create and destroy 32
5185 shared memory objects, each of which is 8MB in size.
5186
5187 --shm-bytes N
5188 specify the size of the POSIX shared memory objects to be
5189 created. One can specify the size as % of total available
5190 memory or in units of Bytes, KBytes, MBytes and GBytes us‐
5191 ing the suffix b, k, m or g.
5192
5193 --shm-mlock
5194 attempt to mlock shared memory objects into memory causing
5195 more memory pressure by preventing pages from being swapped out.
5196
5197 --shm-objs N
5198 specify the number of shared memory objects to be created.
5199
5200 --shm-ops N
5201 stop after N POSIX shared memory create and destroy bogo
5202 operations are complete.
5203
5204 --shm-sysv N
5205 start N workers that allocate shared memory using the Sys‐
5206 tem V shared memory interface. By default, the test will
5207 repeatedly create and destroy 8 shared memory segments,
5208 each of which is 8MB in size.
5209
5210 --shm-sysv-bytes N
5211 specify the size of the shared memory segment to be cre‐
5212 ated. One can specify the size as % of total available mem‐
5213 ory or in units of Bytes, KBytes, MBytes and GBytes using
5214 the suffix b, k, m or g.
5215
5216 --shm-sysv-mlock
5217 attempt to mlock shared memory segment into memory causing
5218 more memory pressure by preventing pages from being swapped out.
5219
5220 --shm-sysv-ops N
5221 stop after N shared memory create and destroy bogo opera‐
5222 tions are complete.
5223
5224 --shm-sysv-segs N
5225 specify the number of shared memory segments to be created.
5226 The default is 8 segments.
5227
5228 SIGABRT stressor
5229 --sigabrt N
5230 start N workers that create children that are killed by
5231 SIGABRT signals or by calling abort(3).
5232
5233 --sigabrt-ops N
5234 stop the sigabrt workers after N SIGABRT signals are suc‐
5235 cessfully handled.
5236
5237 SIGBUS stressor
5238 --sigbus N
5239 start N workers that rapidly create and catch bus errors
5240 generated via misaligned access and accessing a file backed
5241 memory mapping that does not have file storage to back the
5242 page being accessed.
5243
5244 --sigbus-ops N
5245 stop sigbus stress workers after N bogo bus errors.
5246
5247 SIGCHLD stressor
5248 --sigchld N
5249 start N workers that create children to generate SIGCHLD
5250 signals. This exercises children that exit (CLD_EXITED),
5251 get killed (CLD_KILLED), get stopped (CLD_STOPPED) or con‐
5252 tinued (CLD_CONTINUED).
5253
5254 --sigchld-ops N
5255 stop the sigchld workers after N SIGCHLD signals are suc‐
5256 cessfully handled.
5257
5258 SIGFD stressor (Linux)
5259 --sigfd N
5260 start N workers that generate SIGRT signals that are han‐
5261 dled by a child process reading a file descriptor set up
5262 using signalfd(2). (Linux only). This will generate a
5263 heavy context switch load when all CPUs are fully loaded.
5264
5265 --sigfd-ops N
5266 stop sigfd workers after N bogo SIGUSR1 signals are sent.
5267
5268 SIGFPE stressor
5269 --sigfpe N
5270 start N workers that rapidly cause division by zero SIGFPE
5271 faults.
5272
5273 --sigfpe-ops N
5274 stop sigfpe stress workers after N bogo SIGFPE faults.
5275
5276 SIGIO stressor
5277 --sigio N
5278 start N workers that read data from a child process via a
5279 pipe and generate SIGIO signals. This exercises asynchro‐
5280 nous I/O via SIGIO.
5281
5282 --sigio-ops N
5283 stop sigio stress workers after handling N SIGIO signals.
5284
5285 System signals stressor
5286 --signal N
5287 start N workers that exercise the signal system call using
5288 three signal handlers, SIG_IGN (ignore), a SIGCHLD han‐
5289 dler and SIG_DFL (default action). For the SIGCHLD han‐
5290 dler, the stressor sends itself a SIGCHLD signal and checks
5291 if it has been handled. For other handlers, the stressor
5292 checks that the SIGCHLD handler has not been called. This
5293 stress test calls the signal system call directly when pos‐
5294 sible and will try to avoid the C library's attempt to re‐
5295 place signal with the more modern sigaction system call.
5296
5297 --signal-ops N
5298 stop signal stress workers after N rounds of signal handler
5299 setting.
5300
5301 Nested signal handling stressor
5302 --signest N
5303 start N workers that exercise nested signal handling. A
5304 signal is raised and inside the signal handler a different
5305 signal is raised, working through a list of signals to ex‐
5306 ercise. An alternative signal stack is used that is large
5307 enough to handle all the nested signal calls. The -v op‐
5308 tion will log the approximate size of the stack required
5309 and the average stack size per nested call.
5310
5311 --signest-ops N
5312 stop after handling N nested signals.
5313
5314 Pending signals stressor
5315 --sigpending N
5316 start N workers that check if SIGUSR1 signals are pending.
5317 This stressor masks SIGUSR1, generates a SIGUSR1 signal and
5318 uses sigpending(2) to see if the signal is pending. Then it
5319 unmasks the signal and checks if the signal is no longer
5320 pending.
5321
5322 --sigpending-ops N
5323 stop sigpending stress workers after N bogo sigpending
5324 pending/unpending checks.
5325
5326 SIGPIPE stressor
5327 --sigpipe N
5328 start N workers that repeatedly spawn off child processes
5329 that exit before the parent can complete a pipe write, caus‐
5330 ing a SIGPIPE signal. The child process is spawned using
5331 clone(2) if it is available, or using the slower fork(2)
5332 otherwise.
5333
5334 --sigpipe-ops N
5335 stop N workers after N SIGPIPE signals have been caught and
5336 handled.
5337
5338 Signal queueing stressor
5339 --sigq N
5340 start N workers that rapidly send SIGUSR1 signals using
5341 sigqueue(3) to child processes that wait for the signal via
5342 sigwaitinfo(2).
5343
5344 --sigq-ops N
5345 stop sigq stress workers after N bogo signal send opera‐
5346 tions.
5347
5348 Real-time signals stressor
5349 --sigrt N
5350 start N workers that each create child processes to handle
5351 SIGRTMIN to SIGRTMAX real time signals. The parent sends
5352 each child process a RT signal via sigqueue(2) and the child
5353 process waits for this via sigwaitinfo(2). When the child
5354 receives the signal it then sends a RT signal to one of the
5355 other child processes also via sigqueue(2).
5356
5357 --sigrt-ops N
5358 stop sigrt stress workers after N bogo sigqueue signal send
5359 operations.
5360
5361 SIGSEGV stressor
5362 --sigsegv N
5363 start N workers that rapidly create and catch segmentation
5364 faults generated via illegal memory access, illegal vdso
5365 system calls, illegal port reads, illegal interrupts or ac‐
5366 cess to x86 time stamp counter.
5367
5368 --sigsegv-ops N
5369 stop sigsegv stress workers after N bogo segmentation
5370 faults.
5371
5372 Waiting for process signals stressor
5373 --sigsuspend N
5374 start N workers that each spawn off 4 child processes that
5375 wait for a SIGUSR1 signal from the parent using sigsus‐
5376 pend(2). The parent sends SIGUSR1 signals to each child in
5377 rapid succession. Each sigsuspend wakeup is counted as one
5378 bogo operation.
5379
5380 --sigsuspend-ops N
5381 stop sigsuspend stress workers after N bogo sigsuspend
5382 wakeups.
5383
5384 SIGTRAP stressor
5385 --sigtrap N
5386 start N workers that exercise the SIGTRAP signal. For sys‐
5387 tems that support SIGTRAP, the signal is generated using
5388 raise(SIGTRAP). On x86 Linux systems the SIGTRAP is also
5389 generated by an int 3 instruction.
5390
5391 --sigtrap-ops N
5392 stop sigtrap stress workers after N SIGTRAPs have been han‐
5393 dled.
5394
5395 Random memory and processor cache line stressor via a skiplist
5396 --skiplist N
5397 start N workers that store and then search for integers us‐
5398 ing a skiplist. By default, 65536 integers are added and
5399 searched. This is a useful method to exercise random ac‐
5400 cess of memory and processor cache.
5401
5402 --skiplist-ops N
5403 stop the skiplist worker after N skiplist store and search
5404 cycles are completed.
5405
5406 --skiplist-size N
5407 specify the size (number of integers) to store and search
5408 in the skiplist. Size can be from 1K to 4M.
5409
5410 Time interrupts and context switches stressor
5411 --sleep N
5412 start N workers that spawn off multiple threads that each
5413 perform multiple sleeps of ranges 1us to 0.1s. This cre‐
5414 ates multiple context switches and timer interrupts.
5415
5416 --sleep-max P
5417 start P threads per worker. The default is 1024, the maxi‐
5418 mum allowed is 30000.
5419
5420 --sleep-ops N
5421 stop after N sleep bogo operations.
5422
5423 System management interrupts (SMI) stressor
5424 --smi N
5425 start N workers that attempt to generate system management
5426 interrupts (SMIs) into the x86 ring -2 system management
5427 mode (SMM) by exercising the advanced power management
5428 (APM) port 0xb2. This requires the --pathological option
5429 and root privilege and is only implemented on x86 Linux
5430 platforms. This probably does not work in a virtualized en‐
5431 vironment. The stressor will attempt to determine the time
5432 stolen by SMIs with some naïve benchmarking.
5433
5434 --smi-ops N
5435 stop after N attempts to trigger the SMI.
5436
5437 Network socket stressor
5438 -S N, --sock N
5439 start N workers that perform various socket stress activ‐
5440 ity. This involves a pair of client/server processes per‐
5441 forming rapid connect, send and receives and disconnects on
5442 the local host.
5443
5444 --sock-domain D
5445 specify the domain to use, the default is ipv4. Currently
5446 ipv4, ipv6 and unix are supported.
5447
5448 --sock-if NAME
5449 use network interface NAME. If the interface NAME does not
5450 exist, is not up or does not support the domain then the
5451 loopback (lo) interface is used as the default.
5452
5453 --sock-msgs N
5454 send N messages per connect, send/receive, disconnect iter‐
5455 ation. The default is 1000 messages. If N is too small then
5456 the rate is throttled back by the overhead of socket con‐
5457 nect and disconnect (on Linux, one needs to increase
5458 /proc/sys/net/netfilter/nf_conntrack_max to allow more con‐
5459 nections).
5460
5461 --sock-nodelay
5462 This disables the TCP Nagle algorithm, so data segments are
5463 always sent as soon as possible. This stops data from be‐
5464 ing buffered before being transmitted, hence resulting in
5465 poorer network utilisation and more context switches be‐
5466 tween the sender and receiver.
5467
5468 --sock-ops N
5469 stop socket stress workers after N bogo operations.
5470
5471 --sock-opts [ random | send | sendmsg | sendmmsg ]
5472 by default, messages are sent using send(2). This option
5473 allows one to specify the sending method using send(2),
5474 sendmsg(2), sendmmsg(2) or a random selection of one of
5475 these 3 on each iteration. Note that sendmmsg is only
5476 available for Linux systems that support this system call.
5477
5478 --sock-port P
5479 start at socket port P. For N socket worker processes,
5480 ports P to P + N - 1 are used.
5481
5482 --sock-protocol P
5483 Use the specified protocol P, default is tcp. Options are
5484 tcp and mptcp (if supported by the operating system).
5485
5486 --sock-type [ stream | seqpacket ]
5487 specify the socket type to use. The default type is stream.
5488 seqpacket currently only works for the unix socket domain.
5489
5490 --sock-zerocopy
5491 enable zerocopy for send and recv calls if the MSG_ZEROCOPY
5492 is supported.
5493
5494 Socket abusing stressor
5495 --sockabuse N
5496 start N workers that abuse a socket file descriptor with
5497 various file based system calls that don't normally act on
5498 sockets. The kernel should handle these illegal and unexpected
5499 calls gracefully.
5500
5501 --sockabuse-ops N
5502 stop after N iterations of the socket abusing stressor
5503 loop.
5504
5505 --sockabuse-port P
5506 start at socket port P. For N sockabuse worker processes,
5507 ports P to P + N - 1 are used.
5508
5509 Socket diagnostic stressor (Linux)
5510 --sockdiag N
5511 start N workers that exercise the Linux sock_diag netlink
5512 socket diagnostics (Linux only). This currently requests
5513 diagnostics using UDIAG_SHOW_NAME, UDIAG_SHOW_VFS,
5514 UDIAG_SHOW_PEER, UDIAG_SHOW_ICONS, UDIAG_SHOW_RQLEN and
5515 UDIAG_SHOW_MEMINFO for the AF_UNIX family of socket connec‐
5516 tions.
5517
5518 --sockdiag-ops N
5519 stop after receiving N sock_diag diagnostic messages.
5520
5521 Socket file descriptor stressor
5522 --sockfd N
5523 start N workers that pass file descriptors over a UNIX do‐
5524 main socket using the CMSG(3) ancillary data mechanism. For
5525 each worker, a pair of client/server processes is created,
5526 the server opens as many file descriptors on /dev/null as
5527 possible and passes these over the socket to a client that
5528 reads these from the CMSG data and immediately closes the
5529 files.
5530
5531 --sockfd-ops N
5532 stop sockfd stress workers after N bogo operations.
5533
5534 --sockfd-port P
5535 start at socket port P. For N socket worker processes,
5536 ports P to P + N - 1 are used.
5537
5538 Opening network socket stressor
5539 --sockmany N
5540 start N workers that use a client process to attempt to
5541 open as many as 100000 TCP/IP socket connections to a
5542 server on port 10000.
5543
5544 --sockmany-if NAME
5545 use network interface NAME. If the interface NAME does not
5546 exist, is not up or does not support the domain then the
5547 loopback (lo) interface is used as the default.
5548
5549 --sockmany-ops N
5550 stop after N connections.
5551
5552 --sockmany-port P
5553 start at socket port P. For N sockmany worker processes,
5554 ports P to P + N - 1 are used.
5555
5556 Socket I/O stressor
5557 --sockpair N
5558 start N workers that perform socket pair I/O read/writes.
5559 This involves a pair of client/server processes performing
5560 randomly sized socket I/O operations.
5561
5562 --sockpair-ops N
5563 stop socket pair stress workers after N bogo operations.
5564
5565 Softlockup stressor
5566 --softlockup N
5567 start N workers that flip between the "real-time"
5568 SCHED_FIFO and SCHED_RR scheduling policies at the highest
5569 priority to force softlockups. This can only be run with
5570 CAP_SYS_NICE capability and for best results the number of
5571 stressors should be at least the number of online CPUs.
5572 Once running, this is practically impossible to stop and it
5573 will force softlockup issues and may trigger watchdog time‐
5574 out reboots.
5575
5576 --softlockup-ops N
5577 stop softlockup stress workers after N bogo scheduler pol‐
5578 icy changes.
5579
5580 Sparse matrix stressor
5581 --sparsematrix N
5582 start N workers that exercise several sparse matrix im‐
5583 plementations based on hashing, Judy array (for 64 bit sys‐
5584 tems), 2-d circular linked-lists, memory mapped 2-d matrix
5585 (non-sparse), quick hashing (on preallocated nodes) and
5586 red-black tree. The sparse matrix is populated with val‐
5587 ues; random, potentially non-existing values are
5588 read, known existing values are read and known existing
5589 values are marked as zero. By default a 500 × 500 sparse
5590 matrix is used and 5000 items are put into the sparse ma‐
5591 trix making it 2% utilized.
5592
5593 --sparsematrix-items N
5594 populate the sparse matrix with N items. If N is greater
5595 than the number of elements in the sparse matrix then N
5596 will be capped to create a 100% full sparse matrix.
5597
5598 --sparsematrix-method [ all | hash | hashjudy | judy | list | mmap
5599 | qhash | rb | splay ]
5600 specify the type of sparse matrix implementation to use.
5601 The 'all' method uses all the methods and is the default.
5602
5603 Method Description
5604 all exercise with all the sparsematrix stressor meth‐
5605 ods (see below):
5606 hash use a hash table and allocate nodes on the heap
5607 for each unique value at a (x, y) matrix posi‐
5608 tion.
5609 hashjudy use a hash table for x coordinates and a Judy ar‐
5610 ray for y coordinates for values at a (x, y) ma‐
5611 trix position.
5612 judy use a Judy array with a unique 1-to-1 mapping of
5613 (x, y) matrix position into the array.
5614 list use a circular linked-list for sparse y positions
5615 each with circular linked-lists for sparse x po‐
5616 sitions for the (x, y) matrix coordinates.
5617 mmap use a non-sparse mmap of the entire 2-d matrix
5618 space. Only (x, y) matrix positions that are ref‐
5619 erenced will get physically mapped. Note that
5620 large sparse matrices cannot be mmap'd due to
5621 virtual address space limitations, and too many
5622 referenced pages can trigger the out of memory
5623 killer on Linux.
5624 qhash use a hash table with pre-allocated nodes for
5625 each unique value. This is a quick hash table im‐
5626 plementation, nodes are not allocated each time
5627 with calloc and are allocated from a pre-allo‐
5628 cated pool leading to quicker hash table perfor‐
5629 mance than the hash method.
5630 rb use a red-black balanced tree using one tree node
5631 for each unique value at a (x, y) matrix posi‐
5632 tion.
5633 splay use a splay tree using one tree node for each
5634 unique value at a (x, y) matrix position.
5635
5636 --sparsematrix-ops N
5637 stop after N sparsematrix test iterations.
5638
5639 --sparsematrix-size N
5640 use a N × N sized sparse matrix
5641
5642 POSIX process spawn (posix_spawn) stressor (Linux)
5643 --spawn N
5644 start N workers that continually spawn children using
5645 posix_spawn(3) that exec stress-ng and then exit almost im‐
5646 mediately. Currently Linux only.
5647
5648 --spawn-ops N
5649 stop spawn stress workers after N bogo spawns.
5650
5651 Splice stressor (Linux)
5652 --splice N
5653 move data from /dev/zero to /dev/null through a pipe with‐
5654 out any copying between kernel address space and user ad‐
5655 dress space using splice(2). This is only available for
5656 Linux.
5657
5658 --splice-bytes N
5659 transfer N bytes per splice call, the default is 64K. One
5660 can specify the size as % of total available memory or in
5661 units of Bytes, KBytes, MBytes and GBytes using the suffix
5662 b, k, m or g.
5663
5664 --splice-ops N
5665 stop after N bogo splice operations.
5666
5667 Stack stressor
5668 --stack N
5669 start N workers that rapidly cause and catch stack over‐
5670 flows by use of large recursive stack allocations. Much
5671 like the brk stressor, this can eat up pages rapidly and
5672 may trigger the kernel OOM killer on the process, however,
5673 the killed stressor is respawned again by a monitoring par‐
5674 ent process.
5675
5676 --stack-fill
5677 the default action is to touch the lowest page on each
5678 stack allocation. This option touches all the pages by
5679 filling the new stack allocation with zeros which forces
5680 physical pages to be allocated and hence is more aggres‐
5681 sive.
5682
5683 --stack-mlock
5684 attempt to mlock stack pages into memory causing more mem‐
5685 ory pressure by preventing pages from being swapped out.
5686
5687 --stack-ops N
5688 stop stack stress workers after N bogo stack overflows.
5689
5690 --stack-pageout
5691 force stack pages out to swap (available when madvise(2)
5692 supports MADV_PAGEOUT).
5693
5694 --stack-unmap
5695 unmap a single page in the middle of a large buffer allo‐
5696 cated on the stack on each stack allocation. This forces
5697 the stack mapping into multiple separate allocation map‐
5698 pings.
5699
5700 Dirty page and stack exception stressor
5701 --stackmmap N
5702 start N workers that use a 2MB stack that is memory mapped
5703 onto a temporary file. A recursive function works down the
5704 stack and flushes dirty stack pages back to the memory
5705 mapped file using msync(2) until the end of the stack is
5706 reached (stack overflow). This exercises dirty page and
5707 stack exception handling.
5708
5709 --stackmmap-ops N
5710 stop workers after N stack overflows have occurred.
5711
5712 Libc string functions stressor
5713 --str N
5714 start N workers that exercise various libc string functions
5715 on random strings.
5716
5717 --str-method strfunc
5718 select a specific libc string function to stress. Available
5719 string functions to stress are: all, index, rindex, str‐
5720 casecmp, strcat, strchr, strcoll, strcmp, strcpy, strlen,
5721 strncasecmp, strncat, strncmp, strrchr and strxfrm. See
5722 string(3) for more information on these string functions.
5723 The 'all' method is the default and will exercise all the
5724 string methods.
5725
5726 --str-ops N
5727 stop after N bogo string operations.
5728
5729 STREAM memory stressor
5730 --stream N
5731 start N workers exercising a memory bandwidth stressor very
5732 loosely based on the STREAM "Sustainable Memory Bandwidth
5733 in High Performance Computers" benchmarking tool by John D.
5734 McCalpin, Ph.D. This stressor allocates buffers that are at
5735 least 4 times the size of the CPU L2 cache and continually
5736 performs rounds of following computations on large arrays
5737 of double precision floating point numbers:
5738
5739 Operation Description
5740 copy c[i] = a[i]
5741 scale b[i] = scalar * c[i]
5742 add c[i] = a[i] + b[i]
5743 triad a[i] = b[i] + (c[i] * scalar)
5744
5745 Since this is loosely based on a variant of the STREAM
5746 benchmark code, DO NOT submit results based on this as it
5747 is intended in stress-ng just to stress memory and com‐
5748 pute and NOT for accurate tuned or non-tuned STREAM
5749 benchmarking whatsoever. Use the official STREAM
5750 benchmarking tool if you desire accurate and standardised
5751 STREAM benchmarks.
5752
5753 The stressor calculates the memory read rate, memory write
5754 rate and floating point operations rate. These will differ
5755 from the maximum theoretical read/write/compute rates be‐
5756 cause of loop overheads and the use of volatile pointers to
5757 ensure the compiler does not optimize out stores.
5758
5759 --stream-index N
5760 specify number of stream indices used to index into the
5761 data arrays a, b and c. This adds indirection into the
5762 data lookup by using randomly shuffled indexing into the
5763 three data arrays. Level 0 (no indexing) is the default,
5764 and 3 is where all 3 arrays are indexed via 3 different
5765 randomly shuffled indexes. The higher the index setting the
5766 more impact this has on L1, L2 and L3 caching and hence
5767 forces higher memory read/write latencies.
5768
5769 --stream-l3-size N
5770 Specify the CPU Level 3 cache size in bytes. One can spec‐
5771 ify the size in units of Bytes, KBytes, MBytes and GBytes
5772 using the suffix b, k, m or g. If the L3 cache size is not
5773 provided, then stress-ng will attempt to determine the
5774 cache size, and failing this, will default the size to 4MB.
5775
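           As a rough sketch of the sizing rule above (using the 4MB fallback
           that applies when the L3 cache size cannot be determined), the
           minimum buffer allocation works out as:

           ```shell
           # Minimum STREAM buffer size: at least 4 x the CPU L3 cache size.
           # Assumes the 4MB fallback used when the cache size is unknown.
           l3_size=$((4 * 1024 * 1024))      # 4MB fallback L3 size in bytes
           min_buffer=$((l3_size * 4))       # buffers are at least 4 x L3
           echo "${min_buffer}"              # 16777216 bytes (16MB)
           ```

           An explicit equivalent invocation would be, for example,
           stress-ng --stream 4 --stream-l3-size 4m (an illustrative command
           line, not from this manual).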
5776 --stream-mlock
5777 attempt to mlock the stream buffers into memory to prevent
5778 them from being swapped out.
5779
5780 --stream-madvise [ hugepage | nohugepage | normal ]
5781 Specify the madvise options used on the memory mapped buf‐
5782                 fer used in the stream stressor. Non-Linux systems will
5783 only have the 'normal' madvise advice. The default is 'nor‐
5784 mal'.
5785
5786 --stream-ops N
5787 stop after N stream bogo operations, where a bogo operation
5788 is one round of copy, scale, add and triad operations.
5789
5790 Swap partitions stressor (Linux)
5791 --swap N
5792 start N workers that add and remove small randomly sizes
5793 swap partitions (Linux only). Note that if too many swap
5794 partitions are added then the stressors may exit with exit
5795 code 3 (not enough resources). Requires CAP_SYS_ADMIN to
5796 run.
5797
5798 --swap-ops N
5799 stop the swap workers after N swapon/swapoff iterations.
5800
5801 Context switching between mutually tied processes stressor
5802 -s N, --switch N
5803 start N workers that force context switching between two
5804 mutually blocking/unblocking tied processes. By default
5805 message passing over a pipe is used, but different methods
5806 are available.
5807
5808 --switch-freq F
5809 run the context switching at the frequency of F context
5810 switches per second. Note that the specified switch rate
5811 may not be achieved because of CPU speed and memory band‐
5812 width limitations.
5813
5814 --switch-method [ mq | pipe | sem-sysv ]
5815 select the preferred context switch block/run synchroniza‐
5816 tion method, these are as follows:
5817
5818 Method Description
5819 mq use posix message queue with a 1 item size. Mes‐
5820 sages are passed between a sender and receiver
5821 process.
5822 pipe single character messages are passed down a sin‐
5823 gle character sized pipe between a sender and re‐
5824 ceiver process.
5825 sem-sysv a SYSV semaphore is used to block/run two pro‐
5826 cesses.
5827
5828 --switch-ops N
5829 stop context switching workers after N bogo operations.
5830
5831 Symlink stressor
5832 --symlink N
5833 start N workers creating and removing symbolic links.
5834
5835 --symlink-ops N
5836 stop symlink stress workers after N bogo operations.
5837
5838 --symlink-sync
5839 sync dirty data and metadata to disk.
5840
5841 Partial file syncing (sync_file_range) stressor
5842 --sync-file N
5843 start N workers that perform a range of data syncs across a
5844 file using sync_file_range(2). Three mixes of syncs are
5845 performed, from start to the end of the file, from end of
5846 the file to the start, and a random mix. A random selection
5847 of valid sync types are used, covering the
5848 SYNC_FILE_RANGE_WAIT_BEFORE, SYNC_FILE_RANGE_WRITE and
5849 SYNC_FILE_RANGE_WAIT_AFTER flag bits.
5850
5851 --sync-file-bytes N
5852 specify the size of the file to be sync'd. One can specify
5853 the size as % of free space on the file system in units of
5854 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m
5855 or g.
5856
5857 --sync-file-ops N
5858 stop sync-file workers after N bogo sync operations.
5859
5860 CPU synchronized loads stressor
5861 --syncload N
5862 start N workers that produce sporadic short lived loads
5863 synchronized across N stressor processes. By default re‐
5864 peated cycles of 125ms busy load followed by 62.5ms sleep
5865 occur across all the workers in step to create bursts of
5866 load to exercise C state transitions and CPU frequency
5867 scaling. The busy load and sleeps have +/-10% jitter added
5868 to try exercising scheduling patterns.
5869
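           The default timings above imply a fixed duty cycle; a quick sketch
           in microseconds (ignoring the +/-10% jitter):

           ```shell
           # Default syncload cycle: 125ms busy + 62.5ms sleep (no jitter).
           busy_us=125000
           sleep_us=62500
           cycle_us=$((busy_us + sleep_us))       # 187500 us per cycle
           echo "$((busy_us * 100 / cycle_us))%"  # busy ~66% of each cycle
           ```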
5870 --syncload-msbusy M
5871 specify the busy load duration in milliseconds.
5872
5873 --syncload-mssleep M
5874 specify the sleep duration in milliseconds.
5875
5876 --syncload-ops N
5877 stop syncload workers after N load/sleep cycles.
5878
5879 System calls bad address and fault handling stressor
5880 --sysbadaddr N
5881 start N workers that pass bad addresses to system calls to
5882 exercise bad address and fault handling. The addresses used
5883 are null pointers, read only pages, write only pages, un‐
5884 mapped addresses, text only pages, unaligned addresses and
5885 top of memory addresses.
5886
5887 --sysbadaddr-ops N
5888 stop the sysbadaddr stressors after N bogo system calls.
5889
5890 System calls stressor
5891 --syscall N
5892 start N workers that exercise a range of available system
5893 calls. System calls that fail due to lack of capabilities
5894 or errors are ignored. The stressor will try to maximize
5895                 the rate of system calls being executed based on the entire
5896 time taken to setup, run and cleanup after each system
5897 call.
5898
5899 --syscall-method method
5900                 select the choice of system calls to be executed based on
5901                 the fastest test duration times. Note that this includes the
5902 time to setup, execute the system call and cleanup after‐
5903 wards. The available methods are as follows:
5904
5905 Method Description
5906 all select all the available system calls
5907 fast10 select the fastest 10% system call tests
5908 fast25 select the fastest 25% system call tests
5909 fast50 select the fastest 50% system call tests
5910 fast75 select the fastest 75% system call tests
5911 fast90 select the fastest 90% system call tests
5912                 geomean1   select tests that are less or equal to the geo‐
5913                            metric mean of all the test times
5914                 geomean2   select tests that are less or equal to 2 × the
5915                            geometric mean of all the test times
5916                 geomean3   select tests that are less or equal to 3 × the
5917                            geometric mean of all the test times
5918
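           As an illustration of the geometric-mean selection (using
           hypothetical test times, not real measurements): the geometric mean
           of test times 100µs and 400µs is 200µs, so a 1× geomean cut-off
           would keep only tests at or below 200µs:

           ```shell
           # Geometric mean of two hypothetical syscall test times (us).
           # A 1x cut-off keeps tests <= this value; 2x and 3x cut-offs
           # scale the threshold accordingly.
           awk 'BEGIN { t1 = 100; t2 = 400; print sqrt(t1 * t2) }'  # 200
           ```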
5919 --syscall-ops N
5920 stop after N system calls
5921
5922 System information stressor
5923 --sysinfo N
5924 start N workers that continually read system and process
5925 specific information. This reads the process user and sys‐
5926 tem times using the times(2) system call. For Linux sys‐
5927 tems, it also reads overall system statistics using the
5928 sysinfo(2) system call and also the file system statistics
5929 for all mounted file systems using statfs(2).
5930
5931 --sysinfo-ops N
5932 stop the sysinfo workers after N bogo operations.
5933
5934 System calls with invalid arguments stressor (Linux)
5935 --sysinval N
5936 start N workers that exercise system calls in random order
5937 with permutations of invalid arguments to force kernel er‐
5938 ror handling checks. The stress test autodetects system
5939 calls that cause processes to crash or exit prematurely and
5940 will blocklist these after several repeated breakages. Sys‐
5941 tem call arguments that cause system calls to work success‐
5942                 fully are also detected and blocklisted. Linux only.
5943
5944 --sysinval-ops N
5945 stop sysinval workers after N system call attempts.
5946
5947 /sys stressor (Linux)
5948 --sysfs N
5949 start N workers that recursively read files from /sys
5950 (Linux only). This may cause specific kernel drivers to
5951 emit messages into the kernel log.
5952
5953 --sysfs-ops N
5954 stop sysfs reading after N bogo read operations. Note,
5955 since the number of entries may vary between kernels, this
5956 bogo ops metric is probably very misleading.
5957
5958 Tee stressor (Linux)
5959 --tee N
5960 move data from a writer process to a reader process through
5961 pipes and to /dev/null without any copying between kernel
5962 address space and user address space using tee(2). This is
5963 only available for Linux.
5964
5965 --tee-ops N
5966 stop after N bogo tee operations.
5967
5968 Timer event stressor (Linux)
5969 -T N, --timer N
5970 start N workers creating timer events at a default rate of
5971                 1 MHz (Linux only); this can create many thousands of
5972 timer clock interrupts. Each timer event is caught by a
5973 signal handler and counted as a bogo timer op.
5974
5975 --timer-freq F
5976 run timers at F Hz; range from 1 to 1000000000 Hz (Linux
5977 only). By selecting an appropriate frequency stress-ng can
5978 generate hundreds of thousands of interrupts per second.
5979 Note: it is also worth using --timer-slack 0 for high fre‐
5980 quencies to stop the kernel from coalescing timer events.
5981
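           For example, four timer workers at the default 1 MHz attempt on the
           order of 4 million timer events per second; a rough upper-bound
           estimate (actual rates depend on CPU speed and timer coalescing):

           ```shell
           # Approximate upper bound on timer events per second:
           # workers x frequency (the kernel may coalesce or drop events).
           workers=4
           freq_hz=1000000                  # default timer rate is 1 MHz
           echo "$((workers * freq_hz))"    # 4000000 events/s at best
           ```

           An illustrative invocation (not from this manual) would be
           stress-ng --timer 4 --timer-freq 1000000 --timer-slack 0 -t 60.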
5982 --timer-ops N
5983 stop timer stress workers after N bogo timer events (Linux
5984 only).
5985
5986 --timer-rand
5987 select a timer frequency based around the timer frequency
5988 +/- 12.5% random jitter. This tries to force more variabil‐
5989 ity in the timer interval to make the scheduling less pre‐
5990 dictable.
5991
5992 Timerfd stressor (Linux)
5993 --timerfd N
5994 start N workers creating timerfd events at a default rate
5995                 of 1 MHz (Linux only); this can create many thousands of
5996 timer clock events. Timer events are waited for on the
5997 timer file descriptor using select(2) and then read and
5998 counted as a bogo timerfd op.
5999
6000          --timerfd-fds N
6001 try to use a maximum of N timerfd file descriptors per
6002 stressor.
6003
6004 --timerfd-freq F
6005 run timers at F Hz; range from 1 to 1000000000 Hz (Linux
6006 only). By selecting an appropriate frequency stress-ng can
6007 generate hundreds of thousands of interrupts per second.
6008
6009 --timerfd-ops N
6010 stop timerfd stress workers after N bogo timerfd events
6011 (Linux only).
6012
6013 --timerfd-rand
6014 select a timerfd frequency based around the timer frequency
6015 +/- 12.5% random jitter. This tries to force more variabil‐
6016 ity in the timer interval to make the scheduling less pre‐
6017 dictable.
6018
6019 Translation lookaside buffer shootdowns stressor
6020 --tlb-shootdown N
6021 start N workers that force Translation Lookaside Buffer
6022 (TLB) shootdowns. This is achieved by creating up to 16
6023 child processes that all share a region of memory and these
6024                 processes are distributed amongst the available CPUs. The pro‐
6025 cesses adjust the page mapping settings causing TLBs to be
6026 force flushed on the other processors, causing the TLB
6027 shootdowns.
6028
6029 --tlb-shootdown-ops N
6030 stop after N bogo TLB shootdown operations are completed.
6031
6032 Tmpfs stressor
6033 --tmpfs N
6034 start N workers that create a temporary file on an avail‐
6035 able tmpfs file system and perform various file based mmap
6036 operations upon it.
6037
6038 --tmpfs-mmap-async
6039 enable file based memory mapping and use asynchronous
6040 msync'ing on each page, see --tmpfs-mmap-file.
6041
6042 --tmpfs-mmap-file
6043 enable tmpfs file based memory mapping and by default use
6044 synchronous msync'ing on each page.
6045
6046 --tmpfs-ops N
6047 stop tmpfs stressors after N bogo mmap operations.
6048
6049 Touching files stressor
6050 --touch N
6051                 start N workers touching files using open(2) or creat(2) and then closing
6052 and unlinking them. The filename contains the bogo-op num‐
6053 ber and is incremented on each touch operation, hence this
6054 fills the dentry cache. Note that the user time and system
6055 time may be very low as most of the run time is waiting for
6056 file I/O and this produces very large bogo-op rates for the
6057 very low CPU time used.
6058
6059 --touch-method [ random | open | creat ]
6060                 select the method used to create the file, either randomly
6061                 using open(2) or creat(2), just using open(2) with the
6062                 O_CREAT open flag, or with creat(2).
6063
6064 --touch-ops N
6065 stop the touch workers after N file touches.
6066
6067 --touch-opts all, direct, dsync, excl, noatime, sync
6068 specify various file open options as a comma separated
6069 list. Options are as follows:
6070
6071 Option Description
6072 all use all the open options, namely direct, dsync,
6073 excl, noatime and sync
6074 direct try to minimize cache effects of the I/O to and
6075 from this file, using the O_DIRECT open flag.
6076 dsync ensure output has been transferred to underlying
6077 hardware and file metadata has been updated using
6078 the O_DSYNC open flag.
6079 excl fail if file already exists (it should not).
6080 noatime do not update the file last access time if the
6081 file is read.
6082 sync ensure output has been transferred to underlying
6083 hardware using the O_SYNC open flag.
6084
6085 Tree data structures stressor
6086 --tree N
6087 start N workers that exercise tree data structures. The de‐
6088 fault is to add, find and remove 250,000 64 bit integers
6089 into AVL (avl), Red-Black (rb), Splay (splay), btree and
6090 binary trees. The intention of this stressor is to exer‐
6091 cise memory and cache with the various tree operations.
6092
6093 --tree-method [ all | avl | binary | btree | rb | splay ]
6094 specify the tree to be used. By default, all the trees are
6095 used (the 'all' option).
6096
6097 --tree-ops N
6098 stop tree stressors after N bogo ops. A bogo op covers the
6099 addition, finding and removing all the items into the
6100 tree(s).
6101
6102 --tree-size N
6103 specify the size of the tree, where N is the number of 64
6104 bit integers to be added into the tree.
6105
6106 Trigonometric functions stressor
6107 --trig N
6108 start N workers that exercise sin, cos, sincos (where
6109 available) and tan trigonometric functions using float,
6110 double and long double floating point variants. Each func‐
6111 tion is exercised 10,000 times per bogo-operation.
6112
6113 --trig-method function
6114 specify a trigonometric stress function. By default, all
6115 the functions are exercised sequentially, however one can
6116 specify just one function to be used if required. Avail‐
6117 able options are as follows:
6118
6119 Method Description
6120 all iterate through all of the following trigonometric
6121 functions
6122 cos cosine (double precision)
6123 cosf cosine (float precision)
6124 cosl cosine (long double precision)
6125 sin sine (double precision)
6126 sinf sine (float precision)
6127 sinl sine (long double precision)
6128 sincos sine and cosine (double precision)
6129 sincosf sine and cosine (float precision)
6130 sincosl sine and cosine (long double precision)
6131 tan tangent (double precision)
6132 tanf tangent (float precision)
6133 tanl tangent (long double precision)
6134
6135 --trig-ops N
6136 stop after N bogo-operations.
6137
6138 Time stamp counter (TSC) stressor
6139 --tsc N
6140 start N workers that read the Time Stamp Counter (TSC) 256
6141 times per loop iteration (bogo operation). This exercises
6142                 the rdtsc instruction for x86, the mftb instruction for
6143 ppc64, the rdcycle instruction for RISC-V and the tick in‐
6144 struction on SPARC.
6145
6146 --tsc-lfence
6147 add lfence after each tsc read to force serialization (x86
6148 only).
6149
6150 --tsc-ops N
6151 stop the tsc workers after N bogo operations are completed.
6152
6153 Binary tree stressor
6154 --tsearch N
6155 start N workers that insert, search and delete 32 bit inte‐
6156 gers on a binary tree using tsearch(3), tfind(3) and
6157 tdelete(3). By default, there are 65536 randomized integers
6158 used in the tree. This is a useful method to exercise ran‐
6159 dom access of memory and processor cache.
6160
6161 --tsearch-ops N
6162 stop the tsearch workers after N bogo tree operations are
6163 completed.
6164
6165 --tsearch-size N
6166 specify the size (number of 32 bit integers) in the array
6167 to tsearch. Size can be from 1K to 4M.
6168
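           A rough sketch of the key data touched by the default tree (65536
           32 bit integers; real memory use is larger because of the per-node
           pointers tsearch(3) allocates, which this estimate ignores):

           ```shell
           # Default tsearch working set: 65536 x 32 bit (4 byte) integers.
           # Per-node tree overhead is not counted in this estimate.
           n=65536
           int_bytes=4
           echo "$((n * int_bytes))"        # 262144 bytes (256KB) of keys
           ```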
6169 Network tunnel stressor
6170 --tun N
6171                 start N workers that create a network tunnel device, send
6172                 and receive packets over the tunnel using UDP and then
6173                 destroy it. A new random 192.168.*.* IPv4 address is
6174 used each time a tunnel is created.
6175
6176 --tun-ops N
6177 stop after N iterations of creating/sending/receiving/de‐
6178 stroying a tunnel.
6179
6180 --tun-tap
6181 use network tap device using level 2 frames (bridging)
6182 rather than a tun device for level 3 raw packets (tun‐
6183 nelling).
6184
6185 UDP network stressor
6186 --udp N
6187 start N workers that transmit data using UDP. This involves
6188                 a pair of client/server processes performing rapid connect,
6189                 send, receive and disconnect cycles on the local host.
6190
6191 --udp-domain D
6192 specify the domain to use, the default is ipv4. Currently
6193 ipv4 and ipv6 are supported.
6194
6195 --udp-gro
6196 enable UDP-GRO (Generic Receive Offload) if supported.
6197
6198 --udp-if NAME
6199 use network interface NAME. If the interface NAME does not
6200 exist, is not up or does not support the domain then the
6201 loopback (lo) interface is used as the default.
6202
6203 --udp-lite
6204 use the UDP-Lite (RFC 3828) protocol (only for ipv4 and
6205 ipv6 domains).
6206
6207 --udp-ops N
6208 stop udp stress workers after N bogo operations.
6209
6210 --udp-port P
6211                 start at port P. For N udp worker processes, ports P to
6212                 P + N - 1 are used. By default, ports 7000 upwards are used.
6213
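           For example, assuming one port per worker, 4 udp workers starting
           at the default port 7000 use ports 7000 through 7003:

           ```shell
           # Port range used by N udp workers starting at port P.
           P=7000
           N=4
           echo "$P to $((P + N - 1))"      # prints "7000 to 7003"
           ```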
6214 UDP flooding stressor
6215 --udp-flood N
6216 start N workers that attempt to flood the host with UDP
6217                 packets to random ports. The IP address of the packets is
6218                 currently not spoofed. This is only available on systems
6219 that support AF_PACKET.
6220
6221 --udp-flood-domain D
6222 specify the domain to use, the default is ipv4. Currently
6223 ipv4 and ipv6 are supported.
6224
6225 --udp-flood-if NAME
6226 use network interface NAME. If the interface NAME does not
6227 exist, is not up or does not support the domain then the
6228 loopback (lo) interface is used as the default.
6229
6230 --udp-flood-ops N
6231 stop udp-flood stress workers after N bogo operations.
6232
6233 Umount stressor
6234 --umount N
6235                 start N workers that exercise mounting and racy unmount‐
6236                 ing of small tmpfs and ramfs file systems. Three child pro‐
6237                 cesses are invoked, one to mount, another to force umount
6238                 and a third to exercise /proc/mounts. Small random delays
6239 are used between mount and umount calls to try to trigger
6240 race conditions on the umount calls.
6241
6242 --umount-ops N
6243 stop umount workers after N successful bogo mount/umount
6244 operations.
6245
6246 Unshare stressor (Linux)
6247 --unshare N
6248 start N workers that each fork off 32 child processes, each
6249 of which exercises the unshare(2) system call by disassoci‐
6250 ating parts of the process execution context. (Linux only).
6251
6252 --unshare-ops N
6253 stop after N bogo unshare operations.
6254
6255 Uprobe stressor (Linux)
6256 --uprobe N
6257 start N workers that trace the entry to libc function get‐
6258 pid() using the Linux uprobe kernel tracing mechanism. This
6259 requires CAP_SYS_ADMIN capabilities and a modern Linux up‐
6260 robe capable kernel.
6261
6262 --uprobe-ops N
6263 stop uprobe tracing after N trace events of the function
6264 that is being traced.
6265
6266 /dev/urandom stressor (Linux)
6267 -u N, --urandom N
6268 start N workers reading /dev/urandom (Linux only). This
6269 will load the kernel random number source.
6270
6271 --urandom-ops N
6272 stop urandom stress workers after N urandom bogo read oper‐
6273 ations (Linux only).
6274
6275 Page faults stressor (Linux)
6276 --userfaultfd N
6277 start N workers that generate write page faults on a small
6278 anonymously mapped memory region and handle these faults
6279 using the user space fault handling via the userfaultfd
6280 mechanism. This will generate a large quantity of major
6281 page faults and also context switches during the handling
6282 of the page faults. (Linux only).
6283
6284 --userfaultfd-bytes N
6285 mmap N bytes per userfaultfd worker to page fault on, the
6286 default is 16MB. One can specify the size as % of total
6287 available memory or in units of Bytes, KBytes, MBytes and
6288 GBytes using the suffix b, k, m or g.
6289
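           As a sketch, with the default 16MB region and 4KB pages (the page
           size is an assumption here; it varies by architecture), each
           mapping covers 4096 pages to write-fault on:

           ```shell
           # Pages in the default userfaultfd region, assuming 4KB pages.
           region=$((16 * 1024 * 1024))     # default region size is 16MB
           page=4096                        # assumed page size in bytes
           echo "$((region / page))"        # 4096 pages to fault on
           ```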
6290 --userfaultfd-ops N
6291 stop userfaultfd stress workers after N page faults.
6292
6293   SIGSYS stressor
6294 --usersyscall N
6295 start N workers that exercise the Linux prctl userspace
6296 system call mechanism. A userspace system call is handled
6297 by a SIGSYS signal handler and exercised with the system
6298 call disabled (ENOSYS) and enabled (via SIGSYS) using prctl
6299 PR_SET_SYSCALL_USER_DISPATCH.
6300
6301 --usersyscall-ops N
6302 stop after N successful userspace syscalls via a SIGSYS
6303 signal handler.
6304
6305 File timestamp stressor
6306 --utime N
6307 start N workers updating file timestamps. This is mainly
6308 CPU bound when the default is used as the system flushes
6309 metadata changes only periodically.
6310
6311 --utime-fsync
6312 force metadata changes on each file timestamp update to be
6313 flushed to disk. This forces the test to become I/O bound
6314 and will result in many dirty metadata writes.
6315
6316 --utime-ops N
6317 stop utime stress workers after N utime bogo operations.
6318
6319 Virtual dynamic shared object stressor
6320 --vdso N
6321 start N workers that repeatedly call each of the system
6322 call functions in the vDSO (virtual dynamic shared object).
6323 The vDSO is a shared library that the kernel maps into the
6324 address space of all user-space applications to allow fast
6325                 access to kernel data for some system calls without the
6326                 need to perform an expensive system call.
6327
6328 --vdso-func F
6329 Instead of calling all the vDSO functions, just call the
6330 vDSO function F. The functions depend on the kernel being
6331 used, but are typically clock_gettime, getcpu, gettimeofday
6332 and time.
6333
6334 --vdso-ops N
6335 stop after N vDSO functions calls.
6336
6337 Vector floating point operations stressor
6338 --vecfp N
6339                 start N workers that exercise floating point (single and
6340 double precision) addition, multiplication, division and
6341 negation on vectors of 128, 64, 32, 16 and 8 floating point
6342 values. The -v option will show the approximate throughput
6343                 in millions of floating point operations per second for
6344                 each operation. For x86, the gcc/clang target clones at‐
6345                 tribute has been used to produce vector optimizations for
6346 a range of mmx, sse, avx and processor features.
6347
6348 --vecfp-method method
6349 specify a vecfp stress method. By default, all the stress
6350 methods are exercised sequentially, however one can specify
6351 just one method to be used if required.
6352
6353 Method Description
6354 all iterate through all of the following vector
6355 methods
6356 floatv128add addition of a vector of 128 single precision
6357 floating point values
6358 floatv64add addition of a vector of 64 single precision
6359 floating point values
6360 floatv32add addition of a vector of 32 single precision
6361 floating point values
6362 floatv16add addition of a vector of 16 single precision
6363 floating point values
6364 floatv8add addition of a vector of 8 single precision
6365 floating point values
6366 floatv128mul multiplication of a vector of 128 single
6367 precision floating point values
6368 floatv64mul multiplication of a vector of 64 single pre‐
6369 cision floating point values
6370 floatv32mul multiplication of a vector of 32 single pre‐
6371 cision floating point values
6372 floatv16mul multiplication of a vector of 16 single pre‐
6373 cision floating point values
6374 floatv8mul multiplication of a vector of 8 single pre‐
6375 cision floating point values
6376 floatv128div division of a vector of 128 single precision
6377 floating point values
6378 floatv64div division of a vector of 64 single precision
6379 floating point values
6380 floatv32div division of a vector of 32 single precision
6381 floating point values
6382 floatv16div division of a vector of 16 single precision
6383 floating point values
6384 floatv8div division of a vector of 8 single precision
6385 floating point values
6386
6387 doublev128add addition of a vector of 128 double precision
6388 floating point values
6389 doublev64add addition of a vector of 64 double precision
6390 floating point values
6391 doublev32add addition of a vector of 32 double precision
6392 floating point values
6393 doublev16add addition of a vector of 16 double precision
6394 floating point values
6395 doublev8add addition of a vector of 8 double precision
6396 floating point values
6397 doublev128mul multiplication of a vector of 128 double
6398 precision floating point values
6399 doublev64mul multiplication of a vector of 64 double pre‐
6400 cision floating point values
6401 doublev32mul multiplication of a vector of 32 double pre‐
6402 cision floating point values
6403 doublev16mul multiplication of a vector of 16 double pre‐
6404 cision floating point values
6405 doublev8mul multiplication of a vector of 8 double pre‐
6406 cision floating point values
6407 doublev128div division of a vector of 128 double precision
6408 floating point values
6409 doublev64div division of a vector of 64 double precision
6410 floating point values
6411 doublev32div division of a vector of 32 double precision
6412 floating point values
6413 doublev16div division of a vector of 16 double precision
6414 floating point values
6415 doublev8div division of a vector of 8 double precision
6416 floating point values
6417 doublev128neg negation of a vector of 128 double precision
6418 floating point values
6419 doublev64neg negation of a vector of 64 double precision
6420 floating point values
6421 doublev32neg negation of a vector of 32 double precision
6422 floating point values
6423 doublev16neg negation of a vector of 16 double precision
6424 floating point values
6425 doublev8neg negation of a vector of 8 double precision
6426 floating point values
6427
6428 --vecfp-ops N
6429 stop after N vector floating point bogo-operations. Each
6430 bogo-op is equivalent to 65536 loops of 2 vector opera‐
6431 tions. For example, one bogo-op on a 16 wide vector is
6432 equivalent to 65536 × 2 × 16 floating point operations.
6433
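           The bogo-op accounting above can be checked directly; for the 16
           wide vector case:

           ```shell
           # One vecfp bogo-op = 65536 loops x 2 vector ops x vector width.
           loops=65536
           ops_per_loop=2
           width=16
           echo "$((loops * ops_per_loop * width))"  # 2097152 FP operations
           ```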
6434 Vector math operations stressor
6435 --vecmath N
6436 start N workers that perform various unsigned integer math
6437 operations on various 128 bit vectors. A mix of vector math
6438 operations are performed on the following vectors: 16 × 8
6439 bits, 8 × 16 bits, 4 × 32 bits, 2 × 64 bits. The metrics
6440 produced by this mix depend on the processor architecture
6441 and the vector math optimisations produced by the compiler.
6442
6443 --vecmath-ops N
6444 stop after N bogo vector integer math operations.
6445
6446 Shuffled vector math operations stressor
6447 --vecshuf N
6448 start N workers that shuffle data on various 64 byte vec‐
6449 tors comprised of 8, 16, 32, 64 and 128 bit unsigned inte‐
6450 gers. The integers are shuffled around the vector with 4
6451 shuffle operations per loop, 65536 loops make up one bogo-
6452 op of shuffling. The data shuffling rates and shuffle oper‐
6453 ation rates are logged when using the -v option. This
6454 stressor exercises vector load, shuffle/permute, pack‐
6455 ing/unpacking and store operations.
6456
6457 --vecshuf-method method
6458 specify a vector shuffling stress method. By default, all
6459 the stress methods are exercised sequentially, however one
6460 can specify just one method to be used if required.
6461
6462 Method Description
6463 all iterate through all of the following vector methods
6464 u8x64 shuffle a vector of 64 unsigned 8 bit integers
6465 u16x32 shuffle a vector of 32 unsigned 16 bit integers
6466 u32x16 shuffle a vector of 16 unsigned 32 bit integers
6467 u64x8 shuffle a vector of 8 unsigned 64 bit integers
6468 u128x4 shuffle a vector of 4 unsigned 128 bit integers
6469 (when supported)
6470
6471 --vecshuf-ops N
6472                 stop after N bogo vector shuffle ops. One bogo-op is equiv‐
6473                 alent to 4 × 65536 vector shuffle operations on 64 bytes of
6474 vector data.
6475
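           The shuffle accounting above works out as follows per bogo-op:

           ```shell
           # One vecshuf bogo-op = 4 shuffles per loop x 65536 loops,
           # each shuffle operating on a 64 byte vector.
           shuffles=$((4 * 65536))
           echo "${shuffles}"               # 262144 shuffle operations
           echo "$((shuffles * 64))"        # 16777216 bytes shuffled
           ```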
6476 Wide vector math operations stressor
6477 --vecwide N
6478 start N workers that perform various 8 bit math operations
6479 on vectors of 4, 8, 16, 32, 64, 128, 256, 512, 1024 and
6480 2048 bytes. With the -v option the relative compute perfor‐
6481 mance vs the expected compute performance based on total
6482 run time is shown for the first vecwide worker. The vecwide
6483 stressor exercises various processor vector instruction
6484 mixes and how well the compiler can map the vector opera‐
6485 tions to the target instruction set.
6486
6487 --vecwide-ops N
6488 stop after N bogo vector operations (2048 iterations of a
6489 mix of vector instruction operations).
6490
6491   File based authenticity protection (verity) stressor
6492 --verity N
6493                 start N workers that exercise read-only file based authen‐
6494                 ticity protection using the verity ioctls FS_IOC_ENABLE_VER‐
6495 ITY and FS_IOC_MEASURE_VERITY. This requires file systems
6496 with verity support (currently ext4 and f2fs on Linux) with
6497                 the verity feature enabled. The test attempts to create a
6498 small file with multiple small extents and enables verity
6499 on the file and verifies it. It also checks to see if the
6500 file has verity enabled with the FS_VERITY_FL bit set on
6501 the file flags.
6502
6503 --verity-ops N
6504 stop the verity workers after N file create, enable verity,
6505 check verity and unlink cycles.
6506
6507 vfork stressor
6508 --vfork N
6509 start N workers continually vforking children that immedi‐
6510 ately exit.
6511
6512 --vfork-max P
6513 create P processes and then wait for them to exit per iter‐
6514 ation. The default is just 1; higher values will create
6515 many temporary zombie processes that are waiting to be
6516 reaped. One can potentially fill up the process table using
6517 high values for --vfork-max and --vfork.
6518
6519 --vfork-ops N
6520 stop vfork stress workers after N bogo operations.
6521
6522 --vfork-vm
6523 deprecated since stress-ng V0.14.03
6524
6525 vfork processes as much as possible stressor
6526 --vforkmany N
6527 start N workers that spawn off a chain of vfork children
6528 until the process table fills up and/or vfork fails. vfork
6529 can rapidly create child processes and the parent process
6530 has to wait until the child dies, so this stressor rapidly
6531 fills up the process table.
6532
6533 --vforkmany-ops N
6534 stop vforkmany stressors after N vforks have been made.
6535
6536 --vforkmany-vm
6537 enable detrimental performance virtual memory advice using
6538 madvise on all pages of the vforked process. Where possible
6539                 this will try to set every page in the new process using
6540                 madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE
6541 and MADV_RANDOM flags. Linux only.
6542
6543 Memory allocate and write stressor
6544 -m N, --vm N
6545 start N workers continuously calling mmap(2)/munmap(2) and
6546                 writing to the allocated memory. Note that this can cause
6547                 systems to trip the kernel OOM killer on Linux if there is
6548                 not enough physical memory and swap available.
6549
6550 --vm-bytes N
6551 mmap N bytes per vm worker, the default is 256MB. One can
6552 specify the size as % of total available memory or in units
6553 of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
6554 m or g.
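
              As an arithmetic sketch (the 8192 MB figure is a hypothetical
              machine size), a percentage --vm-bytes value is the combined
              total shared across all vm workers, so 80% over 8 workers
              gives each worker a tenth of available memory:

              ```shell
              # Hypothetical machine: 8192 MB of available memory.
              TOTAL_MB=8192
              PERCENT=80
              WORKERS=8
              # --vm-bytes 80% is the combined total; each worker maps its share.
              PER_WORKER_MB=$((TOTAL_MB * PERCENT / 100 / WORKERS))
              echo "each of the $WORKERS workers maps ${PER_WORKER_MB} MB"
              # e.g. (hypothetical run): stress-ng --vm 8 --vm-bytes 80% -t 60s
              ```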
6555
6556 --vm-hang N
6557 sleep N seconds before unmapping memory, the default is
6558 zero seconds. Specifying 0 will do an infinite wait.
6559
6560 --vm-keep
6561 do not continually unmap and map memory, just keep on re-
6562 writing to it.
6563
6564 --vm-locked
6565 Lock the pages of the mapped region into memory using mmap
6566 MAP_LOCKED (since Linux 2.5.37). This is similar to lock‐
6567 ing memory as described in mlock(2).
6568
6569 --vm-madvise advice
6570 Specify the madvise 'advice' option used on the memory
6571          mapped regions used in the vm stressor. Non-Linux systems
6572          only have the 'normal' madvise advice; Linux systems
6573          support 'dontneed', 'hugepage', 'mergeable', 'nohugepage',
6574 'normal', 'random', 'sequential', 'unmergeable' and 'will‐
6575 need' advice. If this option is not used then the default
6576 is to pick random madvise advice for each mmap call. See
6577 madvise(2) for more details.
6578
6579 --vm-method method
6580 specify a vm stress method. By default, all the stress
6581 methods are exercised sequentially, however one can specify
6582 just one method to be used if required. Each of the vm
6583 workers have 3 phases:
6584
6585          1. Initialised. The anonymously mapped memory region is set
6586 to a known pattern.
6587
6588 2. Exercised. Memory is modified in a known predictable
6589 way. Some vm workers alter memory sequentially, some use
6590 small or large strides to step along memory.
6591
6592 3. Checked. The modified memory is checked to see if it
6593 matches the expected result.
6594
6595 The vm methods containing 'prime' in their name have a
6596          stride of the largest prime less than 2↑64, allowing them
6597          to thoroughly step through memory and touch all locations
6598          just once while avoiding touching memory cells next to
6599          each other. This strategy exercises the cache and page
6600          non-locality.
6601
6602          Since the memory being exercised is virtually mapped,
6603          there is no guarantee of touching page addresses in any
6604          particular physical order. These workers should not be
6605          used to test that all the system's memory is working
6606          correctly; use tools such as memtest86 instead.
6607
6608 The vm stress methods are intended to exercise memory in
6609 ways to possibly find memory issues and to try to force
6610 thermal errors.
6611
6612 Available vm stress methods are described as follows:
6613
6614 Method Description
6615 all iterate over all the vm stress methods as
6616 listed below.
6617 cache-lines work through memory in 64 byte cache sized
6618 steps writing a single byte per cache line.
6619 Once the write is complete, the memory is
6620 read to verify the values are written cor‐
6621 rectly.
6625 cache-stripe work through memory in 64 byte cache sized
6626 chunks, writing in ascending address order on
6627 even offsets and descending address order on
6628 odd offsets.
6629          checkboard   work through memory writing alternating
6630 zero/one bit values into memory in a mixed
6631 checkerboard pattern. Memory is swapped
6632 around to ensure every bit is read, bit
6633 flipped and re-written and then re-read for
6634 verification.
6635 flip sequentially work through memory 8 times,
6636                       each time flipping (inverting) just one bit in
6637                       memory. This will effectively invert each
6638 byte in 8 passes.
6639 fwdrev write to even addressed bytes in a forward
6640 direction and odd addressed bytes in reverse
6641                       direction. The contents are sanity checked
6642 once all the addresses have been written to.
6643 galpat-0 galloping pattern zeros. This sets all bits
6644 to 0 and flips just 1 in 4096 bits to 1. It
6645 then checks to see if the 1s are pulled down
6646                       to 0 by their neighbours or if the neighbours
6647 have been pulled up to 1.
6648 galpat-1 galloping pattern ones. This sets all bits to
6649 1 and flips just 1 in 4096 bits to 0. It then
6650 checks to see if the 0s are pulled up to 1 by
6651                       their neighbours or if the neighbours
6652 been pulled down to 0.
6653 gray fill the memory with sequential gray codes
6654 (these only change 1 bit at a time between
6655 adjacent bytes) and then check if they are
6656 set correctly.
6657 grayflip fill memory with adjacent bytes of gray code
6658 and inverted gray code pairs to change as
6659 many bits at a time between adjacent bytes
6660 and check if these are set correctly.
6661 incdec work sequentially through memory twice, the
6662 first pass increments each byte by a specific
6663 value and the second pass decrements each
6664 byte back to the original start value. The
6665 increment/decrement value changes on each in‐
6666 vocation of the stressor.
6667 inc-nybble initialise memory to a set value (that
6668 changes on each invocation of the stressor)
6669 and then sequentially work through each byte
6670 incrementing the bottom 4 bits by 1 and the
6671 top 4 bits by 15.
6672 lfsr32 fill memory with values generated from a 32
6673 bit Galois linear feedback shift register us‐
6674 ing the polynomial x↑32 + x↑31 + x↑29 + x +
6675 1. This generates a ring of 2↑32 - 1 unique
6676 values (all 32 bit values except for 0).
6677 rand-set sequentially work through memory in 64 bit
6678 chunks setting bytes in the chunk to the same
6679 8 bit random value. The random value changes
6680 on each chunk. Check that the values have
6681 not changed.
6682 rand-sum sequentially set all memory to random values
6683 and then summate the number of bits that have
6684 changed from the original set values.
6685 read64 sequentially read memory using 32 × 64 bit
6686 reads per bogo loop. Each loop equates to one
6687 bogo operation. This exercises raw memory
6688 reads.
6689 ror fill memory with a random pattern and then
6690 sequentially rotate 64 bits of memory right
6691 by one bit, then check the final load/ro‐
6692 tate/stored values.
6693 swap fill memory in 64 byte chunks with random
6694                       patterns. Then swap each 64 byte chunk with a
6695                       randomly chosen chunk. Finally, reverse the swap
6696 to put the chunks back to their original
6697 place and check if the data is correct. This
6698 exercises adjacent and random memory
6699 load/stores.
6700          move-inv     sequentially fill memory 64 bits at
6701 a time with random values, and then check if
6702 the memory is set correctly. Next, sequen‐
6703 tially invert each 64 bit pattern and again
6704 check if the memory is set as expected.
6705 modulo-x fill memory over 23 iterations. Each itera‐
6706 tion starts one byte further along from the
6707 start of the memory and steps along in 23
6708 byte strides. In each stride, the first byte
6709 is set to a random pattern and all other
6710 bytes are set to the inverse. Then it checks
6711                       to see if the first byte contains the expected
6712 random pattern. This exercises cache
6713 store/reads as well as seeing if neighbouring
6714 cells influence each other.
6715 mscan fill each bit in each byte with 1s then check
6716 these are set, fill each bit in each byte
6717 with 0s and check these are clear.
6718 prime-0 iterate 8 times by stepping through memory in
6719                       very large prime strides clearing just one bit
6720 at a time in every byte. Then check to see if
6721 all bits are set to zero.
6722 prime-1 iterate 8 times by stepping through memory in
6723                       very large prime strides setting just one bit
6724 at a time in every byte. Then check to see if
6725 all bits are set to one.
6726 prime-gray-0 first step through memory in very large prime
6727                       strides clearing just one bit (based on a gray
6728 code) in every byte. Next, repeat this but
6729 clear the other 7 bits. Then check to see if
6730 all bits are set to zero.
6731 prime-gray-1 first step through memory in very large prime
6732                       strides setting just one bit (based on a gray
6733 code) in every byte. Next, repeat this but
6734 set the other 7 bits. Then check to see if
6735 all bits are set to one.
6736 rowhammer try to force memory corruption using the
6737 rowhammer memory stressor. This fetches two
6738 32 bit integers from memory and forces a
6739 cache flush on the two addresses multiple
6740 times. This has been known to force bit flip‐
6741 ping on some hardware, especially with lower
6742 frequency memory refresh cycles.
6744 walk-0d for each byte in memory, walk through each
6745 data line setting them to low (and the others
6746 are set high) and check that the written
6747 value is as expected. This checks if any data
6748 lines are stuck.
6749 walk-1d for each byte in memory, walk through each
6750 data line setting them to high (and the oth‐
6751 ers are set low) and check that the written
6752 value is as expected. This checks if any data
6753 lines are stuck.
6754 walk-0a in the given memory mapping, work through a
6755 range of specially chosen addresses working
6756 through address lines to see if any address
6757 lines are stuck low. This works best with
6758 physical memory addressing, however, exercis‐
6759 ing these virtual addresses has some value
6760 too.
6761 walk-1a in the given memory mapping, work through a
6762 range of specially chosen addresses working
6763 through address lines to see if any address
6764 lines are stuck high. This works best with
6765 physical memory addressing, however, exercis‐
6766 ing these virtual addresses has some value
6767 too.
6768 write64 sequentially write to memory using 32 × 64
6769 bit writes per bogo loop. Each loop equates
6770 to one bogo operation. This exercises raw
6771 memory writes. Note that memory writes are
6772 not checked at the end of each test itera‐
6773 tion.
6774 write64nt sequentially write to memory using 32 × 64
6775 bit non-temporal writes per bogo loop. Each
6776 loop equates to one bogo operation. This ex‐
6777 ercises cacheless raw memory writes and is
6778 only available on x86 sse2 capable systems
6779 built with gcc and clang compilers. Note
6780 that memory writes are not checked at the end
6781 of each test iteration.
6782 write1024v sequentially write to memory using 1 × 1024
6783 bit vector write per bogo loop (only avail‐
6784 able if the compiler supports vector types).
6785 Each loop equates to one bogo operation.
6786 This exercises raw memory writes. Note that
6787 memory writes are not checked at the end of
6788 each test iteration.
6789 wrrd128nt write to memory in 128 bit chunks using non-
6790 temporal writes (bypassing the cache). Each
6791 chunk is written 4 times to hammer the mem‐
6792 ory. Then check to see if the data is correct
6793 using non-temporal reads if they are avail‐
6794 able or normal memory reads if not. Only
6795 available with processors that provide non-
6796 temporal 128 bit writes.
6797 zero-one set all memory bits to zero and then check if
6798 any bits are not zero. Next, set all the mem‐
6799 ory bits to one and check if any bits are not
6800 one.
6801
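              As an arithmetic sketch of the cache-lines method above (the
              256 MB figure is simply the documented --vm-bytes default),
              stepping a mapping in 64 byte cache line strides performs one
              single-byte write per line:

              ```shell
              # 256 MB mapping (the --vm-bytes default) walked in 64 byte
              # cache line steps: one single-byte write per cache line.
              VM_BYTES=$((256 * 1024 * 1024))
              CACHE_LINE=64
              LINES=$((VM_BYTES / CACHE_LINE))
              echo "cache lines written per pass: $LINES"
              # e.g. (hypothetical run):
              # stress-ng --vm 1 --vm-method cache-lines -t 60s
              ```
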
6802 --vm-ops N
6803 stop vm workers after N bogo operations.
6804
6805 --vm-populate
6806 populate (prefault) page tables for the memory mappings;
6807 this can stress swapping. Only available on systems that
6808 support MAP_POPULATE (since Linux 2.5.46).
6809
6810   Virtual memory addressing stressor
6811 --vm-addr N
6812 start N workers that exercise virtual memory addressing us‐
6813 ing various methods to walk through a memory mapped address
6814 range. This will exercise mapped private addresses from 8MB
6815 to 64MB per worker and try to generate cache and TLB inef‐
6816 ficient addressing patterns. Each method will set the mem‐
6817 ory to a random pattern in a write phase and then sanity
6818 check this in a read phase.
6819
6820 --vm-addr-method method
6821 specify a vm address stress method. By default, all the
6822 stress methods are exercised sequentially, however one can
6823 specify just one method to be used if required.
6824
6825 Available vm address stress methods are described as fol‐
6826 lows:
6827
6828 Method Description
6829 all iterate over all the vm stress methods as listed
6830 below.
6831 bitposn iteratively write to memory in powers of 2 strides
6832 of max_stride to 1 and then read check memory in
6833 powers of 2 strides 1 to max_stride where
6834 max_stride is half the size of the memory mapped
6835 region. All bit positions of the memory address
6836 space are bit flipped in the striding.
6837 dec work through the address range backwards sequen‐
6838 tially, byte by byte.
6839 decinv like dec, but with all the relevant address bits
6840 inverted.
6841 flip address memory using gray coded addresses and
6842 their inverse to flip as many address bits per
6843                  write/read operation.
6844 gray work through memory with gray coded addresses so
6845 that each change of address just changes 1 bit
6846 compared to the previous address.
6847        grayinv   like gray, but with all the relevant address bits
6848 inverted, hence all bits change apart from 1 in
6849 the address range.
6850 inc work through the address range forwards sequen‐
6851 tially, byte by byte.
6852 incinv like inc, but with all the relevant address bits
6853 inverted.
6854 pwr2 work through memory addresses in steps of powers
6855 of two.
6856        pwr2inv   like pwr2, but with all the relevant address bits
6857 inverted.
6858 rev work through the address range with the bits in
6859 the address range reversed.
6860 revinv like rev, but with all the relevant address bits
6861 inverted.
6862
6863 --vm-addr-mlock
6864 attempt to mlock pages into memory causing more memory
6865                  pressure by preventing pages from being swapped out.
6866
6867 --vm-addr-ops N
6868                  stop the vm-addr workers after N bogo addressing passes.
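
              As a sketch of the pwr2 method above (the 8 MB region size is
              the documented lower bound of the per-worker mapping range),
              the number of distinct power-of-two strides in a region is its
              base-2 logarithm:

              ```shell
              # An 8 MB region (the smallest mapping the vm-addr stressor
              # uses) offers log2(8 MB) = 23 distinct power-of-two strides.
              SIZE=$((8 * 1024 * 1024))
              BITS=0
              N=$SIZE
              while [ "$N" -gt 1 ]; do
                  N=$((N / 2))
                  BITS=$((BITS + 1))
              done
              echo "power-of-two strides in an 8 MB region: $BITS"
              # e.g. (hypothetical run):
              # stress-ng --vm-addr 2 --vm-addr-method pwr2 -t 60s
              ```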
6869
6870 Memory transfer between parent and child processes stressor (Linux)
6871 --vm-rw N
6872 start N workers that transfer memory to/from a parent/child
6873          using process_vm_writev(2) and process_vm_readv(2). This
6874          feature is only supported on Linux. Memory transfers are
6875 only verified if the --verify option is enabled.
6876
6877 --vm-rw-bytes N
6878 mmap N bytes per vm-rw worker, the default is 16MB. One can
6879 specify the size as % of total available memory or in units
6880 of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
6881 m or g.
6882
6883 --vm-rw-ops N
6884 stop vm-rw workers after N memory read/writes.
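
              A hedged sizing sketch (the worker count is an arbitrary
              choice): since the default --vm-rw-bytes mapping is 16 MB per
              worker, the total transfer buffer footprint grows linearly
              with the worker count:

              ```shell
              # Default --vm-rw-bytes is 16 MB per worker; 4 workers
              # therefore map 64 MB of transfer buffers in total
              # (illustrative arithmetic only).
              PER_WORKER_MB=16
              WORKERS=4
              TOTAL_MB=$((PER_WORKER_MB * WORKERS))
              echo "total mapped transfer buffers: ${TOTAL_MB} MB"
              # e.g. (hypothetical run): stress-ng --vm-rw 4 --verify -t 30s
              ```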
6885
6886 Memory unmap from a child process stressor
6887 --vm-segv N
6888 start N workers that create a child process that unmaps its
6889 address space causing a SIGSEGV on return from the unmap.
6890
6891 --vm-segv-ops N
6892 stop after N bogo vm-segv SIGSEGV faults.
6893
6894 Vmsplice stressor (Linux)
6895 --vm-splice N
6896 move data from memory to /dev/null through a pipe without
6897 any copying between kernel address space and user address
6898 space using vmsplice(2) and splice(2). This is only avail‐
6899 able for Linux.
6900
6901 --vm-splice-bytes N
6902 transfer N bytes per vmsplice call, the default is 64K. One
6903 can specify the size as % of total available memory or in
6904 units of Bytes, KBytes, MBytes and GBytes using the suffix
6905 b, k, m or g.
6906
6907 --vm-splice-ops N
6908 stop after N bogo vm-splice operations.
6909
6910 Virtual Memory Area (VMA) stressor
6911 --vma N
6912          start N workers that create pthreads to mmap, munmap,
6913 mlock, munlock, madvise, msync, mprotect, mincore and ac‐
6914 cess 16 pages in a randomly selected virtual memory address
6915 space. This is designed to trip races on VMA page modifica‐
6916 tions. Every 15 seconds a different virtual address space
6917          is randomly chosen.
6918
6919 --vma-ops N
6920 stop the vma stressors after N successful memory mappings.
6921
6922 Vector neural network instructions stressor
6923 --vnni N
6924 start N workers that exercise vector neural network in‐
6925 structions (VNNI) used in convolutional neural network
6926 loops. A 256 byte vector is operated upon using 8 bit mul‐
6927          tiply with 16 bit summation, 16 bit multiply and 32 bit
6928          summation, and 8 bit summation. When processor features
6929          allow, these operations use 512, 256 and 128 bit vector
6930          operations. Generic non-vectorized code variants are also
6931          provided (which may be vectorized by more advanced optimising
6932 compilers).
6933
6934 --vnni-intrinsic
6935 just use the vnni methods that use intrinsic VNNI instruc‐
6936 tions and ignore the generic non-vectorized methods.
6937
6938 --vnni-method N
6939 select the VNNI method to be exercised, may be one of:
6940
6941 Method Description
6942 all exercise all the following VNNI methods
6943 vpaddb512 8 bit vector addition using 512 bit vector op‐
6944 erations on 64 × 8 bit integers, (x86 vpaddb)
6945 vpaddb256 8 bit vector addition using 256 bit vector op‐
6946 erations on 32 × 8 bit integers, (x86 vpaddb)
6947        vpaddb128   8 bit vector addition using 128 bit vector
6948                    operations on 16 × 8 bit integers, (x86
6949 vpaddb)
6950 vpaddb 8 bit vector addition using 8 bit sequential
6951 addition (may be vectorized by the compiler)
6952 vpdpbusd512 8 bit vector multiplication of unsigned and
6953 signed 8 bit values followed by 16 bit summa‐
6954 tion using 512 bit vector operations on 64 × 8
6955 bit integers, (x86 vpdpbusd)
6956 vpdpbusd256 8 bit vector multiplication of unsigned and
6957 signed 8 bit values followed by 16 bit summa‐
6958 tion using 256 bit vector operations on 32 × 8
6959 bit integers, (x86 vpdpbusd)
6960 vpdpbusd128 8 bit vector multiplication of unsigned and
6961 signed 8 bit values followed by 16 bit summa‐
6962                    tion using 128 bit vector operations on 16 × 8
6963 bit integers, (x86 vpdpbusd)
6964 vpdpbusd 8 bit vector multiplication of unsigned and
6965 signed 8 bit values followed by 16 bit summa‐
6966 tion using sequential operations (may be vec‐
6967 torized by the compiler)
6968 vpdpwssd512 16 bit vector multiplication of unsigned and
6969 signed 16 bit values followed by 32 bit summa‐
6970                    tion using 512 bit vector operations on 32 × 16
6971 bit integers, (x86 vpdpwssd)
6972 vpdpwssd256 16 bit vector multiplication of unsigned and
6973 signed 16 bit values followed by 32 bit summa‐
6974                    tion using 256 bit vector operations on 16 × 16
6975 bit integers, (x86 vpdpwssd)
6976 vpdpwssd128 16 bit vector multiplication of unsigned and
6977 signed 16 bit values followed by 32 bit summa‐
6978                    tion using 128 bit vector operations on 8 × 16
6979 bit integers, (x86 vpdpwssd)
6982 vpdpwssd 16 bit vector multiplication of unsigned and
6983 signed 16 bit values followed by 32 bit summa‐
6984 tion using sequential operations (may be vec‐
6985 torized by the compiler)
6986
6987 --vnni-ops N
6988 stop after N bogo VNNI computation operations. 1 bogo-op is
6989 equivalent to 1024 convolution loops operating on 256 bytes
6990 of data.
6991
6992 Pausing and resuming threads stressor
6993 --wait N
6994 start N workers that spawn off two children; one spins in a
6995 pause(2) loop, the other continually stops and continues
6996 the first. The controlling process waits on the first child
6997 to be resumed by the delivery of SIGCONT using waitpid(2)
6998 and waitid(2).
6999
7000 --wait-ops N
7001 stop after N bogo wait operations.
7002
7004 CPU wait instruction stressor
7005 --waitcpu N
7006 start N workers that exercise processor wait instructions.
7007 For x86 these are pause, tpause and umwait (when available)
7008 and nop. For ARM the yield instruction is used. For other
7009 architectures currently nop instructions are used.
7010
7011 --waitcpu-ops N
7012 stop after N bogo processor wait operations.
7013
7014 Watchdog stressor
7015 --watchdog N
7016          start N workers that exercise the /dev/watchdog watchdog
7017          interface by opening it, performing various watchdog
7018          specific ioctl(2) commands on the device and closing it.
7019          Before closing, the special watchdog magic close message is written to
7020 the device to try and force it to never trip a watchdog re‐
7021 boot after the stressor has been run. Note that this
7022 stressor needs to be run as root with the --pathological
7023 option and is only available on Linux.
7024
7025 --watchdog-ops N
7026 stop after N bogo operations on the watchdog device.
7027
7028   Libc wide character string function stressor
7029 --wcs N
7030 start N workers that exercise various libc wide character
7031 string functions on random strings.
7032
7033 --wcs-method wcsfunc
7034 select a specific libc wide character string function to
7035 stress. Available string functions to stress are: all, wc‐
7036 scasecmp, wcscat, wcschr, wcscoll, wcscmp, wcscpy, wcslen,
7037 wcsncasecmp, wcsncat, wcsncmp, wcsrchr and wcsxfrm. The
7038 'all' method is the default and will exercise all the
7039 string methods.
7040
7041 --wcs-ops N
7042 stop after N bogo wide character string operations.
7043
7044   Scheduler workload stressor
7045 --workload N
7046 start N workers that exercise the scheduler with items of
7047 work that are started at random times with random sleep de‐
7048 lays between work items. By default a 100,000 microsecond
7049 slice of time has 100 work items that start at random times
7050 during the slice. The work items by default run for a
7051 quanta of 1000 microseconds scaled by the percentage work
7052 load (default of 30%). For a slice of S microseconds and a
7053 work item quanta duration of Q microseconds, S / Q work
7054 items are executed per slice. For a work load of L percent,
7055 the run time per item is the quanta Q × L / 100 microsec‐
7056          onds. The --workload-threads option allows work items to be
7057 taken from a queue and run concurrently if the scheduling
7058 run times overlap.
7059 If a work item is already running when a new work item is
7060 scheduled to run then the new work item is delayed and
7061 starts directly after the completion of the currently run‐
7062 ning work item when running with the default of zero worker
7063 threads. This emulates bursty scheduled compute, such as
7064 handling input packets where one may have lots of work
7065 items bunched together or with random unpredictable delays
7066 between work items.
7067
7068 --workload-load L
7069 specify the percentage run time load of each work item with
7070 respect to the run quanta duration. Essentially the run du‐
7071 ration of each work item is the quanta duration Q × L /
7072 100.
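
              The slice arithmetic above can be sketched directly with the
              documented defaults (S = 100,000 microseconds, Q = 1000
              microseconds, L = 30%):

              ```shell
              # Documented defaults: a 100,000 us slice, 1000 us work item
              # quanta and a 30% load percentage.
              S=100000   # --workload-slice-us
              Q=1000     # --workload-quanta-us
              L=30       # --workload-load
              ITEMS=$((S / Q))           # work items per slice: S / Q
              RUN_US=$((Q * L / 100))    # run time per item: Q x L / 100
              echo "items per slice: $ITEMS, run time per item: ${RUN_US} us"
              ```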
7073
7074 --workload-method method
7075 select the workload method. Each quanta of execution time
7076 is consumed using a tight spin-loop executing a workload
7077 method. The available methods are described as follows:
7078
7079 Method Description
7080 all randomly select any one of all the
7081 following methods:
7082 fma perform multiply-add operations, on
7083 modern processors these may be com‐
7084 piled into fused-multiply-add instruc‐
7085 tions.
7086 getpid get the stressor's PID via getpid(2).
7087 time get the current time via time(2).
7088 inc64 increment a 64 bit integer.
7089 memmove copy (move) a 1MB buffer using mem‐
7090 move(3).
7091 memread read from a 1MB buffer using fast mem‐
7092 ory reads.
7093        memset    write to a 1MB buffer using memset(3).
7094        mwc64     compute 64 bit random numbers using a
7095 mwc random generator.
7096 nop waste cycles using no-op instructions.
7097 pause stop execution using CPU pause/yield
7098 or memory barrier instructions where
7099 available.
7101 random a random mix of all the workload meth‐
7102 ods, changing the workload method on
7103 every spin-loop.
7104 sqrt perform double precision floating
7105 point sqrt(3) and hypot(3) math opera‐
7106 tions.
7107
7108 --workload-sched [ batch | deadline | idle | fifo | other | rr ]
7109          select scheduling policy. Note that fifo and rr require root
7110 privilege to set.
7111
7112 --workload-slice-us S
7113 specify the duration of each scheduling slice in microsec‐
7114 onds. The default is 100,000 microseconds (0.1 seconds).
7115
7116 --workload-quanta-us Q
7117 specify the duration of each work item in microseconds. The
7118          default is 1000 microseconds (1 millisecond).
7119
7120 --workload-threads N
7121 use N process threads to take scheduler work items of a
7122 workqueue and run the work item. When N is 0 (default), no
7123 threads are used and the work items are run back-to-back
7124          sequentially without using a work queue. Using 2 or more
7125 threads allows work items to be handled concurrently if
7126 enough idle processors are available.
7127
7128 --workload-dist [ cluster | even | poisson | random1 | random2 |
7129 random3 ]
7130          specify the scheduling distribution of work items, the
7131          default is cluster. The distribution methods are described
7132 as follows:
7133
7134 Method Description
7135 cluster cluster 2/3 of the start times to try to start at
7136            a random time during the time slice, with the
7137 other 1/3 of start times evenly randomly distrib‐
7138 uted using a single random variable. The clustered
7139            start times cause a burst of items to be sched‐
7140 uled in a bunch with no delays between each clus‐
7141 tered work item.
7142 even evenly distribute scheduling start times across
7143            the workload slice.
7144 poisson generate scheduling events that occur individually
7145 at random moments, but which tend to occur at an
7146 average rate (known as a Poisson process).
7147 random1 evenly randomly distribute scheduling start times
7148 using a single random variable.
7149 random2 randomly distribute scheduling start times using a
7150 sum of two random variables, much like throwing 2
7151 dice.
7152 random3 randomly distribute scheduling start times using a
7153 sum of three random variables, much like throwing
7154 3 dice.
7155
7156 --workload-ops N
7157 stop the workload workers after N workload bogo-operations.
7158
7159 x86 cpuid stressor
7160 --x86cpuid N
7161 start N workers that exercise the x86 cpuid instruction
7162 with 18 different leaf types.
7163
7164 --x86cpuid-ops N
7165 stop after N iterations that exercise the different cpuid
7166 leaf types.
7167
7168 x86-64 syscall stressor (Linux)
7169 --x86syscall N
7170 start N workers that repeatedly exercise the x86-64 syscall
7171 instruction to call the getpid(2), getcpu(2), gettimeof‐
7172          day(2) and time(2) system calls using the Linux vsyscall
7173          handler. Only for Linux.
7174
7175 --x86syscall-func F
7176 Instead of exercising the 4 syscall system calls, just call
7177 the syscall function F. The function F must be one of
7178 getcpu, gettimeofday and time.
7179
7180 --x86syscall-ops N
7181 stop after N x86syscall system calls.
7182
7183 Extended file attributes stressor
7184 --xattr N
7185 start N workers that create, update and delete batches of
7186 extended attributes on a file.
7187
7188 --xattr-ops N
7189 stop after N bogo extended attribute operations.
7190
7191 Yield scheduling stressor
7192 -y N, --yield N
7193 start N workers that call sched_yield(2). This stressor en‐
7194 sures that at least 2 child processes per CPU exercise
7195          sched_yield(2) no matter how many workers are specified,
7196 thus always ensuring rapid context switching.
7197
7198 --yield-ops N
7199 stop yield stress workers after N sched_yield(2) bogo oper‐
7200 ations.
7201
7202 /dev/zero stressor
7203 --zero N
7204 start N workers that exercise /dev/zero with reads, lseeks,
7205 ioctls and mmaps. For just /dev/zero read benchmarking use
7206 the --zero-read option.
7207
7208 --zero-ops N
7209 stop zero stress workers after N /dev/zero bogo read opera‐
7210 tions.
7211
7212 --zero-read
7213 just read /dev/zero with 4K reads with no additional exer‐
7214 cising on /dev/zero.
7215
7216 Zlib stressor
7217 --zlib N
7218 start N workers compressing and decompressing random data
7219 using zlib. Each worker has two processes, one that com‐
7220 presses random data and pipes it to another process that
7221 decompresses the data. This stressor exercises CPU, cache
7222 and memory.
7223
7224 --zlib-level L
7225 specify the compression level (0..9), where 0 = no compres‐
7226 sion, 1 = fastest compression and 9 = best compression.
7227
7228 --zlib-mem-level L
7229 specify the reserved compression state memory for zlib.
7230 Default is 8.
7231
7232 Value
7233 1 minimum memory usage.
7234 9 maximum memory usage.
7235
7236 --zlib-method method
7237 specify the type of random data to send to the zlib li‐
7238 brary. By default, the data stream is created from a ran‐
7239 dom selection of the different data generation processes.
7240 However one can specify just one method to be used if re‐
7241 quired. Available zlib data generation methods are de‐
7242 scribed as follows:
7243
7244 Method Description
7245 00ff randomly distributed 0x00 and 0xFF values.
7246 ascii01 randomly distributed ASCII 0 and 1 characters.
7247 asciidigits randomly distributed ASCII digits in the range
7248 of 0 and 9.
7249 bcd packed binary coded decimals, 0..99 packed
7250 into 2 4-bit nybbles.
7251 binary 32 bit random numbers.
7252 brown 8 bit brown noise (Brownian motion/Random Walk
7253 noise).
7254 double double precision floating point numbers from
7255 sin(θ).
7256 fixed data stream is repeated 0x04030201.
7257 gcr random values as 4 × 4 bit data turned into 4
7258 × 5 bit group coded recording (GCR) patterns.
7259 Each 5 bit GCR value starts or ends with at
7260 most one zero bit so that concatenated GCR
7261 codes have no more than two zero bits in a
7262 row.
7263 gray 16 bit gray codes generated from an increment‐
7264 ing counter.
7265 inc16 16 bit incrementing values starting from a
7266 random 16 bit value.
7267        latin       Random Latin sentences from a sample of Lorem
7268 Ipsum text.
7269 lehmer Fast random values generated using Lehmer's
7270 generator using a 128 bit multiply.
7271 lfsr32 Values generated from a 32 bit Galois linear
7272 feedback shift register using the polynomial
7273 x↑32 + x↑31 + x↑29 + x + 1. This generates a
7274 ring of 2↑32 - 1 unique values (all 32 bit
7275 values except for 0).
7276        logmap      Values generated from a logistic map of the
7277 equation Χn+1 = r × Χn × (1 - Χn) where r > ≈
7278 3.56994567 to produce chaotic data. The values
7279 are scaled by a large arbitrary value and the
7280 lower 8 bits of this value are compressed.
7281 lrand48 Uniformly distributed pseudo-random 32 bit
7282 values generated from lrand48(3).
7283        morse       Morse code generated from random Latin sen‐
7284 tences from a sample of Lorem Ipsum text.
7285 nybble randomly distributed bytes in the range of
7286 0x00 to 0x0f.
7287 objcode object code selected from a random start point
7288 in the stress-ng text segment.
7289 parity 7 bit binary data with 1 parity bit.
7290 pink pink noise in the range 0..255 generated using
7291 the Gardner method with the McCartney selec‐
7292 tion tree optimization. Pink noise is where
7293 the power spectral density is inversely pro‐
7294 portional to the frequency of the signal and
7295 hence is slightly compressible.
7296 random segments of the data stream are created by
7297 randomly calling the different data generation
7298 methods.
7299 rarely1 data that has a single 1 in every 32 bits,
7300 randomly located.
7301 rarely0 data that has a single 0 in every 32 bits,
7302 randomly located.
7303 rdrand x86-64 only, generate random data using rdrand
7304 instruction.
7305 ror32 generate a 32 bit random value, rotate it
7306 right 0 to 7 places and store the rotated
7307 value for each of the rotations.
7308 text random ASCII text.
7309 utf8 random 8 bit data encoded to UTF-8.
7310 zero all zeros, compresses very easily.
7311
7312 --zlib-ops N
7313 stop after N bogo compression operations, each bogo com‐
7314 pression operation is a compression of 64K of random data
7315 at the highest compression level.
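
              Since each bogo operation compresses 64K of random data, the
              total data pushed through zlib for a given --zlib-ops count
              is simple to estimate (the 256 ops figure below is an
              arbitrary example, not a default):

              ```shell
              # Each zlib bogo op compresses 64 KB of random data, so 256
              # bogo ops push 16 MB through the compressor (illustrative
              # arithmetic only).
              OPS=256
              KB_PER_OP=64
              TOTAL_KB=$((OPS * KB_PER_OP))
              echo "data compressed: $((TOTAL_KB / 1024)) MB"
              # e.g. (hypothetical run):
              # stress-ng --zlib 2 --zlib-ops "$OPS" --zlib-method text
              ```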
7316
7317 --zlib-strategy S
7318 specifies the strategy to use when deflating data. This is
7319 used to tune the compression algorithm. Default is 0.
7320
7321 Value
7322 0 used for normal data (Z_DEFAULT_STRATEGY).
7323        1 for data generated by a filter or predictor (Z_FILTERED).
7324        2 forces Huffman encoding (Z_HUFFMAN_ONLY).
7325        3 limits match distances to one (run-length encoding, Z_RLE).
7326        4 prevents dynamic Huffman codes (Z_FIXED).
7327
7328 --zlib-stream-bytes S
7329                     specify the number of bytes to deflate before deflate
7330                     finishes the block and returns with Z_STREAM_END. One can
7331                     specify the size in units of Bytes, KBytes, MBytes and
7332                     GBytes using the suffix b, k, m or g. The default is 0,
7333                     which creates an endless stream until the stressor ends.
7334
7335 Value
7336 0 creates an endless deflate stream until stressor stops.
7337                     n      creates a stream of n bytes over and over again.
7338
7339 Each block will be closed with Z_STREAM_END.
7340
7341 --zlib-window-bits W
7342                     specify the window bits used to set the history buffer
7343                     size. The value is specified as the base two logarithm of
7344                     the buffer size (e.g. value 9 is 2^9 = 512 bytes). The
7345                     default is 15.
7346
7347 Value
7348                     -8 to -15 raw deflate format.
7349 8-15 zlib format.
7350 24-31 gzip format.
7351 40-47 inflate auto format detection using zlib deflate format.
7352
7353 Zombie processes stressor
7354 --zombie N
7355 start N workers that create zombie processes. This will
7356 rapidly try to create a default of 8192 child processes
7357 that immediately die and wait in a zombie state until they
7358                 are reaped. Once the maximum number of processes is
7359                 reached (or fork fails because the maximum allowed number
7360                 of children has been reached) the oldest child is reaped
7361                 and a new process is created, in a first-in first-out
7362                 manner; this is then repeated.
7363
7364 --zombie-max N
7365 try to create as many as N zombie processes. This may not
7366 be reached if the system limit is less than N.
7367
7368 --zombie-ops N
7369 stop zombie stress workers after N bogo zombie operations.
7370
EXAMPLES
7372       stress-ng --vm 8 --vm-bytes 80% -t 1h
7373
7374 run 8 virtual memory stressors that combined use 80% of the
7375 available memory for 1 hour. Thus each stressor uses 10% of the
7376 available memory.
7377
7378 stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
7379
7380 runs for 60 seconds with 4 cpu stressors, 2 io stressors and 1
7381 vm stressor using 1GB of virtual memory.
7382
7383 stress-ng --iomix 2 --iomix-bytes 10% -t 10m
7384
7385 runs 2 instances of the mixed I/O stressors using a total of 10%
7386 of the available file system space for 10 minutes. Each stressor
7387 will use 5% of the available file system space.
7388
7389 stress-ng --with cpu,matrix,vecmath,fp --seq 8 -t 1m
7390
7391 run 8 instances of cpu, matrix, vecmath and fp stressors sequen‐
7392 tially one after another, for 1 minute per stressor.
7393
7394 stress-ng --with cpu,matrix,vecmath,fp --permute 5 -t 10s
7395
7396 run permutations of 5 instances of cpu, matrix, vecmath and fp
7397 stressors sequentially one after another, for 10 seconds per
7398 permutation mix.
7399
7400 stress-ng --cyclic 1 --cyclic-dist 2500 --cyclic-method clock_ns
7401 --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
7402
7403 measures real time scheduling latencies created by the hdd
7404 stressor. This uses the high resolution nanosecond clock to mea‐
7405 sure latencies during sleeps of 10,000 nanoseconds. At the end
7406 of 1 minute of stressing, the latency distribution with 2500 ns
7407 intervals will be displayed. NOTE: this must be run with the
7408 CAP_SYS_NICE capability to enable the real time scheduling to
7409 get accurate measurements.
7410
7411 stress-ng --cpu 8 --cpu-ops 800000
7412
7413 runs 8 cpu stressors and stops after 800000 bogo operations.
7414
7415 stress-ng --sequential 2 --timeout 2m --metrics
7416
7417 run 2 simultaneous instances of all the stressors sequentially
7418 one by one, each for 2 minutes and summarise with performance
7419 metrics at the end.
7420
7421 stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
7422
7423 run 4 FFT cpu stressors, stop after 10000 bogo operations and
7424 produce a summary just for the FFT results.
7425
7426 stress-ng --cpu -1 --cpu-method all -t 1h --cpu-load 90
7427
7428 run cpu stressors on all online CPUs working through all the
7429 available CPU stressors for 1 hour, loading the CPUs at 90% load
7430 capacity.
7431
7432 stress-ng --cpu 0 --cpu-method all -t 20m
7433
7434 run cpu stressors on all configured CPUs working through all the
7435              available CPU stressors for 20 minutes.
7436
7437 stress-ng --all 4 --timeout 5m
7438
7439 run 4 instances of all the stressors for 5 minutes.
7440
7441 stress-ng --random 64
7442
7443 run 64 stressors that are randomly chosen from all the available
7444 stressors.
7445
7446 stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
7447
7448 run 64 instances of all the different cpu stressors and verify
7449 that the computations are correct for 10 minutes with a bogo op‐
7450 erations summary at the end.
7451
7452 stress-ng --sequential -1 -t 10m
7453
7454 run all the stressors one by one for 10 minutes, with the number
7455 of instances of each stressor matching the number of online
7456 CPUs.
7457
7458 stress-ng --sequential 8 --class io -t 5m --times
7459
7460 run all the stressors in the io class one by one for 5 minutes
7461 each, with 8 instances of each stressor running concurrently and
7462 show overall time utilisation statistics at the end of the run.
7463
7464 stress-ng --all -1 --maximize --aggressive
7465
7466 run all the stressors (1 instance of each per online CPU) simul‐
7467 taneously, maximize the settings (memory sizes, file alloca‐
7468 tions, etc.) and select the most demanding/aggressive options.
7469
7470 stress-ng --random 32 -x numa,hdd,key
7471
7472 run 32 randomly selected stressors and exclude the numa, hdd and
7473              key stressors.
7474
7475 stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
7476
7477              run 4 instances of the VM stressors one after another, ex‐
7478              cluding the bigheap, brk and stack stressors.
7479
7480 stress-ng --taskset 0,2-3 --cpu 3
7481
7482 run 3 instances of the CPU stressor and pin them to CPUs 0, 2
7483 and 3.
7484
EXIT STATUS
7486       Status Description
7487 0 Success.
7488 1 Error; incorrect user options or a fatal resource issue in
7489 the stress-ng stressor harness (for example, out of mem‐
7490 ory).
7491 2 One or more stressors failed.
7492 3 One or more stressors failed to initialise because of lack
7493 of resources, for example ENOMEM (no memory), ENOSPC (no
7494 space on file system) or a missing or unimplemented system
7495 call.
7496 4 One or more stressors were not implemented on a specific
7497 architecture or operating system.
7498 5 A stressor has been killed by an unexpected signal.
7499 6 A stressor exited by exit(2) which was not expected and
7500 timing metrics could not be gathered.
7501       7      The bogo ops metrics may be untrustworthy. This is most
7502 likely to occur when a stress test is terminated during
7503 the update of a bogo-ops counter such as when it has been
7504 OOM killed. A less likely reason is that the counter ready
7505 indicator has been corrupted.
7506
BUGS
7508       File bug reports at: https://github.com/ColinIanKing/stress-ng/issues
7509
SEE ALSO
7511       cpuburn(1), perf(1), stress(1), taskset(1)
7512 https://github.com/ColinIanKing/stress-ng/blob/master/README.md
7513
AUTHOR
7515       stress-ng was written by Colin Ian King <colin.i.king@gmail.com> and is
7516 a clean room re-implementation and extension of the original stress
7517 tool by Amos Waterland. Thanks also to the many contributors to
7518 stress-ng. The README.md file in the source contains a full list of the
7519 contributors.
7520
NOTES
7522       Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
7523 all the stressor processes and ensures temporary files and shared mem‐
7524 ory segments are removed cleanly.
7525
7526 Sending a SIGUSR2 to stress-ng will dump out the current load average
7527 and memory statistics.
7528
7529 Note that the stress-ng cpu, io, vm and hdd tests are different imple‐
7530 mentations of the original stress tests and hence may produce different
7531 stress characteristics.
7532
7533 The bogo operations metrics may change with each release because of
7534 bug fixes to the code, new features, compiler optimisations or changes
7535 in system call performance.
7536
COPYRIGHT
7538       Copyright © 2013-2021 Canonical Ltd, Copyright © 2021-2023 Colin Ian
7539 King.
7540 This is free software; see the source for copying conditions. There is
7541 NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
7542 PURPOSE.
7543
7544
7545
7546 9 November 2023 STRESS-NG(1)