STRESS-NG(1)                General Commands Manual               STRESS-NG(1)

NAME
       stress-ng - a tool to load and stress a computer system

SYNOPSIS
       stress-ng [OPTION [ARG]] ...

DESCRIPTION
       stress-ng will stress test a computer system in various selectable
       ways. It was designed to exercise various physical subsystems of a
       computer as well as the various operating system kernel interfaces.
       stress-ng also has a wide range of CPU specific stress tests that
       exercise floating point, integer, bit manipulation and control flow.

       stress-ng was originally intended to make a machine work hard and
       trip hardware issues such as thermal overruns as well as operating
       system bugs that only occur when a system is being thrashed hard. Use
       stress-ng with caution as some of the tests can make a system run hot
       on poorly designed hardware and also can cause excessive system
       thrashing which may be difficult to stop.

       stress-ng can also measure test throughput rates; this can be useful
       to observe performance changes across different operating system
       releases or types of hardware. However, it has never been intended to
       be used as a precise benchmark test suite, so do NOT use it in this
       manner.

       Running stress-ng with root privileges will adjust out of memory
       settings on Linux systems to make the stressors unkillable in low
       memory situations, so use this judiciously. With the appropriate
       privilege, stress-ng can allow the ionice class and ionice levels to
       be adjusted, again, this should be used with care.

       One can specify the number of processes to invoke per type of stress
       test; specifying a negative or zero value will select the number of
       processors available as defined by sysconf(_SC_NPROCESSORS_CONF).

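       For example, to start one cpu stressor per configured processor and
       stop after 60 seconds, one can use:

              stress-ng --cpu 0 --timeout 60s
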
OPTIONS
       General stress-ng control options:

       --abort
              this option will force all running stressors to abort
              (terminate) if any other stressor terminates prematurely
              because of a failure.

       --aggressive
              enables more file, cache and memory aggressive options. This
              may slow tests down, increase latencies and reduce the number
              of bogo ops as well as changing the balance of user time vs
              system time used depending on the type of stressor being
              used.

       -a N, --all N, --parallel N
              start N instances of all stressors in parallel. If N is zero,
              then the number of configured CPUs in the system is used. If
              N is less than zero, then the number of online CPUs is used.

       -b N, --backoff N
              wait N microseconds between the start of each stress worker
              process. This allows one to ramp up the stress tests over
              time.

       --class name
              specify the class of stressors to run. Stressors are
              classified into one or more of the following classes: cpu,
              cpu-cache, device, io, interrupt, filesystem, memory,
              network, os, pipe, scheduler and vm. Some stressors fall into
              just one class. For example the 'get' stressor is just in the
              'os' class. Other stressors fall into more than one class,
              for example, the 'lsearch' stressor falls into the 'cpu',
              'cpu-cache' and 'memory' classes as it exercises all three.
              Selecting a specific class will run all the stressors that
              fall into that class only when run with the --sequential
              option.

              Specifying a name followed by a question mark (for example
              --class vm?) will print out all the stressors in that
              specific class.

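              For example, to run one instance of each stressor in the io
              class, one after another, one can use:

                 stress-ng --class io --sequential 1 --timeout 60s
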
       -n, --dry-run
              parse options, but do not run stress tests. A no-op.

       --ftrace
              enable kernel function call tracing (Linux only). This will
              use the kernel debugfs ftrace mechanism to record all the
              kernel functions used on the system while stress-ng is
              running. This is only as accurate as the kernel ftrace
              output, so there may be some variability in the data
              reported.

       -h, --help
              show help.

       --ignite-cpu
              alter kernel controls to try and maximize the CPU. This
              requires root privilege to alter various /sys interface
              controls. Currently this only works for Intel P-State enabled
              x86 systems on Linux.

       --ionice-class class
              specify ionice class (only on Linux). Can be idle (default),
              besteffort, be, realtime, rt.

       --ionice-level level
              specify ionice level (only on Linux). For idle, 0 is the only
              possible option. For besteffort or realtime, values range
              from 0 (highest priority) to 7 (lowest priority). See
              ionice(1) for more details.

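              For example, to run 2 aio stressors with a best effort I/O
              scheduling class at the lowest I/O priority, one can use:

                 stress-ng --ionice-class besteffort --ionice-level 7 --aio 2 -t 60s
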
       --job jobfile
              run stressors using a jobfile. The jobfile is essentially a
              file containing stress-ng options (without the leading --)
              with one option per line. Lines may have comments with
              comment text preceded by the # character. A simple example is
              as follows:

                 run sequential   # run stressors sequentially
                 verbose          # verbose output
                 metrics-brief    # show metrics at end of run
                 timeout 60s      # stop each stressor after 60 seconds
                 #
                 # vm stressor options:
                 #
                 vm 2             # 2 vm stressors
                 vm-bytes 128M    # 128MB available memory
                 vm-keep          # keep vm mapping
                 vm-populate      # populate memory
                 #
                 # memcpy stressor options:
                 #
                 memcpy 5         # 5 memcpy stressors

              The job file introduces the run command that specifies how to
              run the stressors:

                 run sequential - run stressors sequentially
                 run parallel   - run stressors together in parallel

              Note that 'run parallel' is the default.

       -k, --keep-name
              by default, stress-ng will attempt to change the name of the
              stress processes according to their functionality; this
              option disables this and keeps the process names the same as
              the name of the parent process, that is, stress-ng.

       --log-brief
              by default stress-ng will report the name of the program, the
              message type and the process id as a prefix to all output.
              The --log-brief option will output messages without these
              fields to produce a less verbose output.

       --log-file filename
              write messages to the specified log file.

       --maximize
              overrides the default stressor settings and instead sets
              these to the maximum settings allowed. These defaults can
              always be overridden by the per stressor settings options if
              required.

       --max-fd N
              set the maximum limit on file descriptors (value or a % of
              system allowed maximum). By default, stress-ng can use all
              the available file descriptors; this option sets the limit in
              the range from 10 up to the maximum limit of RLIMIT_NOFILE.
              One can use a % setting too, e.g. 50% is half the maximum
              allowed file descriptors. Note that stress-ng will use about
              5 of the available file descriptors so take this into
              consideration when using this setting.

       --metrics
              output number of bogo operations in total performed by the
              stress processes. Note that these are not a reliable metric
              of performance or throughput and have not been designed to be
              used for benchmarking whatsoever. The metrics are just a
              useful way to observe how a system behaves when under various
              kinds of load.

              The following columns of information are output:

              Column Heading             Explanation
              bogo ops                   number of iterations of the
                                         stressor during the run. This is a
                                         metric of how much overall "work"
                                         has been achieved in bogo
                                         operations.
              real time (secs)           average wall clock duration (in
                                         seconds) of the stressor. This is
                                         the total wall clock time of all
                                         the instances of that particular
                                         stressor divided by the number of
                                         these stressors being run.
              usr time (secs)            total user time (in seconds)
                                         consumed running all the instances
                                         of the stressor.
              sys time (secs)            total system time (in seconds)
                                         consumed running all the instances
                                         of the stressor.
              bogo ops/s (real time)     total bogo operations per second
                                         based on wall clock run time. The
                                         wall clock time reflects the
                                         apparent run time. The more
                                         processors one has on a system the
                                         more the work load can be
                                         distributed onto these and hence
                                         the wall clock time will reduce
                                         and the bogo ops rate will
                                         increase. This is essentially the
                                         "apparent" bogo ops rate of the
                                         system.
              bogo ops/s (usr+sys time)  total bogo operations per second
                                         based on cumulative user and
                                         system time. This is the real bogo
                                         ops rate of the system taking into
                                         consideration the actual execution
                                         time of the stressor across all
                                         the processors. Generally this
                                         will decrease as one adds more
                                         concurrent stressors due to
                                         contention on cache, memory,
                                         execution units, buses and I/O
                                         devices.

       --metrics-brief
              enable metrics and only output metrics that are non-zero.

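              For example, to run 4 cpu stressors for 60 seconds and show
              the non-zero metrics at the end of the run:

                 stress-ng --cpu 4 --metrics-brief --timeout 60s
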
       --minimize
              overrides the default stressor settings and instead sets
              these to the minimum settings allowed. These defaults can
              always be overridden by the per stressor settings options if
              required.

       --no-madvise
              from version 0.02.26 stress-ng automatically calls madvise(2)
              with random advise options before each mmap and munmap to
              stress the vm subsystem a little harder. The --no-madvise
              option turns this default off.

       --no-rand-seed
              do not seed the stress-ng pseudo-random number generator with
              a quasi random start seed, but instead seed it with constant
              values. This forces tests to run each time using the same
              start conditions which can be useful when one requires
              reproducible stress tests.

       --oomable
              do not respawn a stressor if it gets killed by the
              Out-of-Memory (OOM) killer. The default behaviour is to
              restart a new instance of a stressor if the kernel OOM killer
              terminates the process. This option disables this default
              behaviour.

       --page-in
              touch allocated pages that are not in core, forcing them to
              be paged back in. This is a useful option to force all the
              allocated pages to be paged in when using the bigheap, mmap
              and vm stressors. It will severely degrade performance when
              the memory in the system is less than the allocated buffer
              sizes. This uses mincore(2) to determine the pages that are
              not in core and hence need touching to page them back in.

       --pathological
              enable stressors that are known to hang systems. Some
              stressors can quickly consume resources in such a way that
              they can rapidly hang a system before the kernel can OOM kill
              them. These stressors are not enabled by default; this option
              enables them, but you probably don't want to do this. You
              have been warned.

       --perf measure processor and system activity using perf events.
              Linux only and caveat emptor, according to
              perf_event_open(2): "Always double-check your results!
              Various generalized events have had wrong values.". Note that
              with Linux 4.7 one needs to have CAP_SYS_ADMIN capabilities
              for this option to work, or adjust
              /proc/sys/kernel/perf_event_paranoid to below 2 to use this
              without CAP_SYS_ADMIN.

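              For example, to measure perf events while running a single
              cpu stressor for 10 seconds:

                 stress-ng --cpu 1 --perf -t 10s
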
       -q, --quiet
              do not show any output.

       -r N, --random N
              start N random stress workers. If N is 0, then the number of
              configured processors is used for N.

       --sched scheduler
              select the named scheduler (only on Linux). To see the list
              of available schedulers use: stress-ng --sched which

       --sched-prio prio
              select the scheduler priority level (only on Linux). If the
              scheduler does not support this then the default priority
              level of 0 is chosen.

       --sched-period period
              select the period parameter for the deadline scheduler (only
              on Linux). Default value is 0 (in nanoseconds).

       --sched-runtime runtime
              select the runtime parameter for the deadline scheduler (only
              on Linux). Default value is 99999 (in nanoseconds).

       --sched-deadline deadline
              select the deadline parameter for the deadline scheduler
              (only on Linux). Default value is 100000 (in nanoseconds).

       --sched-reclaim
              use cpu bandwidth reclaim feature for the deadline scheduler
              (only on Linux).

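              For example, to run a single cyclic stressor under the
              deadline scheduler with an illustrative period, runtime and
              deadline of 1000000, 10000 and 100000 nanoseconds
              respectively:

                 stress-ng --cyclic 1 --sched deadline --sched-period 1000000 \
                           --sched-runtime 10000 --sched-deadline 100000 -t 60s
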
       --sequential N
              sequentially run all the stressors one by one for a default
              of 60 seconds. The number of instances of each of the
              individual stressors to be started is N. If N is less than
              zero, then the number of CPUs online is used for the number
              of instances. If N is zero, then the number of CPUs in the
              system is used. Use the --timeout option to specify the
              duration to run each stressor.

       --stressors
              output the names of the available stressors.

       --syslog
              log output (except for verbose -v messages) to the syslog.

       --taskset list
              set CPU affinity based on the list of CPUs provided;
              stress-ng is bound to just use these CPUs (Linux only). The
              CPUs to be used are specified by a comma separated list of
              CPU numbers (0 to N-1). One can specify a range of CPUs using
              '-', for example: --taskset 0,2-3,6,7-11

       --temp-path path
              specify a path for stress-ng temporary directories and
              temporary files; the default path is the current working
              directory. This path must have read and write access for the
              stress-ng stress processes.

       --thermalstat S
              every S seconds show CPU and thermal load statistics. This
              option shows average CPU frequency in GHz (average of online
              CPUs), load averages (1, 5 and 15 minutes) and available
              thermal zone temperatures in degrees Centigrade.

       --thrash
              this can only be used when running on Linux and with root
              privilege. This option starts a background thrasher process
              that works through all the processes on a system and tries to
              page in as many pages in the processes as possible. This will
              cause a considerable amount of thrashing of swap on an
              over-committed system.

       -t T, --timeout T
              stop stress test after T seconds. One can also specify the
              units of time in seconds, minutes, hours, days or years with
              the suffix s, m, h, d or y. Note: a timeout of 0 will run
              stress-ng without any timeouts (run forever).

       --timestamp
              add a timestamp in hours, minutes, seconds and hundredths of
              a second to the log output.

       --timer-slack N
              adjust the per process timer slack to N nanoseconds (Linux
              only). Increasing the timer slack allows the kernel to
              coalesce timer events by adding some fuzziness to timer
              expiration times and hence reduce wakeups. Conversely,
              decreasing the timer slack will increase wakeups. A value of
              0 for the timer-slack will set the system default of 50,000
              nanoseconds.

       --times
              show the cumulative user and system times of all the child
              processes at the end of the stress run. The percentage of
              utilisation of available CPU time is also calculated from the
              number of on-line CPUs in the system.

       --tz   collect temperatures from the available thermal zones on the
              machine (Linux only). Some devices may have one or more
              thermal zones, whereas others may have none.

       -v, --verbose
              show all debug, warnings and normal information output.

       --verify
              verify results when a test is run. This is not available on
              all tests. This will sanity check the computations or memory
              contents from a test run and report to stderr any unexpected
              failures.

       -V, --version
              show version of stress-ng, version of toolchain used to build
              stress-ng and system information.

       --vmstat S
              every S seconds show statistics about processes, memory,
              paging, block I/O, interrupts, context switches, disks and
              cpu activity. The output is similar to that of the vmstat(8)
              utility. Currently a Linux only option.

       -x, --exclude list
              specify a list of one or more stressors to exclude (that is,
              do not run them). This is useful to exclude specific
              stressors when one selects many stressors to run using the
              --class, --sequential, --all and --random options. For
              example, to run the cpu class stressors concurrently and
              exclude the numa and search stressors:

                 stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch

       -Y, --yaml filename
              output gathered statistics to a YAML formatted file named
              'filename'.

       Stressor specific options:

       --access N
              start N workers that work through various settings of file
              mode bits (read, write, execute) for the file owner and check
              that the user permissions reported by access(2) and
              faccessat(2) are sane.

       --access-ops N
              stop access workers after N bogo access sanity checks.

       --affinity N
              start N workers that run 16 processes that rapidly change CPU
              affinity (only on Linux). Rapidly switching CPU affinity can
              contribute to poor cache behaviour and a high context switch
              rate.

       --affinity-ops N
              stop affinity workers after N bogo affinity operations (only
              on Linux). Note that the counters across the 16 processes are
              not locked to improve affinity test rates, so the final
              number of bogo-ops will be equal to or more than the
              specified ops stop threshold because of racy unlocked bogo-op
              counting.

       --affinity-rand
              switch CPU affinity randomly rather than the default of
              sequentially.

       --af-alg N
              start N workers that exercise the AF_ALG socket domain by
              hashing and encrypting various sized random messages. This
              exercises the available hashes, ciphers, rng and aead crypto
              engines in the Linux kernel.

       --af-alg-ops N
              stop af-alg workers after N AF_ALG messages are hashed.

       --af-alg-dump
              dump the internal list representing cryptographic algorithms
              parsed from the /proc/crypto file to standard output
              (stdout).

       --aio N
              start N workers that issue multiple small asynchronous I/O
              writes and reads on a relatively small temporary file using
              the POSIX aio interface. This will just hit the file system
              cache and soak up a lot of user and kernel time in issuing
              and handling I/O requests. By default, each worker process
              will handle 16 concurrent I/O requests.

       --aio-ops N
              stop POSIX asynchronous I/O workers after N bogo asynchronous
              I/O requests.

       --aio-requests N
              specify the number of POSIX asynchronous I/O requests each
              worker should issue, the default is 16; 1 to 4096 are
              allowed.

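              For example, to run 2 aio stressors, each issuing 64
              concurrent POSIX asynchronous I/O requests:

                 stress-ng --aio 2 --aio-requests 64 -t 60s
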
       --aiol N
              start N workers that issue multiple 4K random asynchronous
              I/O writes using the Linux aio system calls io_setup(2),
              io_submit(2), io_getevents(2) and io_destroy(2). By default,
              each worker process will handle 16 concurrent I/O requests.

       --aiol-ops N
              stop Linux asynchronous I/O workers after N bogo asynchronous
              I/O requests.

       --aiol-requests N
              specify the number of Linux asynchronous I/O requests each
              worker should issue, the default is 16; 1 to 4096 are
              allowed.

       --apparmor N
              start N workers that exercise various parts of the AppArmor
              interface. Currently one needs root permission to run this
              particular test. Only available on Linux systems with
              AppArmor support and requires the CAP_MAC_ADMIN capability.

       --apparmor-ops N
              stop the AppArmor workers after N bogo operations.

       --atomic N
              start N workers that exercise various GCC __atomic_*()
              built-in operations on 8, 16, 32 and 64 bit integers that are
              shared among the N workers. This stressor is only available
              for builds using GCC 4.7.4 or higher. The stressor forces
              many front end cache stalls and cache references.

       --atomic-ops N
              stop the atomic workers after N bogo atomic operations.

       --bad-altstack N
              start N workers that create broken alternative signal stacks
              for SIGSEGV handling that in turn create secondary SIGSEGVs.
              A variety of randomly selected methods are used to create the
              stacks:

              a) Unmapping the alternative signal stack, before triggering
                 the signal handling.

              b) Changing the alternative signal stack to just being read
                 only, write only, execute only.

              c) Using a NULL alternative signal stack.

              d) Using the signal handler object as the alternative signal
                 stack.

              e) Unmapping the alternative signal stack during execution of
                 the signal handler.

       --bad-altstack-ops N
              stop the bad alternative stack stressors after N SIGSEGV bogo
              operations.

       --bad-ioctl N
              start N workers that perform a range of illegal bad read
              ioctls (using _IOR) across the device drivers. This exercises
              page size, 64 bit, 32 bit, 16 bit and 8 bit reads as well as
              NULL addresses, non-readable pages and PROT_NONE mapped
              pages. Currently only for Linux and requires the
              --pathological option.

       --bad-ioctl-ops N
              stop the bad ioctl stressors after N bogo ioctl operations.

       -B N, --bigheap N
              start N workers that grow their heaps by reallocating memory.
              If the out of memory killer (OOM) on Linux kills the worker
              or the allocation fails then the allocating process starts
              all over again. Note that the OOM adjustment for the worker
              is set so that the OOM killer will treat these workers as the
              first candidate processes to kill.

       --bigheap-ops N
              stop the big heap workers after N bogo allocation operations
              are completed.

       --bigheap-growth N
              specify amount of memory to grow heap by per iteration. Size
              can be from 4K to 64MB. Default is 64K.

       --binderfs N
              start N workers that mount, exercise and unmount binderfs.
              The binder control device is exercised with 256 sequential
              BINDER_CTL_ADD ioctl calls per loop.

       --binderfs-ops N
              stop after N binderfs cycles.

       --bind-mount N
              start N workers that repeatedly bind mount / to / inside a
              user namespace. This can consume resources rapidly, forcing
              out of memory situations. Do not use this stressor unless you
              want to risk hanging your machine.

       --bind-mount-ops N
              stop after N bind mount bogo operations.

       --branch N
              start N workers that randomly jump to 256 randomly selected
              locations and hence exercise the CPU branch prediction logic.

       --branch-ops N
              stop the branch stressors after N jumps.

       --brk N
              start N workers that grow the data segment by one page at a
              time using multiple brk(2) calls. Each successfully allocated
              new page is touched to ensure it is resident in memory. If an
              out of memory condition occurs then the test will reset the
              data segment to the point before it started and repeat the
              data segment resizing over again. The process adjusts the out
              of memory setting so that it may be killed by the out of
              memory (OOM) killer before other processes. If it is killed
              by the OOM killer then it will be automatically re-started by
              a monitoring parent process.

       --brk-ops N
              stop the brk workers after N bogo brk operations.

       --brk-mlock
              attempt to mlock future brk pages into memory, causing more
              memory pressure. If mlock(MCL_FUTURE) is implemented then
              this will stop new brk pages from being swapped out.

       --brk-notouch
              do not touch each newly allocated data segment page. This
              disables the default of touching each newly allocated page
              and hence avoids the kernel necessarily backing the page with
              real physical memory.

       --bsearch N
              start N workers that binary search a sorted array of 32 bit
              integers using bsearch(3). By default, there are 65536
              elements in the array. This is a useful method to exercise
              random access of memory and processor cache.

       --bsearch-ops N
              stop the bsearch worker after N bogo bsearch operations are
              completed.

       --bsearch-size N
              specify the size (number of 32 bit integers) in the array to
              bsearch. Size can be from 1K to 4M.

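              For example, to run 4 bsearch stressors on a sorted array of
              1048576 32 bit integers:

                 stress-ng --bsearch 4 --bsearch-size 1048576 -t 60s
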
       -C N, --cache N
              start N workers that perform wide spread random memory reads
              and writes to thrash the CPU cache. The code does not
              intelligently determine the CPU cache configuration and so it
              may be sub-optimal in producing hit-miss read/write activity
              for some processors.

       --cache-fence
              force write serialization on each store operation (x86
              only). This is a no-op for non-x86 architectures.

       --cache-flush
              force flush cache on each store operation (x86 only). This is
              a no-op for non-x86 architectures.

       --cache-level N
              specify level of cache to exercise (1=L1 cache, 2=L2 cache,
              3=L3/LLC cache (the default)). If the cache hierarchy cannot
              be determined, built-in defaults will apply.

       --cache-no-affinity
              do not change processor affinity when --cache is in effect.

       --cache-sfence
              force write serialization on each store operation using the
              sfence instruction (x86 only). This is a no-op for non-x86
              architectures.

       --cache-ops N
              stop cache thrash workers after N bogo cache thrash
              operations.

       --cache-prefetch
              force read prefetch on next read address on architectures
              that support prefetching.

       --cache-ways N
              specify the number of cache ways to exercise. This allows a
              subset of the overall cache size to be exercised.

       --cap N
              start N workers that read per process capabilities via calls
              to capget(2) (Linux only).

       --cap-ops N
              stop after N cap bogo operations.

       --chattr N
              start N workers that attempt to exercise file attributes via
              the EXT2_IOC_SETFLAGS ioctl. This is intentionally racy and
              exercises a range of chattr attributes by enabling and
              disabling them on a file shared amongst the N chattr stressor
              processes. (Linux only).

       --chattr-ops N
              stop after N chattr bogo operations.

       --chdir N
              start N workers that change directory between directories
              using chdir(2).

       --chdir-ops N
              stop after N chdir bogo operations.

       --chdir-dirs N
              exercise chdir on N directories. The default is 8192
              directories; this allows 64 to 65536 directories to be used
              instead.

       --chmod N
              start N workers that change the file mode bits via chmod(2)
              and fchmod(2) on the same file. The greater the value for N,
              the more contention on the single file. The stressor will
              work through all the combinations of mode bits.

       --chmod-ops N
              stop after N chmod bogo operations.

       --chown N
              start N workers that exercise chown(2) on the same file. The
              greater the value for N, the more contention on the single
              file.

       --chown-ops N
              stop the chown workers after N bogo chown(2) operations.

       --chroot N
              start N workers that exercise chroot(2) on various valid and
              invalid chroot paths. Only available on Linux systems and
              requires the CAP_SYS_ADMIN capability.

       --chroot-ops N
              stop the chroot workers after N bogo chroot(2) operations.

       --clock N
              start N workers exercising clocks and POSIX timers. For all
              known clock types this will exercise clock_getres(2),
              clock_gettime(2) and clock_nanosleep(2). For all known timers
              it will create a 50000ns timer and busy poll this until it
              expires. This stressor will cause frequent context switching.

       --clock-ops N
              stop clock stress workers after N bogo operations.

       --clone N
              start N workers that create clones (via the clone(2) and
              clone3() system calls). This will rapidly try to create a
              default of 8192 clones that immediately die and wait in a
              zombie state until they are reaped. Once the maximum number
              of clones is reached (or clone fails because one has reached
              the maximum allowed) the oldest clone thread is reaped and a
              new clone is then created in a first-in first-out manner, and
              then repeated. A random clone flag is selected for each clone
              to try to exercise different clone operations. The clone
              stressor is a Linux only option.

       --clone-ops N
              stop clone stress workers after N bogo clone operations.

       --clone-max N
              try to create as many as N clone threads. This may not be
              reached if the system limit is less than N.

       --close N
              start N workers that try to force race conditions on closing
              opened file descriptors. These file descriptors have been
              opened in various ways to try and exercise different kernel
              close handlers.

       --close-ops N
              stop close workers after N bogo close operations.

       --context N
              start N workers that run three threads that use
              swapcontext(3) to implement thread-to-thread context
              switching. This exercises rapid process context saving and
              restoring and is bandwidth limited by register and memory
              save and restore rates.

       --context-ops N
              stop context workers after N bogo context switches. In this
              stressor, 1 bogo op is equivalent to 1000 swapcontext calls.

       --copy-file N
              start N stressors that copy a file using the Linux
              copy_file_range(2) system call. 2MB chunks of data are copied
              from random locations in one file to random locations in a
              destination file. By default, the files are 256 MB in size.
              Data is sync'd to the filesystem after each
              copy_file_range(2) call.

       --copy-file-ops N
              stop after N copy_file_range() calls.

       --copy-file-bytes N
              copy file size, the default is 256 MB. One can specify the
              size as % of free space on the file system or in units of
              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
              g.

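              For example, to copy 2MB chunks around a 512 MB file:

                 stress-ng --copy-file 1 --copy-file-bytes 512m -t 60s
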
       -c N, --cpu N
              start N workers exercising the CPU by sequentially working
              through all the different CPU stress methods. Instead of
              exercising all the CPU stress methods, one can specify a
              specific CPU stress method with the --cpu-method option.

       --cpu-ops N
              stop cpu stress workers after N bogo operations.

       -l P, --cpu-load P
              load CPU with P percent loading for the CPU stress workers. 0
              is effectively a sleep (no load) and 100 is full loading. The
              loading loop is broken into compute time (load%) and sleep
              time (100% - load%). Accuracy depends on the overall load of
              the processor and the responsiveness of the scheduler, so the
              actual load may be different from the desired load. Note that
              the number of bogo CPU operations may not be linearly scaled
              with the load as some systems employ CPU frequency scaling
              and so heavier loads produce an increased CPU frequency and
              greater CPU bogo operations.

              Note: This option only applies to the --cpu stressor option
              and not to all of the cpu class of stressors.

       --cpu-load-slice S
              note - this option is only useful when --cpu-load is less
              than 100%. The CPU load is broken into multiple busy and idle
              cycles. Use this option to specify the duration of a busy
              time slice. A negative value for S specifies the number of
              iterations to run before idling the CPU (e.g. -30 invokes 30
              iterations of a CPU stress loop). A zero value selects a
              random busy time between 0 and 0.5 seconds. A positive value
              for S specifies the number of milliseconds to run before
              idling the CPU (e.g. 100 keeps the CPU busy for 0.1 seconds).
              Specifying small values for S leads to small time slices and
              smoother scheduling. Setting --cpu-load as a relatively low
              value and --cpu-load-slice to be large will cycle the CPU
              between long idle and busy cycles and exercise different CPU
              frequencies. The thermal range of the CPU is also cycled, so
              this is a good mechanism to exercise the scheduler, frequency
              scaling and passive/active thermal cooling mechanisms.

              Note: This option only applies to the --cpu stressor option
              and not to all of the cpu class of stressors.

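              For example, to load all the configured CPUs at 40% with half
              second busy slices:

                 stress-ng --cpu 0 --cpu-load 40 --cpu-load-slice 500 -t 5m
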
       --cpu-method method
              specify a cpu stress method. By default, all the stress
              methods are exercised sequentially, however one can specify
              just one method to be used if required. Available cpu stress
              methods are described as follows:

              Method            Description
              all               iterate over all the below cpu stress
                                methods
              ackermann         Ackermann function: compute A(3, 7), where:
                                A(m, n) = n + 1 if m = 0;
                                A(m - 1, 1) if m > 0 and n = 0;
                                A(m - 1, A(m, n - 1)) if m > 0 and n > 0
              apery             calculate Apery's constant ζ(3); the sum of
                                1/(n ↑ 3) to a precision of 1.0x10↑-14
              bitops            various bit operations from bithack,
                                namely: reverse bits, parity check, bit
                                count, round to nearest power of 2
              callfunc          recursively call 8 argument C function to a
                                depth of 1024 calls and unwind
              cfloat            1000 iterations of a mix of floating point
                                complex operations
              cdouble           1000 iterations of a mix of double floating
                                point complex operations
              clongdouble       1000 iterations of a mix of long double
                                floating point complex operations
              collatz           compute the 1348 steps in the collatz
                                sequence from starting number 989345275647,
                                where f(n) = n / 2 (for even n) and f(n) =
                                3n + 1 (for odd n)
              correlate         perform an 8192 × 512 correlation of random
                                doubles
              cpuid             fetch cpu specific information using the
                                cpuid instruction (x86 only)
              crc16             compute 1024 rounds of CCITT CRC16 on
                                random data
              decimal32         1000 iterations of a mix of 32 bit decimal
                                floating point operations (GCC only)
              decimal64         1000 iterations of a mix of 64 bit decimal
                                floating point operations (GCC only)
              decimal128        1000 iterations of a mix of 128 bit decimal
                                floating point operations (GCC only)
              dither            Floyd–Steinberg dithering of a 1024 × 768
                                random image from 8 bits down to 1 bit of
                                depth
              div64             50,000 64 bit unsigned integer divisions
              djb2a             128 rounds of hash DJB2a (Dan Bernstein
                                hash using the xor variant) on 128 to 1
                                bytes of random strings
              double            1000 iterations of a mix of double
                                precision floating point operations
              euler             compute e using n = (1 + (1 ÷ n)) ↑ n
              explog            iterate on n = exp(log(n) ÷ 1.00002)
              factorial         find factorials from 1..150 using
                                Stirling's and Ramanujan's approximations
              fibonacci         compute Fibonacci sequence of 0, 1, 1, 2,
                                3, 5, 8...
              fft               4096 sample Fast Fourier Transform
              fletcher16        1024 rounds of a naive implementation of a
                                16 bit Fletcher's checksum
              float             1000 iterations of a mix of floating point
                                operations
              float16           1000 iterations of a mix of 16 bit floating
                                point operations
              float32           1000 iterations of a mix of 32 bit floating
                                point operations
              float80           1000 iterations of a mix of 80 bit floating
                                point operations
              float128          1000 iterations of a mix of 128 bit
                                floating point operations
              floatconversion   perform 65536 iterations of floating point
                                conversions between float, double and long
                                double floating point variables
              fnv1a             128 rounds of hash FNV-1a (Fowler–Noll–Vo
                                hash using the xor then multiply variant)
                                on 128 to 1 bytes of random strings
              gamma             calculate the Euler-Mascheroni constant γ
                                using the limiting difference between the
                                harmonic series (1 + 1/2 + 1/3 + 1/4 + 1/5
                                ... + 1/n) and the natural logarithm ln(n),
                                for n = 80000
              gcd               compute GCD of integers
              gray              calculate binary to gray code and gray code
                                back to binary for integers from 0 to 65535
              hamming           compute Hamming H(8,4) codes on 262144 lots
                                of 4 bit data. This turns 4 bit data into 8
                                bit Hamming code containing 4 parity bits.
                                For data bits d1..d4, parity bits are
                                computed as:
                                p1 = d2 + d3 + d4
                                p2 = d1 + d3 + d4
                                p3 = d1 + d2 + d4
                                p4 = d1 + d2 + d3
              hanoi             solve a 21 disc Towers of Hanoi stack using
                                the recursive solution
              hyperbolic        compute sinh(θ) × cosh(θ) + sinh(2θ) +
                                cosh(3θ) for float, double and long double
                                hyperbolic sine and cosine functions where
                                θ = 0 to 2π in 1500 steps
              idct              8 × 8 IDCT (Inverse Discrete Cosine
                                Transform)
              int8              1000 iterations of a mix of 8 bit integer
                                operations
              int16             1000 iterations of a mix of 16 bit integer
                                operations
              int32             1000 iterations of a mix of 32 bit integer
                                operations
              int64             1000 iterations of a mix of 64 bit integer
                                operations
              int128            1000 iterations of a mix of 128 bit integer
                                operations (GCC only)
              int32float        1000 iterations of a mix of 32 bit integer
                                and floating point operations
              int32double       1000 iterations of a mix of 32 bit integer
                                and double precision floating point
                                operations
              int32longdouble   1000 iterations of a mix of 32 bit integer
                                and long double precision floating point
                                operations
              int64float        1000 iterations of a mix of 64 bit integer
                                and floating point operations
              int64double       1000 iterations of a mix of 64 bit integer
                                and double precision floating point
                                operations
              int64longdouble   1000 iterations of a mix of 64 bit integer
                                and long double precision floating point
                                operations
              int128float       1000 iterations of a mix of 128 bit integer
                                and floating point operations (GCC only)
              int128double      1000 iterations of a mix of 128 bit integer
                                and double precision floating point
                                operations (GCC only)
              int128longdouble  1000 iterations of a mix of 128 bit integer
                                and long double precision floating point
                                operations (GCC only)
              int128decimal32   1000 iterations of a mix of 128 bit integer
                                and 32 bit decimal floating point
                                operations (GCC only)
              int128decimal64   1000 iterations of a mix of 128 bit integer
                                and 64 bit decimal floating point
                                operations (GCC only)
              int128decimal128  1000 iterations of a mix of 128 bit integer
                                and 128 bit decimal floating point
                                operations (GCC only)
              intconversion     perform 65536 iterations of integer
                                conversions between int16, int32 and int64
                                variables
              ipv4checksum      compute 1024 rounds of the 16 bit ones'
                                complement IPv4 checksum
              jenkin            Jenkin's integer hash on 128 rounds of
                                128..1 bytes of random data
              jmp               simple unoptimised compare >, <, == and jmp
                                branching
              ln2               compute ln(2) based on series:
                                1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
              longdouble        1000 iterations of a mix of long double
                                precision floating point operations
              loop              simple empty loop
              matrixprod        matrix product of two 128 × 128 matrices of
                                double floats. Testing on 64 bit x86
                                hardware shows that this provides a good
                                mix of memory, cache and floating point
                                operations and is probably the best CPU
                                method to use to make a CPU run hot.
              murmur3_32        murmur3_32 hash (Austin Appleby's Murmur3
                                hash, 32 bit variant) on 128 rounds of
                                128..1 bytes of random data
              nsqrt             compute sqrt() of long doubles using
                                Newton-Raphson
              omega             compute the omega constant defined by Ωe↑Ω
                                = 1 using efficient iteration of Ωn+1 = (1
                                + Ωn) / (1 + e↑Ωn)
              parity            compute parity using various methods from
                                the Stanford Bit Twiddling Hacks. Methods
                                employed are: the naïve way, the naïve way
                                with the Brian Kernighan bit counting
                                optimisation, the multiply way, the
                                parallel way, and the lookup table ways (2
                                variations).
              phi               compute the Golden Ratio ϕ using series
              pi                compute π using the Srinivasa Ramanujan
                                fast convergence algorithm
              pjw               128 rounds of hash pjw function on 128 to 1
                                bytes of random strings
              prime             find the first 10000 prime numbers using a
                                slightly optimised brute force naïve trial
                                division search
              psi               compute ψ (the reciprocal Fibonacci
                                constant) using the sum of the reciprocals
                                of the Fibonacci numbers
              queens            compute all the solutions of the classic 8
                                queens problem for board sizes 1..11
              rand              16384 iterations of rand(), where rand is
                                the MWC pseudo random number generator. The
                                MWC random function concatenates two 16 bit
                                multiply-with-carry generators:
                                x(n) = 36969 × x(n - 1) + carry,
                                y(n) = 18000 × y(n - 1) + carry mod 2 ↑ 16
                                and has a period of around 2 ↑ 60.
              rand48            16384 iterations of drand48(3) and
                                lrand48(3)
              rgb               convert RGB to YUV and back to RGB (CCIR
                                601)
              sdbm              128 rounds of hash sdbm (as used in the
                                SDBM database and GNU awk) on 128 to 1
                                bytes of random strings
              sieve             find the first 10000 prime numbers using
                                the sieve of Eratosthenes
              stats             calculate minimum, maximum, arithmetic
                                mean, geometric mean, harmonic mean and
                                standard deviation on 250 randomly
                                generated positive double precision values
              sqrt              compute sqrt(rand()), where rand is the MWC
                                pseudo random number generator
              trig              compute sin(θ) × cos(θ) + sin(2θ) + cos(3θ)
                                for float, double and long double sine and
                                cosine functions where θ = 0 to 2π in 1500
                                steps
              union             perform integer arithmetic on a mix of bit
                                fields in a C union. This exercises how
                                well the compiler and CPU can perform
                                integer bit field loads and stores.
              zeta              compute the Riemann Zeta function ζ(s) for
                                s = 2.0..10.0

              Note that some of these methods try to exercise the CPU with
              computations found in some real world use cases. However, the
              code has not been optimised on a per-architecture basis, so
              may be sub-optimal compared to hand-optimised code used in
              some applications. They do try to represent the typical
              instruction mixes found in these use cases.

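              For example, to make a CPU run hot using the matrixprod
              method on all the configured CPUs:

                 stress-ng --cpu 0 --cpu-method matrixprod -t 10m
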
       --cpu-online N
              start N workers that put randomly selected CPUs offline and
              online. This Linux only stressor requires root privilege to
              perform this action. By default the first CPU (CPU 0) is
              never offlined as this has been found to be problematic on
              some systems and can result in a shutdown.

       --cpu-online-all
              the default is to never offline the first CPU. This option
              will offline and online all the CPUs, including CPU 0. This
              may cause some systems to shutdown.

       --cpu-online-ops N
              stop after N offline/online operations.

       --crypt N
              start N workers that encrypt a 16 character random password
              using crypt(3). The password is encrypted using MD5, SHA-256
              and SHA-512 encryption methods.

       --crypt-ops N
              stop after N bogo encryption operations.

       --cyclic N
              start N workers that exercise the real time FIFO or Round
              Robin schedulers with cyclic nanosecond sleeps. Normally one
              would just use 1 worker instance with this stressor to get
              reliable statistics. This stressor measures the first 10
              thousand latencies and calculates the mean, mode, minimum and
              maximum latencies along with various latency percentiles for
              just the first cyclic stressor instance. One has to run this
              stressor with the CAP_SYS_NICE capability to enable the real
              time scheduling policies. The FIFO scheduling policy is the
              default.

       --cyclic-ops N
              stop after N sleeps.

       --cyclic-dist N
              calculate and print a latency distribution with the interval
              of N nanoseconds. This is helpful to see where the latencies
              are clustering.

       --cyclic-method [ clock_ns | itimer | poll | posix_ns | pselect | usleep ]
              specify the cyclic method to be used, the default is
              clock_ns. The available cyclic methods are as follows:

              Method     Description
              clock_ns   sleep for the specified time using the
                         clock_nanosleep(2) high resolution nanosleep and
                         the CLOCK_REALTIME real time clock.
              itimer     wakeup a paused process with a CLOCK_REALTIME
                         itimer signal.
              poll       delay for the specified time using a poll delay
                         loop that checks for time changes using
                         clock_gettime(2) on the CLOCK_REALTIME clock.
              posix_ns   sleep for the specified time using the POSIX
                         nanosleep(2) high resolution nanosleep.
              pselect    sleep for the specified time using pselect(2) with
                         null file descriptors.
              usleep     sleep to the nearest microsecond using usleep(2).

       --cyclic-policy [ fifo | rr ]
              specify the desired real time scheduling policy, fifo
              (first-in, first-out) or rr (round robin).

       --cyclic-prio P
              specify the scheduling priority P. Range from 1 (lowest) to
              100 (highest).

       --cyclic-sleep N
              sleep for N nanoseconds per test cycle using
              clock_nanosleep(2) with the CLOCK_REALTIME timer. Range from
              1 to 1000000000 nanoseconds.

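              For example, to run a single cyclic stressor with the fifo
              policy at priority 80, using 10000 nanosecond sleeps and a
              latency distribution in 1000 nanosecond intervals:

                 stress-ng --cyclic 1 --cyclic-policy fifo --cyclic-prio 80 \
                           --cyclic-sleep 10000 --cyclic-dist 1000 -t 60s
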
       --daemon N
              start N workers that each create a daemon that dies
              immediately after creating another daemon and so on. This
              effectively works through the process table with short lived
              processes that do not have a parent and are waited for by
              init. This puts pressure on init to do rapid child reaping.
              The daemon processes perform the usual mix of calls to turn
              into typical UNIX daemons, so this artificially mimics very
              heavy daemon system stress.

       --daemon-ops N
              stop daemon workers after N daemons have been created.

       --dccp N
              start N workers that send and receive data using the Datagram
              Congestion Control Protocol (DCCP) (RFC4340). This involves a
              pair of client/server processes performing rapid connects,
              sends, receives and disconnects on the local host.

       --dccp-domain D
              specify the domain to use, the default is ipv4. Currently
              ipv4 and ipv6 are supported.

       --dccp-port P
              start DCCP at port P. For N dccp worker processes, ports P to
              P + N - 1 are used.

       --dccp-ops N
              stop dccp stress workers after N bogo operations.

       --dccp-opts [ send | sendmsg | sendmmsg ]
              by default, messages are sent using send(2). This option
              allows one to specify the sending method using send(2),
              sendmsg(2) or sendmmsg(2). Note that sendmmsg is only
              available for Linux systems that support this system call.

       -D N, --dentry N
              start N workers that create and remove directory entries.
              This should create file system meta data activity. The
              directory entry names are suffixed by a gray-code encoded
              number to try to mix up the hashing of the namespace.

       --dentry-ops N
              stop dentry thrash workers after N bogo dentry operations.

       --dentry-order [ forward | reverse | stride | random ]
              specify unlink order of dentries, can be one of forward,
              reverse, stride or random. By default, dentries are unlinked
              in random order. The forward order will unlink them from
              first to last, reverse order will unlink them from last to
              first, stride order will unlink them by stepping around order
              in a quasi-random pattern and random order will randomly
              select one of forward, reverse or stride orders.

       --dentries N
              create N dentries per dentry thrashing loop, default is 2048.

       --dev N
              start N workers that exercise the /dev devices. Each worker
              runs 5 concurrent threads that perform open(2), fstat(2),
              lseek(2), poll(2), fcntl(2), mmap(2), munmap(2), fsync(2) and
              close(2) on each device. Note that watchdog devices are not
              exercised.

       --dev-ops N
              stop dev workers after N bogo device exercising operations.

       --dev-file filename
              specify the device file to exercise, for example, /dev/null.
              By default the stressor will work through all the device
              files it can find, however, this option allows a single
              device file to be exercised.

       --dev-shm N
              start N workers that fallocate large files in /dev/shm and
              then mmap these into memory and touch all the pages. This
              exercises pages being moved to/from the buffer cache. Linux
              only.

       --dev-shm-ops N
              stop after N bogo allocation and mmap /dev/shm operations.

       --dir N
              start N workers that create and remove directories using
              mkdir and rmdir.

       --dir-ops N
              stop directory thrash workers after N bogo directory
              operations.

       --dir-dirs N
              exercise dir on N directories. The default is 8192
              directories; this allows 64 to 65536 directories to be used
              instead.

       --dirdeep N
              start N workers that create a depth-first tree of directories
              to a maximum depth as limited by PATH_MAX or ENAMETOOLONG
              (whichever occurs first). By default, each level of the tree
              contains one directory, but this can be increased to a
              maximum of 10 sub-trees using the --dirdeep-dirs option. To
              stress inode creation, a symlink and a hardlink to a file at
              the root of the tree are created in each level.

       --dirdeep-ops N
              stop directory depth workers after N bogo directory
              operations.

       --dirdeep-dirs N
              create N directories at each tree level. The default is just
              1 but can be increased to a maximum of 10 per level.

       --dirdeep-inodes N
              consume up to N inodes per dirdeep stressor while creating
              directories and links. The value N can be the number of
              inodes or a percentage of the total available free inodes on
              the filesystem being used.

       --dnotify N
              start N workers performing file system activities such as
              making/deleting files/directories, renaming files, etc. to
              exercise the various dnotify events (Linux only).

       --dnotify-ops N
              stop dnotify stress workers after N dnotify bogo operations.

       --dup N
              start N workers that perform dup(2) and then close(2)
              operations on /dev/zero. The maximum number of opens at one
              time is system defined, so the test will run up to this
              maximum, or 65536 open file descriptors, whichever comes
              first.

       --dup-ops N
              stop the dup stress workers after N bogo open operations.

       --dynlib N
              start N workers that dynamically load and unload various
              shared libraries. This exercises memory mapping and dynamic
              code loading and symbol lookups. See dlopen(3) for more
              details of this mechanism.

       --dynlib-ops N
              stop workers after N bogo load/unload cycles.

       --efivar N
              start N workers that exercise the Linux
              /sys/firmware/efi/vars interface by reading the EFI
              variables. This is a Linux only stress test for platforms
              that support the EFI vars interface and requires the
              CAP_SYS_ADMIN capability.

       --efivar-ops N
              stop the efivar stressors after N EFI variable read
              operations.

       --enosys N
              start N workers that exercise non-functional system call
              numbers. This calls a wide range of system call numbers to
              see if it can break a system where these are not wired up
              correctly. It also keeps track of system calls that exist
              (ones that don't return ENOSYS) so that it can focus on
              purely finding and exercising non-functional system calls.
              This stressor exercises system calls from 0 to __NR_syscalls
              + 1024, random system calls constrained within the ranges of
              0 to 2^8, 2^16, 2^24, 2^32, 2^40, 2^48, 2^56 and 2^64 bits,
              high system call numbers and various other bit patterns to
              try to get good wide coverage. To keep the environment clean,
              each system call being tested runs in a child process with
              reduced capabilities.

       --enosys-ops N
              stop after N bogo enosys system call attempts.

       --env N
              start N workers that create numerous large environment
              variables to try to trigger out of memory conditions using
              setenv(3). If ENOMEM occurs then the environment is emptied
              and another memory filling retry occurs. The process is
              restarted if it is killed by the Out Of Memory (OOM) killer.

       --env-ops N
              stop after N bogo setenv/unsetenv attempts.

       --epoll N
              start N workers that perform various related socket stress
              activity using epoll_wait(2) to monitor and handle new
              connections. This involves client/server processes performing
              rapid connect, send/receives and disconnects on the local
              host. Using epoll allows a large number of connections to be
              efficiently handled, however, this can lead to the connection
              table filling up and blocking further socket connections,
              hence impacting on the epoll bogo op stats. For ipv4 and ipv6
              domains, multiple servers are spawned on multiple ports. The
              epoll stressor is for Linux only.

       --epoll-domain D
              specify the domain to use, the default is unix (aka local).
              Currently ipv4, ipv6 and unix are supported.

       --epoll-port P
              start at socket port P. For N epoll worker processes, ports P
              to (P * 4) - 1 are used for ipv4, ipv6 domains and ports P to
              P - 1 are used for the unix domain.

       --epoll-ops N
              stop epoll workers after N bogo operations.

       --eventfd N
              start N parent and child worker processes that read and write
              8 byte event messages between them via the eventfd mechanism
              (Linux only).

       --eventfd-ops N
              stop eventfd workers after N bogo operations.

       --eventfd-nonblock
              enable EFD_NONBLOCK to allow non-blocking on the event file
              descriptor. This will cause reads and writes to return with
              EAGAIN rather than blocking and hence cause a high rate of
              polling I/O.

       --exec N
              start N workers continually forking children that exec
              stress-ng and then exit almost immediately. If a system has
              pthread support then 1 in 4 of the execs will be from inside
              a pthread to exercise exec'ing from inside a pthread context.

       --exec-ops N
              stop exec stress workers after N bogo operations.

       --exec-max P
              create P child processes that exec stress-ng and then wait
              for them to exit per iteration. The default is just 1; higher
              values will create many temporary zombie processes that are
              waiting to be reaped. One can potentially fill up the process
              table using high values for --exec-max and --exec.

       -F N, --fallocate N
              start N workers continually fallocating (preallocating file
              space) and ftruncating (file truncating) temporary files. If
              the file is larger than the free space, fallocate will
              produce an ENOSPC error which is ignored by this stressor.

       --fallocate-bytes N
              allocated file size, the default is 1 GB. One can specify the
              size as % of free space on the file system or in units of
              Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
              g.

       --fallocate-ops N
              stop fallocate stress workers after N bogo fallocate
              operations.

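              For example, to run a single fallocate stressor on a 4 GB
              file:

                 stress-ng --fallocate 1 --fallocate-bytes 4g -t 60s
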
       --fanotify N
              start N workers performing file system activities such as
              creating, opening, writing, reading and unlinking files to
              exercise the fanotify event monitoring interface (Linux
              only). Each stressor runs a child process to generate file
              events and a parent process to read file events using
              fanotify. Has to be run with the CAP_SYS_ADMIN capability.

       --fanotify-ops N
              stop fanotify stress workers after N bogo fanotify events.

       --fault N
              start N workers that generate minor and major page faults.

       --fault-ops N
              stop the page fault workers after N bogo page fault
              operations.

       --fcntl N
              start N workers that perform fcntl(2) calls with various
              commands. The exercised commands (if available) are: F_DUPFD,
              F_DUPFD_CLOEXEC, F_GETFD, F_SETFD, F_GETFL, F_SETFL,
              F_GETOWN, F_SETOWN, F_GETOWN_EX, F_SETOWN_EX, F_GETSIG,
              F_SETSIG, F_GETLK, F_SETLK, F_SETLKW, F_OFD_GETLK,
              F_OFD_SETLK and F_OFD_SETLKW.

       --fcntl-ops N
              stop the fcntl workers after N bogo fcntl operations.

       --fiemap N
              start N workers that each create a file with many randomly
              changing extents and have 4 child processes per worker that
              gather the extent information using the FS_IOC_FIEMAP
              ioctl(2).

       --fiemap-ops N
              stop after N fiemap bogo operations.

       --fiemap-bytes N
              specify the size of the fiemap'd file in bytes. One can
              specify the size as % of free space on the file system or in
              units of Bytes, KBytes, MBytes and GBytes using the suffix b,
              k, m or g. Larger files will contain more extents, causing
              more stress when gathering extent information.

       --fifo N
              start N workers that exercise a named pipe by transmitting 64
              bit integers.

       --fifo-ops N
              stop fifo workers after N bogo pipe write operations.

       --fifo-readers N
              for each worker, create N fifo reader workers that read the
              named pipe using simple blocking reads.

1414 --file-ioctl N
1415 start N workers that exercise various file specific ioctl(2)
              calls. This will attempt to use the FIONBIO, FIOQSIZE,
              FIGETBSZ, FIOCLEX, FIONCLEX, FIOASYNC, FIFREEZE, FITHAW,
              FICLONE, FICLONERANGE, FIONREAD, FIONWRITE and FS_IOC_RESVSP
              ioctls if these are defined.
1420
1421 --file-ioctl-ops N
1422 stop file-ioctl workers after N file ioctl bogo operations.
1423
1424 --filename N
1425 start N workers that exercise file creation using various length
1426 filenames containing a range of allowed filename characters.
              This will try to see if it can exceed the file system's
              allowed filename length as well as test various filename
              lengths between 1 and the maximum allowed by the file
              system.
1430
1431 --filename-ops N
1432 stop filename workers after N bogo filename tests.
1433
1434 --filename-opts opt
1435 use characters in the filename based on option 'opt'. Valid
1436 options are:
1437
1438 Option Description
1439 probe default option, probe the file system for valid
1440 allowed characters in a file name and use these
1441 posix use characters as specified by The Open Group
1442 Base Specifications Issue 7, POSIX.1-2008,
1443 3.278 Portable Filename Character Set
1444 ext use characters allowed by the ext2, ext3, ext4
1445 file systems, namely any 8 bit character apart
1446 from NUL and /
1447
1448 --flock N
1449 start N workers locking on a single file.
1450
1451 --flock-ops N
1452 stop flock stress workers after N bogo flock operations.
1453
1454 -f N, --fork N
1455 start N workers continually forking children that immediately
1456 exit.
1457
1458 --fork-ops N
1459 stop fork stress workers after N bogo operations.
1460
1461 --fork-max P
1462 create P child processes and then wait for them to exit per
1463 iteration. The default is just 1; higher values will create many
1464 temporary zombie processes that are waiting to be reaped. One
1465 can potentially fill up the process table using high values for
1466 --fork-max and --fork.
1467
1468 --fp-error N
1469 start N workers that generate floating point exceptions. Compu‐
1470 tations are performed to force and check for the FE_DIVBYZERO,
1471 FE_INEXACT, FE_INVALID, FE_OVERFLOW and FE_UNDERFLOW exceptions.
1472 EDOM and ERANGE errors are also checked.
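
              For illustration, a minimal sketch of forcing and testing
              one such exception with the C99 fenv interface (link with
              -lm if required; volatile guards against the compiler
              folding the division away):

                  #include <stdio.h>
                  #include <fenv.h>

                  int main(void)
                  {
                      volatile double zero = 0.0, one = 1.0, r;

                      feclearexcept(FE_ALL_EXCEPT);
                      r = one / zero;           /* raises FE_DIVBYZERO */
                      if (fetestexcept(FE_DIVBYZERO))
                          printf("FE_DIVBYZERO raised (r = %f)\n", r);
                      return 0;
                  }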
1473
1474 --fp-error-ops N
1475 stop after N bogo floating point exceptions.
1476
1477 --fstat N
1478 start N workers fstat'ing files in a directory (default is
1479 /dev).
1480
1481 --fstat-ops N
1482 stop fstat stress workers after N bogo fstat operations.
1483
1484 --fstat-dir directory
1485 specify the directory to fstat to override the default of /dev.
1486 All the files in the directory will be fstat'd repeatedly.
1487
1488 --full N
1489 start N workers that exercise /dev/full. This attempts to write
1490 to the device (which should always get error ENOSPC), to read
1491 from the device (which should always return a buffer of zeros)
1492 and to seek randomly on the device (which should always suc‐
1493 ceed). (Linux only).
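
              For illustration, a minimal sketch of the expected
              /dev/full semantics:

                  #include <stdio.h>
                  #include <errno.h>
                  #include <fcntl.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      char buf[4096] = { 1 };
                      int fd = open("/dev/full", O_RDWR);

                      if (write(fd, buf, sizeof(buf)) < 0 &&
                          errno == ENOSPC)
                          printf("write failed with ENOSPC\n");
                      if (read(fd, buf, sizeof(buf)) > 0)
                          printf("read zeros, buf[0] = %d\n", buf[0]);
                      close(fd);
                      return 0;
                  }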
1494
1495 --full-ops N
1496 stop the stress full workers after N bogo I/O operations.
1497
1498 --funccall N
1499 start N workers that call functions of 1 through to 9 arguments.
1500 By default functions with uint64_t arguments are called, how‐
1501 ever, this can be changed using the --funccall-method option.
1502
1503 --funccall-ops N
1504 stop the funccall workers after N bogo function call operations.
1505 Each bogo operation is 1000 calls of functions of 1 through to 9
1506 arguments of the chosen argument type.
1507
1508 --funccall-method method
1509 specify the method of funccall argument type to be used. The
1510 default is uint64_t but can be one of uint8 uint16 uint32 uint64
1511 uint128 float double longdouble float80 float128 decimal32 deci‐
1512 mal64 and decimal128. Note that some of these types are only
1513 available with specific architectures and compiler versions.
1514
1515 --funcret N
1516 start N workers that pass and return by value various small to
1517 large data types.
1518
1519 --funcret-ops N
1520 stop the funcret workers after N bogo function call operations.
1521
1522 --funcret-method method
1523 specify the method of funcret argument type to be used. The
1524 default is uint64_t but can be one of uint8 uint16 uint32 uint64
1525 uint128 float double longdouble float80 float128 decimal32 deci‐
1526 mal64 decimal128 uint8x32 uint8x128 uint64x128.
1527
1528 --futex N
1529 start N workers that rapidly exercise the futex system call.
1530 Each worker has two processes, a futex waiter and a futex waker.
1531 The waiter waits with a very small timeout to stress the timeout
1532 and rapid polled futex waiting. This is a Linux specific stress
1533 option.
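
              For illustration, a minimal single-process sketch of the
              raw futex(2) wait/wake calls being stressed (there is no
              glibc wrapper, so syscall(2) is used):

                  #include <stdint.h>
                  #include <time.h>
                  #include <unistd.h>
                  #include <sys/syscall.h>
                  #include <linux/futex.h>

                  static uint32_t fw;

                  int main(void)
                  {
                      struct timespec t = { .tv_sec = 0,
                                            .tv_nsec = 10000 };

                      /* waiter: wait while fw == 0, tiny timeout */
                      syscall(SYS_futex, &fw, FUTEX_WAIT, 0, &t,
                              NULL, 0);
                      /* waker: wake up to 1 waiter on fw */
                      syscall(SYS_futex, &fw, FUTEX_WAKE, 1, NULL,
                              NULL, 0);
                      return 0;
                  }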
1534
1535 --futex-ops N
1536 stop futex workers after N bogo successful futex wait opera‐
1537 tions.
1538
1539 --get N
1540 start N workers that call system calls that fetch data from the
1541 kernel, currently these are: getpid, getppid, getcwd, getgid,
1542 getegid, getuid, getgroups, getpgrp, getpgid, getpriority,
1543 getresgid, getresuid, getrlimit, prlimit, getrusage, getsid,
1544 gettid, getcpu, gettimeofday, uname, adjtimex, sysfs. Some of
1545 these system calls are OS specific.
1546
1547 --get-ops N
1548 stop get workers after N bogo get operations.
1549
1550 --getdent N
1551 start N workers that recursively read directories /proc, /dev/,
1552 /tmp, /sys and /run using getdents and getdents64 (Linux only).
1553
1554 --getdent-ops N
              stop getdent workers after N getdent bogo operations.
1556
1557 --getrandom N
1558 start N workers that get 8192 random bytes from the /dev/urandom
1559 pool using the getrandom(2) system call (Linux) or getentropy(2)
1560 (OpenBSD).
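
              For illustration, a minimal sketch of the Linux
              getrandom(2) call (glibc 2.25 or later provides the
              wrapper):

                  #include <stdio.h>
                  #include <sys/types.h>
                  #include <sys/random.h>

                  int main(void)
                  {
                      unsigned char buf[8192];
                      ssize_t n = getrandom(buf, sizeof(buf), 0);

                      printf("read %zd random bytes\n", n);
                      return 0;
                  }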
1561
1562 --getrandom-ops N
1563 stop getrandom workers after N bogo get operations.
1564
1565 --handle N
1566 start N workers that exercise the name_to_handle_at(2) and
1567 open_by_handle_at(2) system calls. (Linux only).
1568
1569 --handle-ops N
1570 stop after N handle bogo operations.
1571
1572 -d N, --hdd N
1573 start N workers continually writing, reading and removing tempo‐
1574 rary files. The default mode is to stress test sequential writes
              and reads. With the --aggressive option enabled and without
              any --hdd-opts options, the hdd stressor will work through
              all the --hdd-opts options one by one to cover a range of
              I/O options.
1578
1579 --hdd-bytes N
1580 write N bytes for each hdd process, the default is 1 GB. One can
1581 specify the size as % of free space on the file system or in
1582 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1583 m or g.
1584
1585 --hdd-opts list
1586 specify various stress test options as a comma separated list.
1587 Options are as follows:
1588
1589 Option Description
1590 direct try to minimize cache effects of the I/O. File
1591 I/O writes are performed directly from user
1592 space buffers and synchronous transfer is also
1593 attempted. To guarantee synchronous I/O, also
1594 use the sync option.
1595 dsync ensure output has been transferred to underly‐
1596 ing hardware and file metadata has been updated
1597 (using the O_DSYNC open flag). This is equiva‐
1598 lent to each write(2) being followed by a call
1599 to fdatasync(2). See also the fdatasync option.
1600 fadv-dontneed advise kernel to expect the data will not be
1601 accessed in the near future.
1602 fadv-noreuse advise kernel to expect the data to be accessed
1603 only once.
              fadv-normal   advise kernel there is no explicit access pat‐
1605 tern for the data. This is the default advice
1606 assumption.
1607 fadv-rnd advise kernel to expect random access patterns
1608 for the data.
1609 fadv-seq advise kernel to expect sequential access pat‐
1610 terns for the data.
1611 fadv-willneed advise kernel to expect the data to be accessed
1612 in the near future.
1613 fsync flush all modified in-core data after each
1614 write to the output device using an explicit
1615 fsync(2) call.
1620 fdatasync similar to fsync, but do not flush the modified
1621 metadata unless metadata is required for later
1622 data reads to be handled correctly. This uses
1623 an explicit fdatasync(2) call.
1624 iovec use readv/writev multiple buffer I/Os rather
1625 than read/write. Instead of 1 read/write opera‐
1626 tion, the buffer is broken into an iovec of 16
1627 buffers.
1628 noatime do not update the file last access timestamp,
1629 this can reduce metadata writes.
1630 sync ensure output has been transferred to underly‐
1631 ing hardware (using the O_SYNC open flag). This
                            is equivalent to each write(2) being followed
1633 by a call to fsync(2). See also the fsync
1634 option.
1635 rd-rnd read data randomly. By default, written data is
1636 not read back, however, this option will force
1637 it to be read back randomly.
1638 rd-seq read data sequentially. By default, written
1639 data is not read back, however, this option
1640 will force it to be read back sequentially.
1641 syncfs write all buffered modifications of file meta‐
1642 data and data on the filesystem that contains
1643 the hdd worker files.
1644 utimes force update of file timestamp which may
1645 increase metadata writes.
1646 wr-rnd write data randomly. The wr-seq option cannot
1647 be used at the same time.
1648 wr-seq write data sequentially. This is the default if
1649 no write modes are specified.
1650
1651 Note that some of these options are mutually exclusive, for example,
1652 there can be only one method of writing or reading. Also, fadvise
1653 flags may be mutually exclusive, for example fadv-willneed cannot be
1654 used with fadv-dontneed.
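
              For illustration, a minimal sketch of how a combination
              such as "direct,dsync,fadv-seq" maps onto open(2) flags
              and posix_fadvise(2) (the file name is hypothetical):

                  #define _GNU_SOURCE            /* O_DIRECT */
                  #include <fcntl.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      int fd = open("testfile",
                          O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0600);

                      if (fd >= 0) {
                          posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
                          close(fd);
                      }
                      return 0;
                  }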
1655
1656 --hdd-ops N
1657 stop hdd stress workers after N bogo operations.
1658
1659 --hdd-write-size N
1660 specify size of each write in bytes. Size can be from 1 byte to
1661 4MB.
1662
1663 --heapsort N
1664 start N workers that sort 32 bit integers using the BSD heap‐
1665 sort.
1666
1667 --heapsort-ops N
1668 stop heapsort stress workers after N bogo heapsorts.
1669
1670 --heapsort-size N
1671 specify number of 32 bit integers to sort, default is 262144
1672 (256 × 1024).
1673
1674 --hrtimers N
              start N workers that exercise high resolution timers at a high
1676 frequency. Each stressor starts 32 processes that run with ran‐
1677 dom timer intervals of 0..499999 nanoseconds. Running this
1678 stressor with appropriate privilege will run these with the
1679 SCHED_RR policy.
1680
1681 --hrtimers-ops N
              stop hrtimers stressors after N timer event bogo operations.
1683
1684 --hsearch N
              start N workers that search an 80% full hash table using
1686 hsearch(3). By default, there are 8192 elements inserted into
1687 the hash table. This is a useful method to exercise access of
1688 memory and processor cache.
1689
1690 --hsearch-ops N
1691 stop the hsearch workers after N bogo hsearch operations are
1692 completed.
1693
1694 --hsearch-size N
1695 specify the number of hash entries to be inserted into the hash
1696 table. Size can be from 1K to 4M.
1697
1698 --icache N
1699 start N workers that stress the instruction cache by forcing
1700 instruction cache reloads. This is achieved by modifying an
              instruction cache line, causing the processor to reload it
              when a function inside it is called. Currently only verified
              and
1703 enabled for Intel x86 CPUs.
1704
1705 --icache-ops N
1706 stop the icache workers after N bogo icache operations are com‐
1707 pleted.
1708
1709 --icmp-flood N
              start N workers that flood localhost with randomly sized
              ICMP ping packets. This stressor requires the CAP_NET_RAW
              capability.
1712
1713 --icmp-flood-ops N
1714 stop icmp flood workers after N ICMP ping packets have been
1715 sent.
1716
1717 --idle-scan N
1718 start N workers that scan the idle page bitmap across a range of
1719 physical pages. This sets and checks for idle pages via the idle
1720 page tracking interface /sys/kernel/mm/page_idle/bitmap. This
1721 is for Linux only.
1722
1723 --idle-scan-ops N
1724 stop after N bogo page scan operations. Currently one bogo page
1725 scan operation is equivalent to setting and checking 64 physical
1726 pages.
1727
1728 --idle-page N
              start N workers that walk through every page exercising the
1730 Linux /sys/kernel/mm/page_idle/bitmap interface. Requires
1731 CAP_SYS_RESOURCE capability.
1732
1733 --idle-page-ops N
1734 stop after N bogo idle page operations.
1735
1736 --inode-flags N
1737 start N workers that exercise inode flags using the FS_IOC_GET‐
1738 FLAGS and FS_IOC_SETFLAGS ioctl(2). This attempts to apply all
1739 the available inode flags onto a directory and file even if the
1740 underlying file system may not support these flags (errors are
1741 just ignored). Each worker runs 4 threads that exercise the
1742 flags on the same directory and file to try to force races. This
1743 is a Linux only stressor, see ioctl_iflags(2) for more details.
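
              For illustration, a minimal sketch of the get/set flag
              ioctls (see ioctl_iflags(2)); errors are ignored, just as
              the stressor does:

                  #include <fcntl.h>
                  #include <unistd.h>
                  #include <sys/ioctl.h>
                  #include <linux/fs.h>

                  int main(int argc, char **argv)
                  {
                      int attr, fd = open(argv[1], O_RDONLY);

                      if (ioctl(fd, FS_IOC_GETFLAGS, &attr) == 0) {
                          attr |= FS_NOATIME_FL;   /* try one flag */
                          ioctl(fd, FS_IOC_SETFLAGS, &attr);
                      }
                      close(fd);
                      return 0;
                  }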
1744
1745 --inode-flags-ops N
1746 stop the inode-flags workers after N ioctl flag setting
1747 attempts.
1748
1749 --inotify N
1750 start N workers performing file system activities such as mak‐
1751 ing/deleting files/directories, moving files, etc. to stress
1752 exercise the various inotify events (Linux only).
1753
1754 --inotify-ops N
1755 stop inotify stress workers after N inotify bogo operations.
1756
1757 -i N, --io N
1758 start N workers continuously calling sync(2) to commit buffer
1759 cache to disk. This can be used in conjunction with the --hdd
1760 options.
1761
1762 --io-ops N
1763 stop io stress workers after N bogo operations.
1764
1765 --iomix N
1766 start N workers that perform a mix of sequential, random and
1767 memory mapped read/write operations as well as forced sync'ing
1768 and (if run as root) cache dropping. Multiple child processes
1769 are spawned to all share a single file and perform different I/O
1770 operations on the same file.
1771
1772 --iomix-bytes N
1773 write N bytes for each iomix worker process, the default is 1
1774 GB. One can specify the size as % of free space on the file sys‐
1775 tem or in units of Bytes, KBytes, MBytes and GBytes using the
1776 suffix b, k, m or g.
1777
1778 --iomix-ops N
1779 stop iomix stress workers after N bogo iomix I/O operations.
1780
1781 --ioport N
              start N workers that perform bursts of 16 reads and 16 writes of
1783 ioport 0x80 (x86 Linux systems only). I/O performed on x86
1784 platforms on port 0x80 will cause delays on the CPU performing
1785 the I/O.
1786
1787 --ioport-ops N
1788 stop the ioport stressors after N bogo I/O operations
1789
1790 --ioport-opts [ in | out | inout ]
              specify whether port reads (in), port writes (out) or both
              reads and writes (inout) are to be performed. The default is
              both in and out.
1793
1794 --ioprio N
1795 start N workers that exercise the ioprio_get(2) and
1796 ioprio_set(2) system calls (Linux only).
1797
1798 --ioprio-ops N
1799 stop after N io priority bogo operations.
1800
1801 --io-uring N
1802 start N workers that perform iovec write and read I/O operations
              using the Linux io-uring interface. On each bogo-loop 1024 ×
              512 byte writes and 1024 × 512 byte reads are performed on a
              temporary file.
1805
       --io-uring-ops N
              stop after N rounds of writes and reads.
1808
1809 --ipsec-mb N
1810 start N workers that perform cryptographic processing using the
1811 highly optimized Intel Multi-Buffer Crypto for IPsec library.
              Depending on the features available, SSE, AVX, AVX2 and AVX512
1813 CPU features will be used on data encrypted by SHA, DES, CMAC,
1814 CTR, HMAC MD5, HMAC SHA1 and HMAC SHA512 cryptographic routines.
1815 This is only available for x86-64 modern Intel CPUs.
1816
1817 --ipsec-mb-ops N
1818 stop after N rounds of processing of data using the crypto‐
1819 graphic routines.
1820
1821 --ipsec-mb-feature [ sse | avx | avx2 | avx512 ]
1822 Just use the specified processor CPU feature. By default, all
1823 the available features for the CPU are exercised.
1824
1825 --itimer N
1826 start N workers that exercise the system interval timers. This
1827 sets up an ITIMER_PROF itimer that generates a SIGPROF signal.
1828 The default frequency for the itimer is 1 MHz, however, the
              Linux kernel will set this to be no more than the jiffy setting,
1830 hence high frequency SIGPROF signals are not normally possible.
1831 A busy loop spins on getitimer(2) calls to consume CPU and hence
1832 decrement the itimer based on amount of time spent in CPU and
1833 system time.
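
              For illustration, a minimal sketch of an ITIMER_PROF timer
              with a busy loop spinning on getitimer(2):

                  #include <stdio.h>
                  #include <signal.h>
                  #include <sys/time.h>

                  static volatile sig_atomic_t ticks;

                  static void handler(int sig)
                  {
                      (void)sig;
                      ticks++;
                  }

                  int main(void)
                  {
                      struct itimerval it = {
                          .it_interval = { 0, 1000 },  /* 1000 Hz */
                          .it_value    = { 0, 1000 },
                      };

                      signal(SIGPROF, handler);
                      setitimer(ITIMER_PROF, &it, NULL);
                      while (ticks < 100)          /* consume CPU time */
                          getitimer(ITIMER_PROF, &it);
                      printf("%d SIGPROF signals\n", (int)ticks);
                      return 0;
                  }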
1834
1835 --itimer-ops N
1836 stop itimer stress workers after N bogo itimer SIGPROF signals.
1837
1838 --itimer-freq F
1839 run itimer at F Hz; range from 1 to 1000000 Hz. Normally the
1840 highest frequency is limited by the number of jiffy ticks per
1841 second, so running above 1000 Hz is difficult to attain in prac‐
1842 tice.
1843
1844 --itimer-rand
1845 select an interval timer frequency based around the interval
1846 timer frequency +/- 12.5% random jitter. This tries to force
1847 more variability in the timer interval to make the scheduling
1848 less predictable.
1849
1850 --judy N
1851 start N workers that insert, search and delete 32 bit integers
1852 in a Judy array using a predictable yet sparse array index. By
1853 default, there are 131072 integers used in the Judy array. This
1854 is a useful method to exercise random access of memory and pro‐
1855 cessor cache.
1856
1857 --judy-ops N
1858 stop the judy workers after N bogo judy operations are com‐
1859 pleted.
1860
1861 --judy-size N
1862 specify the size (number of 32 bit integers) in the Judy array
1863 to exercise. Size can be from 1K to 4M 32 bit integers.
1864
1865 --kcmp N
1866 start N workers that use kcmp(2) to compare parent and child
1867 processes to determine if they share kernel resources. Supported
1868 only for Linux and requires CAP_SYS_PTRACE capability.
1869
1870 --kcmp-ops N
1871 stop kcmp workers after N bogo kcmp operations.
1872
1873 --key N
1874 start N workers that create and manipulate keys using add_key(2)
              and keyctl(2). As many keys are created as the per user limit
1876 allows and then the following keyctl commands are exercised on
1877 each key: KEYCTL_SET_TIMEOUT, KEYCTL_DESCRIBE, KEYCTL_UPDATE,
1878 KEYCTL_READ, KEYCTL_CLEAR and KEYCTL_INVALIDATE.
1879
1880 --key-ops N
1881 stop key workers after N bogo key operations.
1882
1883 --kill N
1884 start N workers sending SIGUSR1 kill signals to a SIG_IGN signal
1885 handler. Most of the process time will end up in kernel space.
1886
1887 --kill-ops N
1888 stop kill workers after N bogo kill operations.
1889
1890 --klog N
1891 start N workers exercising the kernel syslog(2) system call.
1892 This will attempt to read the kernel log with various sized read
1893 buffers. Linux only.
1894
1895 --klog-ops N
1896 stop klog workers after N syslog operations.
1897
1898 --lease N
1899 start N workers locking, unlocking and breaking leases via the
1900 fcntl(2) F_SETLEASE operation. The parent processes continually
1901 lock and unlock a lease on a file while a user selectable number
1902 of child processes open the file with a non-blocking open to
1903 generate SIGIO lease breaking notifications to the parent. This
1904 stressor is only available if F_SETLEASE, F_WRLCK and F_UNLCK
1905 support is provided by fcntl(2).
1906
1907 --lease-ops N
1908 stop lease workers after N bogo operations.
1909
1910 --lease-breakers N
1911 start N lease breaker child processes per lease worker. Nor‐
1912 mally one child is plenty to force many SIGIO lease breaking
1913 notification signals to the parent, however, this option allows
1914 one to specify more child processes if required.
1915
1916 --link N
1917 start N workers creating and removing hardlinks.
1918
1919 --link-ops N
1920 stop link stress workers after N bogo operations.
1921
1922 --lockbus N
1923 start N workers that rapidly lock and increment 64 bytes of ran‐
1924 domly chosen memory from a 16MB mmap'd region (Intel x86 and ARM
1925 CPUs only). This will cause cacheline misses and stalling of
1926 CPUs.
1927
1928 --lockbus-ops N
1929 stop lockbus workers after N bogo operations.
1930
1931 --locka N
1932 start N workers that randomly lock and unlock regions of a file
1933 using the POSIX advisory locking mechanism (see fcntl(2),
1934 F_SETLK, F_GETLK). Each worker creates a 1024 KB file and
1935 attempts to hold a maximum of 1024 concurrent locks with a child
1936 process that also tries to hold 1024 concurrent locks. Old locks
              are unlocked on a first-in, first-out basis.
1938
1939 --locka-ops N
1940 stop locka workers after N bogo locka operations.
1941
1942 --lockf N
1943 start N workers that randomly lock and unlock regions of a file
1944 using the POSIX lockf(3) locking mechanism. Each worker creates
1945 a 64 KB file and attempts to hold a maximum of 1024 concurrent
1946 locks with a child process that also tries to hold 1024 concur‐
              rent locks. Old locks are unlocked on a first-in, first-out
1948 basis.
1949
1950 --lockf-ops N
1951 stop lockf workers after N bogo lockf operations.
1952
1953 --lockf-nonblock
1954 instead of using blocking F_LOCK lockf(3) commands, use non-
1955 blocking F_TLOCK commands and re-try if the lock failed. This
1956 creates extra system call overhead and CPU utilisation as the
1957 number of lockf workers increases and should increase locking
1958 contention.
1959
1960 --lockofd N
1961 start N workers that randomly lock and unlock regions of a file
1962 using the Linux open file description locks (see fcntl(2),
1963 F_OFD_SETLK, F_OFD_GETLK). Each worker creates a 1024 KB file
1964 and attempts to hold a maximum of 1024 concurrent locks with a
1965 child process that also tries to hold 1024 concurrent locks. Old
              locks are unlocked on a first-in, first-out basis.
1967
1968 --lockofd-ops N
1969 stop lockofd workers after N bogo lockofd operations.
1970
1971 --longjmp N
1972 start N workers that exercise setjmp(3)/longjmp(3) by rapid
1973 looping on longjmp calls.
1974
1975 --longjmp-ops N
1976 stop longjmp stress workers after N bogo longjmp operations (1
1977 bogo op is 1000 longjmp calls).
1978
1979 --loop N
1980 start N workers that exercise the loopback control device. This
1981 creates 2MB loopback devices, expands them to 4MB, performs some
1982 loopback status information get and set operations and then
              destroys them. Linux only and requires CAP_SYS_ADMIN capability.
1984
1985 --loop-ops N
1986 stop after N bogo loopback creation/deletion operations.
1987
1988 --lsearch N
              start N workers that linearly search an unsorted array of 32 bit
1990 integers using lsearch(3). By default, there are 8192 elements
1991 in the array. This is a useful method to exercise sequential
1992 access of memory and processor cache.
1993
1994 --lsearch-ops N
1995 stop the lsearch workers after N bogo lsearch operations are
1996 completed.
1997
1998 --lsearch-size N
1999 specify the size (number of 32 bit integers) in the array to
2000 lsearch. Size can be from 1K to 4M.
2001
2002 --madvise N
2003 start N workers that apply random madvise(2) advise settings on
2004 pages of a 4MB file backed shared memory mapping.
2005
2006 --madvise-ops N
2007 stop madvise stressors after N bogo madvise operations.
2008
2009 --malloc N
2010 start N workers continuously calling malloc(3), calloc(3), real‐
2011 loc(3) and free(3). By default, up to 65536 allocations can be
2012 active at any point, but this can be altered with the --mal‐
2013 loc-max option. Allocation, reallocation and freeing are chosen
              at random; 50% of the time memory is allocated (via malloc,
2015 calloc or realloc) and 50% of the time allocations are free'd.
2016 Allocation sizes are also random, with the maximum allocation
2017 size controlled by the --malloc-bytes option, the default size
2018 being 64K. The worker is re-started if it is killed by the out
2019 of memory (OOM) killer.
2020
2021 --malloc-bytes N
2022 maximum per allocation/reallocation size. Allocations are ran‐
2023 domly selected from 1 to N bytes. One can specify the size as %
2024 of total available memory or in units of Bytes, KBytes, MBytes
2025 and GBytes using the suffix b, k, m or g. Large allocation
2026 sizes cause the memory allocator to use mmap(2) rather than
2027 expanding the heap using brk(2).
2028
2029 --malloc-max N
2030 maximum number of active allocations allowed. Allocations are
              chosen at random and placed in an allocation slot. Because
              there is roughly a 50%/50% split between allocation and
              freeing, typically half of
2033 the allocation slots are in use at any one time.
2034
2035 --malloc-ops N
              stop after N malloc bogo operations. One bogo operation relates
2037 to a successful malloc(3), calloc(3) or realloc(3).
2038
2039 --malloc-pthreads N
2040 specify number of malloc stressing concurrent pthreads to run.
2041 The default is 0 (just one main process, no pthreads). This
2042 option will do nothing if pthreads are not supported.
2043
2044 --malloc-thresh N
2045 specify the threshold where malloc uses mmap(2) instead of
2046 sbrk(2) to allocate more memory. This is only available on sys‐
2047 tems that provide the GNU C mallopt(3) tuning function.
2048
2049 --matrix N
2050 start N workers that perform various matrix operations on float‐
2051 ing point values. Testing on 64 bit x86 hardware shows that this
2052 provides a good mix of memory, cache and floating point opera‐
2053 tions and is an excellent way to make a CPU run hot.
2054
2055 By default, this will exercise all the matrix stress methods one
2056 by one. One can specify a specific matrix stress method with
2057 the --matrix-method option.
2058
2059 --matrix-ops N
2060 stop matrix stress workers after N bogo operations.
2061
2062 --matrix-method method
2063 specify a matrix stress method. Available matrix stress methods
2064 are described as follows:
2065
2066 Method Description
2067 all iterate over all the below matrix stress meth‐
2068 ods
2069 add add two N × N matrices
2070 copy copy one N × N matrix to another
2071 div divide an N × N matrix by a scalar
2072 frobenius Frobenius product of two N × N matrices
2073 hadamard Hadamard product of two N × N matrices
2074 identity create an N × N identity matrix
2075 mean arithmetic mean of two N × N matrices
2076 mult multiply an N × N matrix by a scalar
2077 negate negate an N × N matrix
2078 prod product of two N × N matrices
2079 sub subtract one N × N matrix from another N × N
2080 matrix
2081 square multiply an N × N matrix by itself
2082 trans transpose an N × N matrix
2083 zero zero an N × N matrix
2084
2085 --matrix-size N
2086 specify the N × N size of the matrices. Smaller values result
              in a floating point compute throughput bound stressor, whereas
2088 large values result in a cache and/or memory bandwidth bound
2089 stressor.
2090
2091 --matrix-yx
2092 perform matrix operations in order y by x rather than the
2093 default x by y. This is suboptimal ordering compared to the
2094 default and will perform more data cache stalls.
2095
2096 --matrix-3d N
2097 start N workers that perform various 3D matrix operations on
2098 floating point values. Testing on 64 bit x86 hardware shows that
2099 this provides a good mix of memory, cache and floating point
2100 operations and is an excellent way to make a CPU run hot.
2101
2102 By default, this will exercise all the 3D matrix stress methods
2103 one by one. One can specify a specific 3D matrix stress method
2104 with the --matrix-3d-method option.
2105
2106 --matrix-3d-ops N
2107 stop the 3D matrix stress workers after N bogo operations.
2108
2109 --matrix-3d-method method
2110 specify a 3D matrix stress method. Available 3D matrix stress
2111 methods are described as follows:
2112
2113 Method Description
2114 all iterate over all the below matrix stress meth‐
2115 ods
2116 add add two N × N × N matrices
2117 copy copy one N × N × N matrix to another
2118 div divide an N × N × N matrix by a scalar
2119 frobenius Frobenius product of two N × N × N matrices
2120 hadamard Hadamard product of two N × N × N matrices
2121 identity create an N × N × N identity matrix
2122 mean arithmetic mean of two N × N × N matrices
2123 mult multiply an N × N × N matrix by a scalar
2124 negate negate an N × N × N matrix
2125 sub subtract one N × N × N matrix from another N ×
2126 N × N matrix
2127 trans transpose an N × N × N matrix
2128 zero zero an N × N × N matrix
2129
2130 --matrix-3d-size N
2131 specify the N × N × N size of the matrices. Smaller values
2132 result in a floating point compute throughput bound stressor,
              whereas large values result in a cache and/or memory bandwidth
2134 bound stressor.
2135
2136 --matrix-3d-zyx
2137 perform matrix operations in order z by y by x rather than the
2138 default x by y by z. This is suboptimal ordering compared to the
2139 default and will perform more data cache stalls.
2140
2141 --mcontend N
2142 start N workers that produce memory contention read/write pat‐
2143 terns. Each stressor runs with 5 threads that read and write to
2144 two different mappings of the same underlying physical page.
2145 Various caching operations are also exercised to cause sub-opti‐
2146 mal memory access patterns. The threads also randomly change
2147 CPU affinity to exercise CPU and memory migration stress.
2148
2149 --mcontend-ops N
2150 stop mcontend stressors after N bogo read/write operations.
2151
2152 --membarrier N
2153 start N workers that exercise the membarrier system call (Linux
2154 only).
2155
2156 --membarrier-ops N
2157 stop membarrier stress workers after N bogo membarrier opera‐
2158 tions.
2159
2160 --memcpy N
2161 start N workers that copy 2MB of data from a shared region to a
2162 buffer using memcpy(3) and then move the data in the buffer with
2163 memmove(3) with 3 different alignments. This will exercise pro‐
2164 cessor cache and system memory.
2165
2166 --memcpy-ops N
2167 stop memcpy stress workers after N bogo memcpy operations.
2168
2169 --memcpy-method [ all | libc | builtin | naive ]
2170 specify a memcpy copying method. Available memcpy methods are
2171 described as follows:
2172
2173
2174 Method Description
2175 all use libc, builtin and naive methods
2176 libc use libc memcpy and memmove functions, this is
2177 the default
2178 builtin use the compiler built in optimized memcpy and
2179 memmove functions
2180 naive use unoptimized naive byte by byte copying and
2181 memory moving
2182
2183 --memfd N
2184 start N workers that create allocations of 1024 pages using
2185 memfd_create(2) and ftruncate(2) for allocation and mmap(2) to
2186 map the allocation into the process address space. (Linux
2187 only).
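
              For illustration, a minimal sketch of one memfd
              allocate/map cycle (the name "stress-memfd" is
              illustrative; memfd_create() needs glibc 2.27 or later):

                  #define _GNU_SOURCE          /* memfd_create() */
                  #include <unistd.h>
                  #include <sys/mman.h>

                  int main(void)
                  {
                      size_t sz = 1024 * (size_t)sysconf(_SC_PAGESIZE);
                      int fd = memfd_create("stress-memfd", 0);
                      void *p;

                      ftruncate(fd, sz);   /* size the anonymous file */
                      p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                      if (p != MAP_FAILED)
                          munmap(p, sz);
                      close(fd);
                      return 0;
                  }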
2188
2189 --memfd-bytes N
2190 allocate N bytes per memfd stress worker, the default is 256MB.
              One can specify the size as % of total available memory or in
2192 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2193 m or g.
2194
2195 --memfd-fds N
2196 create N memfd file descriptors, the default is 256. One can
              select 8 to 4096 memfd file descriptors with this option.
2198
2199 --memfd-ops N
              stop after N memfd_create(2) bogo operations.
2201
2202 --memhotplug N
2203 start N workers that offline and online memory hotplug regions.
2204 Linux only and requires CAP_SYS_ADMIN capabilities.
2205
2206 --memhotplug-ops N
2207 stop memhotplug stressors after N memory offline and online bogo
2208 operations.
2209
2210 --memrate N
2211 start N workers that exercise a buffer with 64, 32, 16 and 8 bit
2212 reads and writes. This memory stressor allows one to also spec‐
2213 ify the maximum read and write rates. The stressors will run at
2214 maximum speed if no read or write rates are specified.
2215
2216 --memrate-ops N
2217 stop after N bogo memrate operations.
2218
2219 --memrate-bytes N
2220 specify the size of the memory buffer being exercised. The
2221 default size is 256MB. One can specify the size in units of
2222 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
2223
2224 --memrate-rd-mbs N
2225 specify the maximum allowed read rate in MB/sec. The actual read
2226 rate is dependent on scheduling jitter and memory accesses from
2227 other running processes.
2228
2229 --memrate-wr-mbs N
              specify the maximum allowed write rate in MB/sec. The actual
2231 write rate is dependent on scheduling jitter and memory accesses
2232 from other running processes.
2233
2234 --memthrash N
2235 start N workers that thrash and exercise a 16MB buffer in vari‐
2236 ous ways to try and trip thermal overrun. Each stressor will
2237 start 1 or more threads. The number of threads is chosen so
2238 that there will be at least 1 thread per CPU. Note that the
2239 optimal choice for N is a value that divides into the number of
2240 CPUs.
2241
2242 --memthrash-ops N
2243 stop after N memthrash bogo operations.
2244
2245 --memthrash-method method
2246 specify a memthrash stress method. Available memthrash stress
2247 methods are described as follows:
2248
2249 Method Description
2250 all iterate over all the below memthrash methods
2251 chunk1 memset 1 byte chunks of random data into random
2252 locations
2253
2254 chunk8 memset 8 byte chunks of random data into random
2255 locations
2256 chunk64 memset 64 byte chunks of random data into ran‐
2257 dom locations
2258 chunk256 memset 256 byte chunks of random data into ran‐
2259 dom locations
2260 chunkpage memset page size chunks of random data into
2261 random locations
2262 flip flip (invert) all bits in random locations
2263 flush flush cache line in random locations
2264 lock lock randomly choosing locations (Intel x86 and
2265 ARM CPUs only)
2266 matrix treat memory as a 2 × 2 matrix and swap random
2267 elements
2268 memmove copy all the data in buffer to the next memory
2269 location
2270 memset memset the memory with random data
2271 mfence stores with write serialization
2272 prefetch prefetch data at random memory locations
2273 random randomly run any of the memthrash methods
2274 except for 'random' and 'all'
2275 spinread spin loop read the same random location 2^19
2276 times
2277 spinwrite spin loop write the same random location 2^19
2278 times
2279 swap step through memory swapping bytes in steps of
2280 65 and 129 byte strides
2281
2282 --mergesort N
2283 start N workers that sort 32 bit integers using the BSD merge‐
2284 sort.
2285
2286 --mergesort-ops N
2287 stop mergesort stress workers after N bogo mergesorts.
2288
2289 --mergesort-size N
2290 specify number of 32 bit integers to sort, default is 262144
2291 (256 × 1024).
2292
2293 --mincore N
2294 start N workers that walk through all of memory 1 page at a time
              checking if the page is mapped and resident in memory using
2296 mincore(2). It also maps and unmaps a page to check if the page
2297 is mapped or not using mincore(2).
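
              For illustration, a minimal sketch of a single mincore(2)
              residency check:

                  #define _DEFAULT_SOURCE      /* mincore() */
                  #include <stdio.h>
                  #include <unistd.h>
                  #include <sys/mman.h>

                  int main(void)
                  {
                      size_t page = (size_t)sysconf(_SC_PAGESIZE);
                      unsigned char vec;
                      char *p = mmap(NULL, page,
                          PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                      *p = 1;              /* touch page: now resident */
                      if (mincore(p, page, &vec) == 0)
                          printf("resident: %s\n",
                              (vec & 1) ? "yes" : "no");
                      munmap(p, page);
                      return 0;
                  }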
2298
2299 --mincore-ops N
              stop after N mincore bogo operations. One mincore bogo op is
              equivalent to 300 mincore(2) calls.

       --mincore-random
              instead of walking through pages sequentially, select pages
              at random. The chosen address is iterated over by shifting
              it right one place and checked by mincore until the address
              is less than or equal to the page size.
2306
2307 --mknod N
2308 start N workers that create and remove fifos, empty files and
2309 named sockets using mknod and unlink.
2310
2311 --mknod-ops N
2312 stop directory thrash workers after N bogo mknod operations.
2313
2314 --mlock N
2315 start N workers that lock and unlock memory mapped pages using
2316 mlock(2), munlock(2), mlockall(2) and munlockall(2). This is
2317 achieved by the mapping of three contiguous pages and then lock‐
2318 ing the second page, hence ensuring non-contiguous pages are
              locked. This is then repeated until the maximum allowed mlocks
2320 or a maximum of 262144 mappings are made. Next, all future map‐
2321 pings are mlocked and the worker attempts to map 262144 pages,
2322 then all pages are munlocked and the pages are unmapped.
2323
2324 --mlock-ops N
2325 stop after N mlock bogo operations.
2326
2327 --mlockmany N
2328 start N workers that fork off up to 1024 child processes in
2329 total; each child will attempt to anonymously mmap and mlock the
2330 maximum allowed mlockable memory size. The stress test attempts
2331 to avoid swapping by tracking low memory and swap allocations
2332 (but some swapping may occur). Once either the maximum number of
2333 child process is reached or all mlockable in-core memory is
2334 locked then child processes are killed and the stress test is
2335 repeated.
2336
2337 --mlockmany-ops N
2338 stop after N mlockmany (mmap and mlock) operations.
2339
2340 --mmap N
2341 start N workers continuously calling mmap(2)/munmap(2). The
2342 initial mapping is a large chunk (size specified by
2343 --mmap-bytes) followed by pseudo-random 4K unmappings, then
2344 pseudo-random 4K mappings, and then linear 4K unmappings. Note
2345 that this can cause systems to trip the kernel OOM killer on
              Linux systems if enough physical memory and swap are not
              available. The MAP_POPULATE option is used to populate pages
2348 into memory on systems that support this. By default, anonymous
2349 mappings are used, however, the --mmap-file and --mmap-async
2350 options allow one to perform file based mappings if desired.
2351
2352 --mmap-ops N
2353 stop mmap stress workers after N bogo operations.
2354
2355 --mmap-async
2356 enable file based memory mapping and use asynchronous msync'ing
2357 on each page, see --mmap-file.
2358
2359 --mmap-bytes N
2360 allocate N bytes per mmap stress worker, the default is 256MB.
2361 One can specify the size as % of total available memory or in
2362 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2363 m or g.
2364
2365 --mmap-file
2366 enable file based memory mapping and by default use synchronous
2367 msync'ing on each page.
2368
2369 --mmap-mmap2
2370 use mmap2 for 4K page aligned offsets if mmap2 is available,
2371 otherwise fall back to mmap.
2372
2373 --mmap-mprotect
2374 change protection settings on each page of memory. Each time a
2375 page or a group of pages are mapped or remapped then this option
2376 will make the pages read-only, write-only, exec-only, and read-
2377 write.
2378
2379 --mmap-odirect
2380 enable file based memory mapping and use O_DIRECT direct I/O.
2381
2382 --mmap-osync
2383 enable file based memory mapping and used O_SYNC synchronous I/O
2384 integrity completion.
2385
2386 --mmapaddr N
2387 start N workers that memory map pages at a random memory loca‐
2388 tion that is not already mapped. On 64 bit machines the random
              address is a randomly chosen 32 bit or 64 bit address. If the map‐
2390 ping works a second page is memory mapped from the first mapped
2391 address. The stressor exercises mmap/munmap, mincore and seg‐
2392 fault handling.
2393
2394 --mmapaddr-ops N
2395 stop after N random address mmap bogo operations.
2396
2397 --mmapfork N
2398 start N workers that each fork off 32 child processes, each of
2399 which tries to allocate some of the free memory left in the sys‐
2400 tem (and trying to avoid any swapping). The child processes
2401 then hint that the allocation will be needed with madvise(2) and
2402 then memset it to zero and hint that it is no longer needed with
2403 madvise before exiting. This produces significant amounts of VM
              activity and a lot of cache misses, with minimal swapping.
2405
2406 --mmapfork-ops N
2407 stop after N mmapfork bogo operations.
2408
2409 --mmapfixed N
2410 start N workers that perform fixed address allocations from the
2411 top virtual address down to 128K. The allocated sizes are from
2412 1 page to 8 pages and various random mmap flags are used
2413 MAP_SHARED/MAP_PRIVATE, MAP_LOCKED, MAP_NORESERVE, MAP_POPULATE.
2414 If successfully map'd then the allocation is remap'd to an
2415 address that is several pages higher in memory. Mappings and
2416 remappings are madvised with random madvise options to further
2417 exercise the mappings.
2418
2419 --mmapfixed-ops N
2420 stop after N mmapfixed memory mapping bogo operations.
2421
2422 --mmapmany N
2423 start N workers that attempt to create the maximum allowed per-
2424 process memory mappings. This is achieved by mapping 3 contigu‐
2425 ous pages and then unmapping the middle page hence splitting the
2426 mapping into two. This is then repeated until the maximum
2427 allowed mappings or a maximum of 262144 mappings are made.
2428
2429 --mmapmany-ops N
2430 stop after N mmapmany bogo operations
2431
2432 --mq N start N sender and receiver processes that continually send and
2433 receive messages using POSIX message queues. (Linux only).
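
              For illustration, a minimal sketch of one POSIX message
              queue send/receive (the queue name "/stress-mq" is
              illustrative; link with -lrt on older glibc):

                  #include <fcntl.h>
                  #include <mqueue.h>

                  int main(void)
                  {
                      struct mq_attr attr = { .mq_maxmsg = 10,
                                              .mq_msgsize = 32 };
                      mqd_t mq = mq_open("/stress-mq",
                          O_CREAT | O_RDWR, 0600, &attr);
                      char msg[32] = "hello";

                      mq_send(mq, msg, sizeof(msg), 0);
                      mq_receive(mq, msg, sizeof(msg), NULL);
                      mq_close(mq);
                      mq_unlink("/stress-mq");
                      return 0;
                  }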
2434
2435 --mq-ops N
2436 stop after N bogo POSIX message send operations completed.
2437
2438 --mq-size N
2439 specify size of POSIX message queue. The default size is 10 mes‐
              sages and on most Linux systems this is the maximum allowed size
2441 for normal users. If the given size is greater than the allowed
2442 message queue size then a warning is issued and the maximum
2443 allowed size is used instead.
2444
2445 --mremap N
2446 start N workers continuously calling mmap(2), mremap(2) and mun‐
2447 map(2). The initial anonymous mapping is a large chunk (size
2448 specified by --mremap-bytes) and then iteratively halved in size
2449 by remapping all the way down to a page size and then back up to
2450 the original size. This worker is only available for Linux.
2451
2452 --mremap-ops N
2453 stop mremap stress workers after N bogo operations.
2454
2455 --mremap-bytes N
2456 initially allocate N bytes per remap stress worker, the default
2457 is 256MB. One can specify the size in units of Bytes, KBytes,
2458 MBytes and GBytes using the suffix b, k, m or g.
2459
2460 --mremap-mlock
2461 attempt to mlock remapped pages into memory prohibiting them
2462 from being paged out. This is a no-op if mlock(2) is not avail‐
2463 able.
2464
2465 --msg N
2466 start N sender and receiver processes that continually send and
2467 receive messages using System V message IPC.
2468
2469 --msg-ops N
2470 stop after N bogo message send operations completed.
2471
2472 --msg-types N
              select the number of message types (mtype) to use. By
              default, msgsnd sends messages with a mtype of 1; this
              option allows one to send message types in the range 1..N
              to exercise the message
2476 queue receive ordering. This will also impact throughput perfor‐
2477 mance.
2478
2479 --msync N
2480 start N stressors that msync data from a file backed memory map‐
2481 ping from memory back to the file and msync modified data from
2482 the file back to the mapped memory. This exercises the msync(2)
2483 MS_SYNC and MS_INVALIDATE sync operations.
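
              For illustration, a minimal sketch of the two msync(2)
              directions being exercised (the scratch file name is
              hypothetical):

                  #include <fcntl.h>
                  #include <string.h>
                  #include <unistd.h>
                  #include <sys/mman.h>

                  int main(void)
                  {
                      int fd = open("testfile", O_RDWR | O_CREAT, 0600);
                      char *p;

                      ftruncate(fd, 4096);
                      p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                      memset(p, 0xaa, 4096);
                      msync(p, 4096, MS_SYNC);        /* memory -> file */
                      msync(p, 4096, MS_INVALIDATE);  /* file -> memory */
                      munmap(p, 4096);
                      close(fd);
                      return 0;
                  }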
2484
2485 --msync-ops N
2486 stop after N msync bogo operations completed.
2487
2488 --msync-bytes N
2489 allocate N bytes for the memory mapped file, the default is
2490 256MB. One can specify the size as % of total available memory
2491 or in units of Bytes, KBytes, MBytes and GBytes using the suffix
2492 b, k, m or g.
2493
2494 --nanosleep N
2495 start N workers that each run 256 pthreads that call nanosleep
2496 with random delays from 1 to 2^18 nanoseconds. This should exer‐
2497 cise the high resolution timers and scheduler.
2498
2499 --nanosleep-ops N
              stop the nanosleep stressor after N bogo nanosleep operations.
2501
2502 --netdev N
2503 start N workers that exercise various netdevice ioctl commands
2504 across all the available network devices. The ioctls exercised
2505 by this stressor are as follows: SIOCGIFCONF, SIOCGIFINDEX,
2506 SIOCGIFNAME, SIOCGIFFLAGS, SIOCGIFADDR, SIOCGIFNETMASK, SIOCGIF‐
2507 METRIC, SIOCGIFMTU, SIOCGIFHWADDR, SIOCGIFMAP and SIOCGIFTXQLEN.
2508 See netdevice(7) for more details of these ioctl commands.
2509
2510 --netdev-ops N
2511 stop after N netdev bogo operations completed.
2512
2513 --netlink-proc N
2514 start N workers that spawn child processes and monitor
2515 fork/exec/exit process events via the proc netlink connector.
2516 Each event received is counted as a bogo op. This stressor can
2517 only be run on Linux and requires CAP_NET_ADMIN capability.
2518
2519 --netlink-proc-ops N
2520 stop the proc netlink connector stressors after N bogo ops.
2521
2522 --netlink-task N
2523 start N workers that collect task statistics via the netlink
2524 taskstats interface. This stressor can only be run on Linux and
2525 requires CAP_NET_ADMIN capability.
2526
2527 --netlink-task-ops N
2528 stop the taskstats netlink connector stressors after N bogo ops.
2529
2530 --nice N
2531 start N cpu consuming workers that exercise the available nice
2532 levels. Each iteration forks off a child process that runs
              through all the nice levels running a busy loop for 0.1 sec‐
2534 onds per level and then exits.
2535
2536 --nice-ops N
              stop after N nice bogo loops.
2538
2539 --nop N
2540 start N workers that consume cpu cycles issuing no-op instruc‐
2541 tions. This stressor is available if the assembler supports the
2542 "nop" instruction.
2543
2544 --nop-ops N
2545 stop nop workers after N no-op bogo operations. Each bogo-opera‐
2546 tion is equivalent to 256 loops of 256 no-op instructions.
2547
2548 --null N
2549 start N workers writing to /dev/null.
2550
2551 --null-ops N
2552 stop null stress workers after N /dev/null bogo write opera‐
2553 tions.
2554
2555 --numa N
2556 start N workers that migrate stressors and a 4MB memory mapped
2557 buffer around all the available NUMA nodes. This uses
2558 migrate_pages(2) to move the stressors and mbind(2) and
2559 move_pages(2) to move the pages of the mapped buffer. After each
2560 move, the buffer is written to force activity over the bus which
              results in cache misses. This test will only run on hardware with
2562 NUMA enabled and more than 1 NUMA node.
2563
2564 --numa-ops N
2565 stop NUMA stress workers after N bogo NUMA operations.
2566
2567 --oom-pipe N
2568 start N workers that create as many pipes as allowed and exer‐
2569 cise expanding and shrinking the pipes from the largest pipe
2570 size down to a page size. Data is written into the pipes and
2571 read out again to fill the pipe buffers. With the --aggressive
2572 mode enabled the data is not read out when the pipes are shrunk,
2573 causing the kernel to OOM processes aggressively. Running many
              instances of this stressor will force the kernel to OOM processes
2575 due to the many large pipe buffer allocations.
2576
2577 --oom-pipe-ops N
2578 stop after N bogo pipe expand/shrink operations.
2579
2580 --opcode N
2581 start N workers that fork off children that execute randomly
2582 generated executable code. This will generate issues such as
2583 illegal instructions, bus errors, segmentation faults, traps,
2584 floating point errors that are handled gracefully by the stres‐
2585 sor.
2586
2587 --opcode-ops N
2588 stop after N attempts to execute illegal code.
2589
2590 --opcode-method [ inc | mixed | random | text ]
2591 select the opcode generation method. By default, random bytes
2592 are used to generate the executable code. This option allows one
              to select one of the following methods:
2594
2595 Method Description
2596 inc use incrementing 32 bit opcode patterns
                            from 0x00000000 to 0xffffffff inclusive.
2598 mixed use a mix of incrementing 32 bit opcode
2599 patterns and random 32 bit opcode pat‐
2600 terns that are also inverted, encoded
2601 with gray encoding and bit reversed.
2602 random generate opcodes using random bytes from
2603 a mwc random generator.
2604 text copies random chunks of code from the
2605 stress-ng text segment and randomly
2606 flips single bits in a random choice of
2607 1/8th of the code.
2608
2609 -o N, --open N
2610 start N workers that perform open(2) and then close(2) opera‐
2611 tions on /dev/zero. The maximum opens at one time is system
2612 defined, so the test will run up to this maximum, or 65536 open
              file descriptors, whichever comes first.
2614
2615 --open-ops N
2616 stop the open stress workers after N bogo open operations.
2617
2618 --open-fd
2619 run a child process that scans /proc/$PID/fd and attempts to
2620 open the files that the stressor has opened. This exercises rac‐
2621 ing open/close operations on the proc interface.
2622
2623 --personality N
2624 start N workers that attempt to set personality and get all the
2625 available personality types (process execution domain types) via
2626 the personality(2) system call. (Linux only).
2627
2628 --personality-ops N
2629 stop personality stress workers after N bogo personality opera‐
2630 tions.
2631
2632 --physpage N
2633 start N workers that use /proc/self/pagemap and /proc/kpagecount
2634 to determine the physical page and page count of a virtual
2635 mapped page and a page that is shared among all the stressors.
2636 Linux only and requires the CAP_SYS_ADMIN capabilities.
2637
2638 --physpage-ops N
2639 stop physpage stress workers after N bogo physical address
2640 lookups.
2641
2642 --pidfd N
2643 start N workers that exercise signal sending via the
2644 pidfd_send_signal system call. This stressor creates child pro‐
2645 cesses and checks if they exist and can be stopped, restarted
2646 and killed using the pidfd_send_signal system call.
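
              For illustration, a minimal sketch using raw syscalls
              (glibc wrappers may be absent; pidfd_open needs Linux 5.3
              and pidfd_send_signal needs 5.1):

                  #define _GNU_SOURCE
                  #include <signal.h>
                  #include <unistd.h>
                  #include <sys/syscall.h>
                  #include <sys/wait.h>

                  int main(void)
                  {
                      pid_t pid = fork();
                      int pidfd;

                      if (pid == 0) {      /* child: wait for a signal */
                          pause();
                          _exit(0);
                      }
                      pidfd = (int)syscall(SYS_pidfd_open, pid, 0);
                      syscall(SYS_pidfd_send_signal, pidfd, SIGTERM,
                              NULL, 0);
                      waitpid(pid, NULL, 0);
                      close(pidfd);
                      return 0;
                  }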
2647
2648 --pidfd-ops N
2649 stop pidfd stress workers after N child processes have been cre‐
2650 ated, tested and killed with pidfd_send_signal.
2651
2652 --ping-sock N
2653 start N workers that send small randomized ICMP messages to the
2654 localhost across a range of ports (1024..65535) using a "ping"
2655 socket with an AF_INET domain, a SOCK_DGRAM socket type and an
2656 IPPROTO_ICMP protocol.
2657
2658 --ping-sock-ops N
2659 stop the ping-sock stress workers after N ICMP messages are
2660 sent.
2661
2662 -p N, --pipe N
2663 start N workers that perform large pipe writes and reads to
2664 exercise pipe I/O. This exercises memory write and reads as
2665 well as context switching. Each worker has two processes, a
2666 reader and a writer.
2667
2668 --pipe-ops N
2669 stop pipe stress workers after N bogo pipe write operations.
2670
2671 --pipe-data-size N
2672 specifies the size in bytes of each write to the pipe (range
2673 from 4 bytes to 4096 bytes). Setting a small data size will
2674 cause more writes to be buffered in the pipe, hence reducing the
2675 context switch rate between the pipe writer and pipe reader pro‐
2676 cesses. Default size is the page size.
2677
2678 --pipe-size N
2679 specifies the size of the pipe in bytes (for systems that sup‐
2680 port the F_SETPIPE_SZ fcntl() command). Setting a small pipe
2681 size will cause the pipe to fill and block more frequently,
2682 hence increasing the context switch rate between the pipe writer
2683 and the pipe reader processes. Default size is 512 bytes.
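
              For illustration, a minimal sketch of resizing a pipe with
              F_SETPIPE_SZ (the kernel rounds the size up to at least
              one page):

                  #define _GNU_SOURCE          /* F_SETPIPE_SZ */
                  #include <stdio.h>
                  #include <fcntl.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      int fds[2];

                      pipe(fds);
                      if (fcntl(fds[1], F_SETPIPE_SZ, 4096) < 0)
                          perror("F_SETPIPE_SZ");
                      printf("pipe size: %d\n",
                          fcntl(fds[1], F_GETPIPE_SZ));
                      close(fds[0]);
                      close(fds[1]);
                      return 0;
                  }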
2684
2685 --pipeherd N
2686 start N workers that pass a 64 bit token counter to/from 100
2687 child processes over a shared pipe. This forces a high context
2688 switch rate and can trigger a "thundering herd" of wakeups on
2689 processes that are blocked on pipe waits.
2690
2691 --pipeherd-ops N
2692 stop pipe stress workers after N bogo pipe write operations.
2693
2694 --pipeherd-yield
2695 force a scheduling yield after each write, this increases the
2696 context switch rate.
2697
2698 --pkey N
2699 start N workers that change memory protection using a protection
2700 key (pkey) and the pkey_mprotect call (Linux only). This will
2701 try to allocate a pkey and use this for the page protection,
2702 however, if this fails then the special pkey -1 will be used
2703 (and the kernel will use the normal mprotect mechanism instead).
2704 Various page protection mixes of read/write/exec/none will be
2705 cycled through on randomly chosen pre-allocated pages.
2706
2707 --pkey-ops N
2708 stop after N pkey_mprotect page protection cycles.
2709
2710 -P N, --poll N
2711 start N workers that perform zero timeout polling via the
2712 poll(2), ppoll(2), select(2), pselect(2) and sleep(3) calls.
2713 This wastes system and user time doing nothing.
2714
2715 --poll-ops N
2716 stop poll stress workers after N bogo poll operations.
2717
2718 --poll-fds N
2719 specify the number of file descriptors to poll/ppoll/select/pse‐
2720 lect on. The maximum number for select/pselect is limited by
2721 FD_SETSIZE and the upper maximum is also limited by the maximum
2722 number of pipe open descriptors allowed.
2723
2724 --prctl N
2725 start N workers that exercise the majority of the prctl(2) sys‐
2726 tem call options. Each batch of prctl calls is performed inside
2727 a new child process to ensure the limit of prctl is contained
2728 inside a new process every time. Some prctl options are archi‐
2729 tecture specific, however, this stressor will exercise these
2730 even if they are not implemented.
2731
2732 --prctl-ops N
              stop prctl workers after N batches of prctl calls.
2734
2735 --procfs N
2736 start N workers that read files from /proc and recursively read
2737 files from /proc/self (Linux only).
2738
2739 --procfs-ops N
2740 stop procfs reading after N bogo read operations. Note, since
2741 the number of entries may vary between kernels, this bogo ops
2742 metric is probably very misleading.
2743
2744 --pthread N
              start N workers that iteratively create and terminate multiple
2746 pthreads (the default is 1024 pthreads per worker). In each
2747 iteration, each newly created pthread waits until the worker has
2748 created all the pthreads and then they all terminate together.
2749
2750 --pthread-ops N
2751 stop pthread workers after N bogo pthread create operations.
2752
2753 --pthread-max N
2754 create N pthreads per worker. If the product of the number of
2755 pthreads by the number of workers is greater than the soft limit
2756 of allowed pthreads then the maximum is re-adjusted down to the
2757 maximum allowed.
2758
2759 --ptrace N
2760 start N workers that fork and trace system calls of a child
2761 process using ptrace(2).
2762
2763 --ptrace-ops N
2764 stop ptracer workers after N bogo system calls are traced.
2765
2766 --pty N
2767 start N workers that repeatedly attempt to open pseudoterminals
2768 and perform various pty ioctls upon the ptys before closing
2769 them.
2770
2771 --pty-ops N
2772 stop pty workers after N pty bogo operations.
2773
2774 --pty-max N
2775 try to open a maximum of N pseudoterminals, the default is
2776 65536. The allowed range of this setting is 8..65536.
2777
2778 -Q, --qsort N
2779 start N workers that sort 32 bit integers using qsort.
2780
2781 --qsort-ops N
2782 stop qsort stress workers after N bogo qsorts.
2783
2784 --qsort-size N
2785 specify number of 32 bit integers to sort, default is 262144
2786 (256 × 1024).
2787
2788 --quota N
2789 start N workers that exercise the Q_GETQUOTA, Q_GETFMT, Q_GET‐
2790 INFO, Q_GETSTATS and Q_SYNC quotactl(2) commands on all the
2791 available mounted block based file systems. Requires
2792 CAP_SYS_ADMIN capability to run.
2793
2794 --quota-ops N
2795 stop quota stress workers after N bogo quotactl operations.
2796
2797 --radixsort N
2798 start N workers that sort random 8 byte strings using radixsort.
2799
2800 --radixsort-ops N
2801 stop radixsort stress workers after N bogo radixsorts.
2802
2803 --radixsort-size N
2804 specify number of strings to sort, default is 262144 (256 ×
2805 1024).
2806
2807 --ramfs N
2808 start N workers mounting a memory based file system using ramfs
2809 and tmpfs (Linux only). This alternates between mounting and
2810 umounting a ramfs or tmpfs file system using the traditional
              mount(2) and umount(2) system calls as well as the newer Linux
2812 5.2 fsopen(2), fsmount(2), fsconfig(2) and move_mount(2) system
2813 calls if they are available. The default ram file system size is
2814 2MB.
2815
2816 --ramfs-ops N
2817 stop after N ramfs mount operations.
2818
2819 --ramfs-size N
2820 set the ramfs size (must be multiples of the page size).
2821
2822 --rawdev N
2823 start N workers that read the underlying raw drive device using
2824 direct IO reads. The device (with minor number 0) that stores
2825 the current working directory is the raw device to be read by
2826 the stressor. The read size is exactly the size of the underly‐
2827 ing device block size. By default, this stressor will exercise
              all of the rawdev methods (see the --rawdev-method option).
2829 This is a Linux only stressor and requires root privilege to be
2830 able to read the raw device.
2831
2832 --rawdev-ops N
2833 stop the rawdev stress workers after N raw device read bogo
2834 operations.
2835
2836 --rawdev-method M
2837 Available rawdev stress methods are described as follows:
2838
2839 Method Description
2840 all iterate over all the rawdev stress methods as
2841 listed below:
2842 sweep repeatedly read across the raw device from the
2843 0th block to the end block in steps of the num‐
2844 ber of blocks on the device / 128 and back to
2845 the start again.
              wiggle       repeatedly read across the raw device in
                           128 evenly sized steps, with each step
                           reading 1024 blocks backwards from the
                           step position.
              ends         repeatedly read the first and last 128
                           blocks of the raw device, alternating
                           between the start and the end of the
                           device.
              random       repeatedly read 256 random blocks.
2854 burst repeatedly read 256 sequential blocks starting
2855 from a random block on the raw device.
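
              For example, an illustrative invocation that exercises just
              the sweep method (root privilege is required to read the
              raw device):

                 sudo stress-ng --rawdev 1 --rawdev-method sweep --rawdev-ops 1000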
2856
2857 --rawsock N
2858 start N workers that send and receive packet data using raw
2859 sockets on the localhost. Requires CAP_NET_RAW to run.
2860
2861 --rawsock-ops N
2862 stop rawsock workers after N packets are received.
2863
2864 --rawpkt N
              start N workers that send and receive Ethernet packets using
2866 raw packets on the localhost via the loopback device. Requires
2867 CAP_NET_RAW to run.
2868
2869 --rawpkt-ops N
2870 stop rawpkt workers after N packets from the sender process are
2871 received.
2872
2873 --rawpkt-port N
              start at port P. For N rawpkt worker processes, ports P to
              P + N - 1 are used. The default starting port is port 14000.
2876
2877 --rawudp N
2878 start N workers that send and receive UDP packets using raw
2879 sockets on the localhost. Requires CAP_NET_RAW to run.
2880
2881 --rawudp-ops N
2882 stop rawudp workers after N packets are received.
2883
2884 --rawudp-port N
              start at port P. For N rawudp worker processes, ports P to
              P + N - 1 are used. The default starting port is port 13000.
2887
2888 --rdrand N
              start N workers that read a random number from an on-chip
              random number generator. This uses the rdrand instruction on
              Intel processors or the darn instruction on Power9
              processors.
2892
2893 --rdrand-ops N
2894 stop rdrand stress workers after N bogo rdrand operations (1
2895 bogo op = 2048 random bits successfully read).
2896
2897 --readahead N
2898 start N workers that randomly seek and perform 4096 byte
2899 read/write I/O operations on a file with readahead. The default
2900 file size is 64 MB. Readaheads and reads are batched into 16
2901 readaheads and then 16 reads.
2902
2903 --readahead-bytes N
              set the size of the readahead file, the default is 1 GB. One can
2905 specify the size as % of free space on the file system or in
2906 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2907 m or g.
2908
2909 --readahead-ops N
2910 stop readahead stress workers after N bogo read operations.
2911
2912 --reboot N
2913 start N workers that exercise the reboot(2) system call. When
2914 possible, it will create a process in a PID namespace and per‐
              form a reboot power off command that should shut down the
2916 process. Also, the stressor exercises invalid reboot magic val‐
2917 ues and invalid reboots when there are insufficient privileges
2918 that will not actually reboot the system.
2919
2920 --reboot-ops N
2921 stop the reboot stress workers after N bogo reboot cycles.
2922
2923 --remap N
2924 start N workers that map 512 pages and re-order these pages
2925 using the deprecated system call remap_file_pages(2). Several
2926 page re-orderings are exercised: forward, reverse, random and
2927 many pages to 1 page.
2928
2929 --remap-ops N
2930 stop after N remapping bogo operations.
2931
2932 -R N, --rename N
2933 start N workers that each create a file and then repeatedly
2934 rename it.
2935
2936 --rename-ops N
2937 stop rename stress workers after N bogo rename operations.
2938
2939 --resources N
2940 start N workers that consume various system resources. Each
2941 worker will spawn 1024 child processes that iterate 1024 times
2942 consuming shared memory, heap, stack, temporary files and vari‐
2943 ous file descriptors (eventfds, memoryfds, userfaultfds, pipes
2944 and sockets).
2945
2946 --resources-ops N
2947 stop after N resource child forks.
2948
2949 --revio N
2950 start N workers continually writing in reverse position order to
2951 temporary files. The default mode is to stress test reverse
2952 position ordered writes with randomly sized sparse holes between
              each write. With the --aggressive option enabled and no
              --revio-opts options specified, the revio stressor will work
              through all the --revio-opts options one by one to cover a
              range of I/O options.
2957
2958 --revio-bytes N
2959 write N bytes for each revio process, the default is 1 GB. One
2960 can specify the size as % of free space on the file system or in
2961 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2962 m or g.
2963
2964 --revio-opts list
2965 specify various stress test options as a comma separated list.
2966 Options are the same as --hdd-opts but without the iovec option.
2967
2968 --revio-ops N
2969 stop revio stress workers after N bogo operations.
2970
2971 --revio-write-size N
2972 specify size of each write in bytes. Size can be from 1 byte to
2973 4MB.
2974
2975 --rlimit N
              start N workers that exceed CPU and file size resource limits,
2977 generating SIGXCPU and SIGXFSZ signals.
2978
2979 --rlimit-ops N
2980 stop after N bogo resource limited SIGXCPU and SIGXFSZ signals
2981 have been caught.
2982
2983 --rmap N
2984 start N workers that exercise the VM reverse-mapping. This cre‐
2985 ates 16 processes per worker that write/read multiple file-
2986 backed memory mappings. There are 64 lots of 4 page mappings
2987 made onto the file, with each mapping overlapping the previous
2988 by 3 pages and at least 1 page of non-mapped memory between each
2989 of the mappings. Data is synchronously msync'd to the file 1 in
2990 every 256 iterations in a random manner.
2991
2992 --rmap-ops N
2993 stop after N bogo rmap memory writes/reads.
2994
2995 --rseq N
2996 start N workers that exercise restartable sequences via the
2997 rseq(2) system call. This loops over a long duration critical
              section that is likely to be interrupted. An rseq abort
              handler keeps count of the number of interruptions and a
              SIGSEGV handler also tracks any failed rseq aborts that can
              occur if there is a mismatch in an rseq check signature.
              Linux only.
3002
3003 --rseq-ops N
3004 stop after N bogo rseq operations. Each bogo rseq operation is
3005 equivalent to 10000 iterations over a long duration rseq handled
3006 critical section.
3007
3008 --rtc N
3009 start N workers that exercise the real time clock (RTC) inter‐
3010 faces via /dev/rtc and /sys/class/rtc/rtc0. No destructive
3011 writes (modifications) are performed on the RTC. This is a Linux
3012 only stressor.
3013
3014 --rtc-ops N
3015 stop after N bogo RTC interface accesses.
3016
3017 --schedpolicy N
              start N workers that set the worker to various available
3019 scheduling policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE,
3020 SCHED_FIFO, SCHED_RR and SCHED_DEADLINE. For the real time
3021 scheduling policies a random sched priority is selected between
3022 the minimum and maximum scheduling priority settings.
3023
3024 --schedpolicy-ops N
3025 stop after N bogo scheduling policy changes.
3026
3027 --sctp N
3028 start N workers that perform network sctp stress activity using
3029 the Stream Control Transmission Protocol (SCTP). This involves
              client/server processes performing rapid connects,
              sends/receives and disconnects on the local host.
3032
3033 --sctp-domain D
3034 specify the domain to use, the default is ipv4. Currently ipv4
3035 and ipv6 are supported.
3036
3037 --sctp-ops N
3038 stop sctp workers after N bogo operations.
3039
3040 --sctp-port P
              start at sctp port P. For N sctp worker processes, ports P
              to P + N - 1 are used for the ipv4 and ipv6 domains.
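
              For example, an illustrative invocation exercising SCTP over
              the ipv6 domain from a non-default starting port:

                 stress-ng --sctp 2 --sctp-domain ipv6 --sctp-port 15000 --timeout 30s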
3044
3045 --seal N
3046 start N workers that exercise the fcntl(2) SEAL commands on a
3047 small anonymous file created using memfd_create(2). After each
3048 SEAL command is issued the stressor also sanity checks if the
3049 seal operation has sealed the file correctly. (Linux only).
3050
3051 --seal-ops N
3052 stop after N bogo seal operations.
3053
3054 --seccomp N
3055 start N workers that exercise Secure Computing system call fil‐
3056 tering. Each worker creates child processes that write a short
3057 message to /dev/null and then exits. 2% of the child processes
              have a seccomp filter that disallows the write system call
              and hence are killed by seccomp with a SIGSYS. Note that this
3060 stressor can generate many audit log messages each time the
3061 child is killed. Requires CAP_SYS_ADMIN to run.
3062
3063 --seccomp-ops N
3064 stop seccomp stress workers after N seccomp filter tests.
3065
3066 --secretmem N
3067 start N workers that mmap pages using file mapping off a
              memfd_secret file descriptor. Each stress loop iteration
              will expand the mappable region by 3 pages using ftruncate
              and mmap and touch the pages. The pages are then fragmented
              by unmapping the middle page and then unmapping the first
              and last pages.
3072 This tries to force page fragmentation and also trigger out of
3073 memory (OOM) kills of the stressor when the secret memory is
3074 exhausted. Note this is a Linux 5.11+ only stressor and the
              kernel needs to be booted with the "secretmem=" option to allocate a
3076 secret memory reservation.
3077
3078 --secretmem-ops N
3079 stop secretmem stress workers after N stress loop iterations.
3080
3081 --seek N
              start N workers that randomly seek and perform 512 byte
3083 read/write I/O operations on a file. The default file size is 16
3084 GB.
3085
3086 --seek-ops N
3087 stop seek stress workers after N bogo seek operations.
3088
3089 --seek-punch
3090 punch randomly located 8K holes into the file to cause more
              extents and to force a more demanding seek stressor (Linux only).
3092
3093 --seek-size N
3094 specify the size of the file in bytes. Small file sizes allow
3095 the I/O to occur in the cache, causing greater CPU load. Large
              file sizes force more I/O operations to the drive, causing more wait
3097 time and more I/O on the drive. One can specify the size in
3098 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
3099 m or g.
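
              For example, an illustrative invocation using a smaller
              file size that is more likely to be cached, hence loading
              the CPU rather than the drive:

                 stress-ng --seek 4 --seek-size 256m --seek-ops 100000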
3100
3101 --sem N
3102 start N workers that perform POSIX semaphore wait and post oper‐
3103 ations. By default, a parent and 4 children are started per
3104 worker to provide some contention on the semaphore. This
3105 stresses fast semaphore operations and produces rapid context
3106 switching.
3107
3108 --sem-ops N
3109 stop semaphore stress workers after N bogo semaphore operations.
3110
3111 --sem-procs N
3112 start N child workers per worker to provide contention on the
3113 semaphore, the default is 4 and a maximum of 64 are allowed.
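
              For example, an illustrative invocation that doubles the
              default contention on each POSIX semaphore:

                 stress-ng --sem 2 --sem-procs 8 --sem-ops 1000000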
3114
3115 --sem-sysv N
3116 start N workers that perform System V semaphore wait and post
3117 operations. By default, a parent and 4 children are started per
3118 worker to provide some contention on the semaphore. This
3119 stresses fast semaphore operations and produces rapid context
3120 switching.
3121
3122 --sem-sysv-ops N
3123 stop semaphore stress workers after N bogo System V semaphore
3124 operations.
3125
3126 --sem-sysv-procs N
3127 start N child processes per worker to provide contention on the
3128 System V semaphore, the default is 4 and a maximum of 64 are
3129 allowed.
3130
3131 --sendfile N
3132 start N workers that send an empty file to /dev/null. This oper‐
3133 ation spends nearly all the time in the kernel. The default
3134 sendfile size is 4MB. The sendfile options are for Linux only.
3135
3136 --sendfile-ops N
3137 stop sendfile workers after N sendfile bogo operations.
3138
3139 --sendfile-size S
3140 specify the size to be copied with each sendfile call. The
3141 default size is 4MB. One can specify the size in units of Bytes,
3142 KBytes, MBytes and GBytes using the suffix b, k, m or g.
3143
3144 --session N
3145 start N workers that create child and grandchild processes that
3146 set and get their session ids. 25% of the grandchild processes
3147 are not waited for by the child to create orphaned sessions that
3148 need to be reaped by init.
3149
3150 --session-ops N
3151 stop session workers after N child processes are spawned and
3152 reaped.
3153
3154 --set N
3155 start N workers that call system calls that try to set data in
3156 the kernel, currently these are: setgid, sethostname, setpgid,
3157 setpgrp, setuid, setgroups, setreuid, setregid, setresuid,
3158 setresgid and setrlimit. Some of these system calls are OS spe‐
3159 cific.
3160
3161 --set-ops N
3162 stop set workers after N bogo set operations.
3163
3164 --shellsort N
3165 start N workers that sort 32 bit integers using shellsort.
3166
3167 --shellsort-ops N
3168 stop shellsort stress workers after N bogo shellsorts.
3169
3170 --shellsort-size N
3171 specify number of 32 bit integers to sort, default is 262144
3172 (256 × 1024).
3173
3174 --shm N
3175 start N workers that open and allocate shared memory objects
3176 using the POSIX shared memory interfaces. By default, the test
3177 will repeatedly create and destroy 32 shared memory objects,
3178 each of which is 8MB in size.
3179
3180 --shm-ops N
3181 stop after N POSIX shared memory create and destroy bogo opera‐
3182 tions are complete.
3183
3184 --shm-bytes N
3185 specify the size of the POSIX shared memory objects to be cre‐
3186 ated. One can specify the size as % of total available memory or
3187 in units of Bytes, KBytes, MBytes and GBytes using the suffix b,
3188 k, m or g.
3189
3190 --shm-objs N
3191 specify the number of shared memory objects to be created.
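
              For example, an illustrative invocation creating 16 shared
              memory objects of 16MB each per worker, bounded by the
              general --timeout option:

                 stress-ng --shm 2 --shm-bytes 16m --shm-objs 16 --timeout 60s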
3192
3193 --shm-sysv N
3194 start N workers that allocate shared memory using the System V
3195 shared memory interface. By default, the test will repeatedly
3196 create and destroy 8 shared memory segments, each of which is
3197 8MB in size.
3198
3199 --shm-sysv-ops N
3200 stop after N shared memory create and destroy bogo operations
3201 are complete.
3202
3203 --shm-sysv-bytes N
3204 specify the size of the shared memory segment to be created. One
3205 can specify the size as % of total available memory or in units
3206 of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
3207 g.
3208
3209 --shm-sysv-segs N
3210 specify the number of shared memory segments to be created. The
3211 default is 8 segments.
3212
3213 --sigabrt N
3214 start N workers that create children that are killed by SIGABRT
3215 signals or by calling abort(3).
3216
3217 --sigabrt-ops N
3218 stop the sigabrt workers after N SIGABRT signals are success‐
3219 fully handled.
3220
3221 --sigchld N
3222 start N workers that create children to generate SIGCHLD sig‐
3223 nals. This exercises children that exit (CLD_EXITED), get killed
3224 (CLD_KILLED), get stopped (CLD_STOPPED) or continued (CLD_CON‐
3225 TINUED).
3226
3227 --sigchld-ops N
3228 stop the sigchld workers after N SIGCHLD signals are success‐
3229 fully handled.
3230
3231 --sigfd N
              start N workers that generate SIGRT signals that are handled
              via reads by a child process using a file descriptor set up with
3234 signalfd(2). (Linux only). This will generate a heavy context
3235 switch load when all CPUs are fully loaded.
3236
       --sigfd-ops N
3238 stop sigfd workers after N bogo SIGUSR1 signals are sent.
3239
3240 --sigfpe N
3241 start N workers that rapidly cause division by zero SIGFPE
3242 faults.
3243
3244 --sigfpe-ops N
3245 stop sigfpe stress workers after N bogo SIGFPE faults.
3246
3247 --sigio N
3248 start N workers that read data from a child process via a pipe
3249 and generate SIGIO signals. This exercises asynchronous I/O via
3250 SIGIO.
3251
3252 --sigio-ops N
3253 stop sigio stress workers after handling N SIGIO signals.
3254
3255 --signal N
              start N workers that exercise the signal system call using
              three different signal handlers: SIG_IGN (ignore), a SIGCHLD
              handler and
3258 SIG_DFL (default action). For the SIGCHLD handler, the stressor
3259 sends itself a SIGCHLD signal and checks if it has been handled.
3260 For other handlers, the stressor checks that the SIGCHLD handler
3261 has not been called. This stress test calls the signal system
3262 call directly when possible and will try to avoid the C library
3263 attempt to replace signal with the more modern sigaction system
3264 call.
3265
3266 --signal-ops N
3267 stop signal stress workers after N rounds of signal handler set‐
3268 ting.
3269
3270 --sigpending N
3271 start N workers that check if SIGUSR1 signals are pending. This
3272 stressor masks SIGUSR1, generates a SIGUSR1 signal and uses sig‐
3273 pending(2) to see if the signal is pending. Then it unmasks the
3274 signal and checks if the signal is no longer pending.
3275
3276 --sigpending-ops N
3277 stop sigpending stress workers after N bogo sigpending pend‐
3278 ing/unpending checks.
3279
3280 --sigpipe N
              start N workers that repeatedly spawn off a child process
              that exits before the parent can complete a pipe write,
              causing a SIGPIPE signal. The child process is spawned using
              clone(2) if it is available, or the slower fork(2) otherwise.
3285
3286 --sigpipe-ops N
              stop sigpipe stress workers after N SIGPIPE signals have
              been caught and handled.
3289
3290 --sigq N
3291 start N workers that rapidly send SIGUSR1 signals using
3292 sigqueue(3) to child processes that wait for the signal via sig‐
3293 waitinfo(2).
3294
3295 --sigq-ops N
3296 stop sigq stress workers after N bogo signal send operations.
3297
3298 --sigrt N
3299 start N workers that each create child processes to handle
              SIGRTMIN to SIGRTMAX real time signals. The parent sends
              each child process an RT signal via sigqueue(3) and the
              child process waits for this via sigwaitinfo(2). When the
              child receives the signal it then sends an RT signal to one
              of the other child processes, also via sigqueue(3).
3305
3306 --sigrt-ops N
3307 stop sigrt stress workers after N bogo sigqueue signal send
3308 operations.
3309
3310 --sigsegv N
3311 start N workers that rapidly create and catch segmentation
3312 faults.
3313
3314 --sigsegv-ops N
3315 stop sigsegv stress workers after N bogo segmentation faults.
3316
3317 --sigsuspend N
3318 start N workers that each spawn off 4 child processes that wait
3319 for a SIGUSR1 signal from the parent using sigsuspend(2). The
3320 parent sends SIGUSR1 signals to each child in rapid succession.
3321 Each sigsuspend wakeup is counted as one bogo operation.
3322
3323 --sigsuspend-ops N
3324 stop sigsuspend stress workers after N bogo sigsuspend wakeups.
3325
3326 --sigtrap N
3327 start N workers that exercise the SIGTRAP signal. For systems
              that support SIGTRAP, the signal is generated using
              raise(SIGTRAP). On x86 Linux systems, SIGTRAP is also
              generated by an int 3 instruction.
3331
3332 --sigtrap-ops N
3333 stop sigtrap stress workers after N SIGTRAPs have been handled.
3334
3335 --skiplist N
3336 start N workers that store and then search for integers using a
3337 skiplist. By default, 65536 integers are added and searched.
3338 This is a useful method to exercise random access of memory and
3339 processor cache.
3340
3341 --skiplist-ops N
3342 stop the skiplist worker after N skiplist store and search
3343 cycles are completed.
3344
3345 --skiplist-size N
3346 specify the size (number of integers) to store and search in the
3347 skiplist. Size can be from 1K to 4M.
3348
3349 --sleep N
3350 start N workers that spawn off multiple threads that each per‐
              form multiple sleeps ranging from 1us to 0.1s. This creates multi‐
3352 ple context switches and timer interrupts.
3353
3354 --sleep-ops N
3355 stop after N sleep bogo operations.
3356
3357 --sleep-max P
3358 start P threads per worker. The default is 1024, the maximum
3359 allowed is 30000.
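
              For example, an illustrative invocation running 2 workers
              with 2048 sleeping threads each:

                 stress-ng --sleep 2 --sleep-max 2048 --timeout 30s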
3360
3361 -S N, --sock N
3362 start N workers that perform various socket stress activity.
3363 This involves a pair of client/server processes performing rapid
              connects, sends, receives and disconnects on the local host.
3365
3366 --sock-domain D
3367 specify the domain to use, the default is ipv4. Currently ipv4,
3368 ipv6 and unix are supported.
3369
3370 --sock-nodelay
3371 This disables the TCP Nagle algorithm, so data segments are
3372 always sent as soon as possible. This stops data from being
3373 buffered before being transmitted, hence resulting in poorer
3374 network utilisation and more context switches between the sender
3375 and receiver.
3376
3377 --sock-port P
              start at socket port P. For N socket worker processes, ports
              P to P + N - 1 are used.
3380
3381 --sock-ops N
3382 stop socket stress workers after N bogo operations.
3383
3384 --sock-opts [ random | send | sendmsg | sendmmsg ]
3385 by default, messages are sent using send(2). This option allows
3386 one to specify the sending method using send(2), sendmsg(2),
              sendmmsg(2) or a random selection of one of these 3 on each iter‐
3388 ation. Note that sendmmsg is only available for Linux systems
3389 that support this system call.
3390
3391 --sock-type [ stream | seqpacket ]
3392 specify the socket type to use. The default type is stream. seq‐
3393 packet currently only works for the unix socket domain.
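
              For example, an illustrative invocation using the ipv6
              domain and the sendmsg(2) sending method:

                 stress-ng --sock 2 --sock-domain ipv6 --sock-opts sendmsg --sock-ops 100000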
3394
3395 --sockabuse N
              start N workers that abuse a socket file descriptor with
              various file based system calls that don't normally act on
              sockets. The kernel
3398 should handle these illegal and unexpected calls gracefully.
3399
3400 --sockabuse-ops N
3401 stop after N iterations of the socket abusing stressor loop.
3402
3403 --sockdiag N
3404 start N workers that exercise the Linux sock_diag netlink socket
3405 diagnostics (Linux only). This currently requests diagnostics
3406 using UDIAG_SHOW_NAME, UDIAG_SHOW_VFS, UDIAG_SHOW_PEER,
3407 UDIAG_SHOW_ICONS, UDIAG_SHOW_RQLEN and UDIAG_SHOW_MEMINFO for
3408 the AF_UNIX family of socket connections.
3409
3410 --sockdiag-ops N
3411 stop after receiving N sock_diag diagnostic messages.
3412
3413 --sockfd N
3414 start N workers that pass file descriptors over a UNIX domain
              socket using the CMSG(3) ancillary data mechanism. For each
              worker, a pair of client/server processes are created; the
              server opens as many file descriptors on /dev/null as
              possible and passes these over the socket to a client that
              reads these from the CMSG data and immediately closes the
              files.
3420
3421 --sockfd-ops N
3422 stop sockfd stress workers after N bogo operations.
3423
3424 --sockfd-port P
              start at socket port P. For N socket worker processes, ports
              P to P + N - 1 are used.
3427
3428 --sockmany N
3429 start N workers that use a client process to attempt to open as
3430 many as 100000 TCP/IP socket connections to a server on port
3431 10000.
3432
3433 --sockmany-ops N
3434 stop after N connections.
3435
3436 --sockpair N
3437 start N workers that perform socket pair I/O read/writes. This
3438 involves a pair of client/server processes performing randomly
3439 sized socket I/O operations.
3440
3441 --sockpair-ops N
3442 stop socket pair stress workers after N bogo operations.
3443
3444 --softlockup N
              start N workers that flip between the "real-time" SCHED_FIFO
3446 and SCHED_RR scheduling policies at the highest priority to
3447 force softlockups. This can only be run with CAP_SYS_NICE capa‐
3448 bility and for best results the number of stressors should be at
3449 least the number of online CPUs. Once running, this is practi‐
3450 cally impossible to stop and it will force softlockup issues and
3451 may trigger watchdog timeout reboots.
3452
3453 --softlockup-ops N
3454 stop softlockup stress workers after N bogo scheduler policy
3455 changes.
3456
3457 --spawn N
              start N workers continually spawning children using posix_spawn(3)
3459 that exec stress-ng and then exit almost immediately. Currently
3460 Linux only.
3461
3462 --spawn-ops N
3463 stop spawn stress workers after N bogo spawns.
3464
3465 --splice N
3466 move data from /dev/zero to /dev/null through a pipe without any
3467 copying between kernel address space and user address space
3468 using splice(2). This is only available for Linux.
3469
3470 --splice-ops N
3471 stop after N bogo splice operations.
3472
3473 --splice-bytes N
3474 transfer N bytes per splice call, the default is 64K. One can
3475 specify the size as % of total available memory or in units of
3476 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
3477
3478 --stack N
3479 start N workers that rapidly cause and catch stack overflows by
3480 use of large recursive stack allocations. Much like the brk
3481 stressor, this can eat up pages rapidly and may trigger the ker‐
3482 nel OOM killer on the process, however, the killed stressor is
              respawned by a monitoring parent process.
3484
3485 --stack-fill
3486 the default action is to touch the lowest page on each stack
3487 allocation. This option touches all the pages by filling the new
3488 stack allocation with zeros which forces physical pages to be
3489 allocated and hence is more aggressive.
3490
3491 --stack-mlock
3492 attempt to mlock stack pages into memory prohibiting them from
3493 being paged out. This is a no-op if mlock(2) is not available.
3494
3495 --stack-ops N
3496 stop stack stress workers after N bogo stack overflows.
3497
3498 --stackmmap N
3499 start N workers that use a 2MB stack that is memory mapped onto
3500 a temporary file. A recursive function works down the stack and
3501 flushes dirty stack pages back to the memory mapped file using
3502 msync(2) until the end of the stack is reached (stack overflow).
3503 This exercises dirty page and stack exception handling.
3504
3505 --stackmmap-ops N
3506 stop workers after N stack overflows have occurred.
3507
3508 --str N
3509 start N workers that exercise various libc string functions on
3510 random strings.
3511
3512 --str-method strfunc
3513 select a specific libc string function to stress. Available
3514 string functions to stress are: all, index, rindex, strcasecmp,
3515 strcat, strchr, strcoll, strcmp, strcpy, strlen, strncasecmp,
3516 strncat, strncmp, strrchr and strxfrm. See string(3) for more
3517 information on these string functions. The 'all' method is the
3518 default and will exercise all the string methods.
3519
3520 --str-ops N
3521 stop after N bogo string operations.
3522
3523 --stream N
3524 start N workers exercising a memory bandwidth stressor loosely
3525 based on the STREAM "Sustainable Memory Bandwidth in High Per‐
3526 formance Computers" benchmarking tool by John D. McCalpin, Ph.D.
3527 This stressor allocates buffers that are at least 4 times the
              size of the CPU L2 cache and continually performs rounds of
              the following computations on large arrays of double
              precision floating
3530 point numbers:
3531
3532 Operation Description
3533 copy c[i] = a[i]
3534 scale b[i] = scalar * c[i]
3535 add c[i] = a[i] + b[i]
3536 triad a[i] = b[i] + (c[i] * scalar)
3537
              Since this is loosely based on a variant of the STREAM
              benchmark code, DO NOT submit results based on this, as in
              stress-ng it is intended just to stress memory and compute
              and is NOT intended for accurate tuned or non-tuned STREAM
              benchmarking whatsoever. Use the official STREAM
              benchmarking tool if you desire accurate and standardised
              STREAM benchmarks.
3544
3545 --stream-ops N
3546 stop after N stream bogo operations, where a bogo operation is
3547 one round of copy, scale, add and triad operations.
3548
3549 --stream-index N
3550 specify number of stream indices used to index into the data
3551 arrays a, b and c. This adds indirection into the data lookup
3552 by using randomly shuffled indexing into the three data arrays.
3553 Level 0 (no indexing) is the default, and 3 is where all 3
3554 arrays are indexed via 3 different randomly shuffled indexes.
3555 The higher the index setting the more impact this has on L1, L2
3556 and L3 caching and hence forces higher memory read/write laten‐
3557 cies.
3558
3559 --stream-l3-size N
3560 Specify the CPU Level 3 cache size in bytes. One can specify
3561 the size in units of Bytes, KBytes, MBytes and GBytes using the
3562 suffix b, k, m or g. If the L3 cache size is not provided, then
3563 stress-ng will attempt to determine the cache size, and failing
3564 this, will default the size to 4MB.
3565
3566 --stream-madvise [ hugepage | nohugepage | normal ]
3567 Specify the madvise options used on the memory mapped buffer
3568 used in the stream stressor. Non-linux systems will only have
3569 the 'normal' madvise advice. The default is 'normal'.
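
              For example, an illustrative invocation for a processor
              with an 8MB L3 cache, using 2 levels of index indirection:

                 stress-ng --stream 4 --stream-l3-size 8m --stream-index 2 --timeout 60s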
3570
3571 --swap N
              start N workers that add and remove small randomly sized swap
3573 partitions (Linux only). Note that if too many swap partitions
3574 are added then the stressors may exit with exit code 3 (not
3575 enough resources). Requires CAP_SYS_ADMIN to run.
3576
3577 --swap-ops N
3578 stop the swap workers after N swapon/swapoff iterations.
3579
3580 -s N, --switch N
3581 start N workers that send messages via pipe to a child to force
3582 context switching.
3583
3584 --switch-ops N
3585 stop context switching workers after N bogo operations.
3586
3587 --switch-rate R
3588 run the context switching at the rate of R context switches per
3589 second. Note that the specified switch rate may not be achieved
3590 because of CPU speed and memory bandwidth limitations.
3591
3592 --symlink N
3593 start N workers creating and removing symbolic links.
3594
3595 --symlink-ops N
3596 stop symlink stress workers after N bogo operations.
3597
3598 --sync-file N
3599 start N workers that perform a range of data syncs across a file
3600 using sync_file_range(2). Three mixes of syncs are performed,
3601 from start to the end of the file, from end of the file to the
3602 start, and a random mix. A random selection of valid sync types
3603 are used, covering the SYNC_FILE_RANGE_WAIT_BEFORE,
3604 SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
3605
3606 --sync-file-ops N
3607 stop sync-file workers after N bogo sync operations.
3608
3609 --sync-file-bytes N
3610 specify the size of the file to be sync'd. One can specify the
              size as % of free space on the file system or in units of Bytes,
3612 KBytes, MBytes and GBytes using the suffix b, k, m or g.
3613
3614 --sysbadaddr N
3615 start N workers that pass bad addresses to system calls to exer‐
3616 cise bad address and fault handling. The addresses used are null
3617 pointers, read only pages, write only pages, unmapped addresses,
3618 text only pages, unaligned addresses and top of memory
3619 addresses.
3620
3621 --sysbadaddr-ops N
3622 stop the sysbadaddr stressors after N bogo system calls.
3623
3624 --sysinfo N
3625 start N workers that continually read system and process spe‐
3626 cific information. This reads the process user and system times
3627 using the times(2) system call. For Linux systems, it also
3628 reads overall system statistics using the sysinfo(2) system call
3629 and also the file system statistics for all mounted file systems
3630 using statfs(2).
3631
3632 --sysinfo-ops N
3633 stop the sysinfo workers after N bogo operations.
3634
3635 --sysinval N
3636 start N workers that exercise system calls in random order with
3637 permutations of invalid arguments to force kernel error handling
3638 checks. The stress test autodetects system calls that cause pro‐
3639 cesses to crash or exit prematurely and will blocklist these
3640 after several repeated breakages. System call arguments that
              cause system calls to work successfully are also detected and
3642 blocklisted too. Linux only.
3643
3644 --sysinval-ops N
              stop sysinval workers after N system call attempts.
3646
3647 --sysfs N
3648 start N workers that recursively read files from /sys (Linux
3649 only). This may cause specific kernel drivers to emit messages
3650 into the kernel log.
3651
       --sysfs-ops N
3653 stop sysfs reading after N bogo read operations. Note, since the
3654 number of entries may vary between kernels, this bogo ops metric
3655 is probably very misleading.
3656
3657 --tee N
3658 move data from a writer process to a reader process through
3659 pipes and to /dev/null without any copying between kernel
3660 address space and user address space using tee(2). This is only
3661 available for Linux.
3662
3663 --tee-ops N
3664 stop after N bogo tee operations.
3665
3666 -T N, --timer N
3667 start N workers creating timer events at a default rate of 1 MHz
              (Linux only); this can create many thousands of timer clock
3669 interrupts. Each timer event is caught by a signal handler and
3670 counted as a bogo timer op.
3671
3672 --timer-ops N
3673 stop timer stress workers after N bogo timer events (Linux
3674 only).
3675
3676 --timer-freq F
3677 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
3678 By selecting an appropriate frequency stress-ng can generate
3679 hundreds of thousands of interrupts per second. Note: it is
3680 also worth using --timer-slack 0 for high frequencies to stop
3681 the kernel from coalescing timer events.
3682
3683 --timer-rand
3684 select a timer frequency based around the timer frequency +/-
3685 12.5% random jitter. This tries to force more variability in the
3686 timer interval to make the scheduling less predictable.
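
              For example, an illustrative invocation running high
              frequency timers with zero timer slack (as noted above) and
              random jitter:

                 stress-ng --timer 4 --timer-freq 1000000 --timer-slack 0 --timer-rand --timeout 30s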
3687
3688 --timerfd N
3689 start N workers creating timerfd events at a default rate of 1
              MHz (Linux only); this can create many thousands of timer
3691 clock events. Timer events are waited for on the timer file
3692 descriptor using select(2) and then read and counted as a bogo
3693 timerfd op.
3694
3695 --timerfd-ops N
3696 stop timerfd stress workers after N bogo timerfd events (Linux
3697 only).
3698
3699 --timerfd-freq F
3700 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
3701 By selecting an appropriate frequency stress-ng can generate
3702 hundreds of thousands of interrupts per second.
3703
3704 --timerfd-rand
3705 select a timerfd frequency based around the timer frequency +/-
3706 12.5% random jitter. This tries to force more variability in the
3707 timer interval to make the scheduling less predictable.
3708
3709 --tlb-shootdown N
3710 start N workers that force Translation Lookaside Buffer (TLB)
3711 shootdowns. This is achieved by creating up to 16 child pro‐
              cesses that all share a region of memory and these processes
              are spread amongst the available CPUs. The processes adjust the
3714 page mapping settings causing TLBs to be force flushed on the
3715 other processors, causing the TLB shootdowns.
3716
3717 --tlb-shootdown-ops N
3718 stop after N bogo TLB shootdown operations are completed.
3719
3720 --tmpfs N
3721 start N workers that create a temporary file on an available
3722 tmpfs file system and perform various file based mmap operations
3723 upon it.
3724
3725 --tmpfs-ops N
3726 stop tmpfs stressors after N bogo mmap operations.
3727
3728 --tmpfs-mmap-async
3729 enable file based memory mapping and use asynchronous msync'ing
3730 on each page, see --tmpfs-mmap-file.
3731
3732 --tmpfs-mmap-file
3733 enable tmpfs file based memory mapping and by default use syn‐
3734 chronous msync'ing on each page.
3735
3736 --tree N
3737 start N workers that exercise tree data structures. The default
              is to add, find and remove 250,000 64 bit integers in AVL
              (avl), Red-Black (rb), Splay (splay) and binary trees. The
3740 intention of this stressor is to exercise memory and cache with
3741 the various tree operations.
3742
3743 --tree-ops N
              stop tree stressors after N bogo ops. A bogo op covers the
              addition, finding and removal of all the items in the
              tree(s).
3746
3747 --tree-size N
3748 specify the size of the tree, where N is the number of 64 bit
3749 integers to be added into the tree.
3750
3751 --tree-method [ all | avl | binary | rb | splay ]
              specify the tree to be used. By default, all the tree types
              are used (the 'all' option).
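
              For example, an illustrative invocation exercising just the
              AVL tree with a smaller working set:

                 stress-ng --tree 2 --tree-method avl --tree-size 100000 --tree-ops 100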
3754
3755 --tsc N
3756 start N workers that read the Time Stamp Counter (TSC) 256 times
3757 per loop iteration (bogo operation). This exercises the tsc
3758 instruction for x86, the mftb instruction for ppc64 and the
3759 rdcycle instruction for RISC-V.
3760
3761 --tsc-ops N
3762 stop the tsc workers after N bogo operations are completed.
3763
3764 --tsearch N
3765 start N workers that insert, search and delete 32 bit integers
3766 on a binary tree using tsearch(3), tfind(3) and tdelete(3). By
3767 default, there are 65536 randomized integers used in the tree.
3768 This is a useful method to exercise random access of memory and
3769 processor cache.
3770
3771 --tsearch-ops N
3772 stop the tsearch workers after N bogo tree operations are com‐
3773 pleted.
3774
3775 --tsearch-size N
3776 specify the size (number of 32 bit integers) in the array to
3777 tsearch. Size can be from 1K to 4M.
3778
3779 --tun N
              start N workers that create a network tunnel device, send
              and receive packets over the tunnel using UDP and then destroy
3782 it. A new random 192.168.*.* IPv4 address is used each time a
3783 tunnel is created.
3784
3785 --tun-ops N
3786 stop after N iterations of creating/sending/receiving/destroying
3787 a tunnel.
3788
3789 --tun-tap
              use a network tap device using level 2 frames (bridging) rather
3791 than a tun device for level 3 raw packets (tunnelling).
3792
3793 --udp N
3794 start N workers that transmit data using UDP. This involves a
              pair of client/server processes performing rapid connects,
              sends, receives and disconnects on the local host.
3797
3798 --udp-domain D
3799 specify the domain to use, the default is ipv4. Currently ipv4,
3800 ipv6 and unix are supported.
3801
3802 --udp-lite
3803 use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6
3804 domains).
3805
3806 --udp-ops N
3807 stop udp stress workers after N bogo operations.
3808
3809 --udp-port P
              start at port P. For N udp worker processes, ports P to
              P + N - 1 are used. By default, ports 7000 upwards are used.
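
              For example, an illustrative invocation transmitting UDP
              packets over the ipv6 domain from a non-default starting
              port:

                 stress-ng --udp 2 --udp-domain ipv6 --udp-port 8000 --timeout 30s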
3812
3813 --udp-flood N
3814 start N workers that attempt to flood the host with UDP packets
              to random ports. The IP address of the packets is currently not
3816 spoofed. This is only available on systems that support
3817 AF_PACKET.
3818
3819 --udp-flood-domain D
3820 specify the domain to use, the default is ipv4. Currently ipv4
3821 and ipv6 are supported.
3822
3823 --udp-flood-ops N
3824 stop udp-flood stress workers after N bogo operations.
3825
3826 --unshare N
3827 start N workers that each fork off 32 child processes, each of
3828 which exercises the unshare(2) system call by disassociating
3829 parts of the process execution context. (Linux only).
3830
3831 --unshare-ops N
3832 stop after N bogo unshare operations.
3833
3834 --uprobe N
3835 start N workers that trace the entry to libc function getpid()
3836 using the Linux uprobe kernel tracing mechanism. This requires
3837 CAP_SYS_ADMIN capabilities and a modern Linux uprobe capable
3838 kernel.
3839
3840 --uprobe-ops N
3841 stop uprobe tracing after N trace events of the function that is
3842 being traced.
3843
3844 -u N, --urandom N
3845 start N workers reading /dev/urandom (Linux only). This will
3846 load the kernel random number source.
3847
3848 --urandom-ops N
3849 stop urandom stress workers after N urandom bogo read operations
3850 (Linux only).
3851
3852 --userfaultfd N
3853 start N workers that generate write page faults on a small
3854 anonymously mapped memory region and handle these faults using
3855 the user space fault handling via the userfaultfd mechanism.
3856 This will generate a large quantity of major page faults and
3857 also context switches during the handling of the page faults.
3858 (Linux only).
3859
3860 --userfaultfd-ops N
3861 stop userfaultfd stress workers after N page faults.
3862
3863 --userfaultfd-bytes N
3864 mmap N bytes per userfaultfd worker to page fault on, the
3865 default is 16MB. One can specify the size as % of total avail‐
3866 able memory or in units of Bytes, KBytes, MBytes and GBytes
3867 using the suffix b, k, m or g.
3868
3869 --utime N
3870 start N workers updating file timestamps. This is mainly CPU
3871 bound when the default is used as the system flushes metadata
3872 changes only periodically.
3873
3874 --utime-ops N
3875 stop utime stress workers after N utime bogo operations.
3876
3877 --utime-fsync
3878 force metadata changes on each file timestamp update to be
3879 flushed to disk. This forces the test to become I/O bound and
3880 will result in many dirty metadata writes.
3881
3882 --vdso N
3883 start N workers that repeatedly call each of the system call
3884 functions in the vDSO (virtual dynamic shared object). The vDSO
3885 is a shared library that the kernel maps into the address space
3886 of all user-space applications to allow fast access to kernel
              data for some system calls without the need of performing an
3888 expensive system call.
3889
3890 --vdso-ops N
3891 stop after N vDSO functions calls.
3892
3893 --vdso-func F
3894 Instead of calling all the vDSO functions, just call the vDSO
3895 function F. The functions depend on the kernel being used, but
3896 are typically clock_gettime, getcpu, gettimeofday and time.
3897
3898 --vecmath N
3899 start N workers that perform various unsigned integer math oper‐
3900 ations on various 128 bit vectors. A mix of vector math opera‐
3901 tions are performed on the following vectors: 16 × 8 bits, 8 ×
3902 16 bits, 4 × 32 bits, 2 × 64 bits. The metrics produced by this
3903 mix depend on the processor architecture and the vector math
3904 optimisations produced by the compiler.
3905
3906 --vecmath-ops N
3907 stop after N bogo vector integer math operations.
3908
3909 --verity N
              start N workers that exercise read-only file based authenticity
3911 protection using the verity ioctls FS_IOC_ENABLE_VERITY and
3912 FS_IOC_MEASURE_VERITY. This requires file systems with verity
3913 support (currently ext4 and f2fs on Linux) with the verity fea‐
              ture enabled. The test attempts to create a small file with
3915 multiple small extents and enables verity on the file and veri‐
3916 fies it. It also checks to see if the file has verity enabled
3917 with the FS_VERITY_FL bit set on the file flags.
3918
3919 --verity-ops N
3920 stop the verity workers after N file create, enable verity,
3921 check verity and unlink cycles.
3922
3923 --vfork N
3924 start N workers continually vforking children that immediately
3925 exit.
3926
3927 --vfork-ops N
3928 stop vfork stress workers after N bogo operations.
3929
3930 --vfork-max P
3931 create P processes and then wait for them to exit per iteration.
3932 The default is just 1; higher values will create many temporary
3933 zombie processes that are waiting to be reaped. One can poten‐
3934 tially fill up the process table using high values for
3935 --vfork-max and --vfork.
3936
3937 --vforkmany N
3938 start N workers that spawn off a chain of vfork children until
3939 the process table fills up and/or vfork fails. vfork can
3940 rapidly create child processes and the parent process has to
3941 wait until the child dies, so this stressor rapidly fills up the
3942 process table.
3943
3944 --vforkmany-ops N
3945 stop vforkmany stressors after N vforks have been made.
3946
3947 -m N, --vm N
3948 start N workers continuously calling mmap(2)/munmap(2) and writ‐
3949 ing to the allocated memory. Note that this can cause systems to
              trip the kernel OOM killer on Linux systems if insufficient
              physical memory and swap are available.
3952
3953 --vm-bytes N
3954 mmap N bytes per vm worker, the default is 256MB. One can spec‐
3955 ify the size as % of total available memory or in units of
3956 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
3957
3958 --vm-ops N
3959 stop vm workers after N bogo operations.
3960
3961 --vm-hang N
3962 sleep N seconds before unmapping memory, the default is zero
3963 seconds. Specifying 0 will do an infinite wait.
3964
3965 --vm-keep
3966 do not continually unmap and map memory, just keep on re-writing
3967 to it.
3968
3969 --vm-locked
3970 Lock the pages of the mapped region into memory using mmap
3971 MAP_LOCKED (since Linux 2.5.37). This is similar to locking
3972 memory as described in mlock(2).
3973
3974 --vm-madvise advice
3975 Specify the madvise 'advice' option used on the memory mapped
              regions used in the vm stressor. Non-Linux systems will only
              have the 'normal' madvise advice; Linux systems support 'dont‐
3978 need', 'hugepage', 'mergeable' , 'nohugepage', 'normal', 'ran‐
3979 dom', 'sequential', 'unmergeable' and 'willneed' advice. If this
3980 option is not used then the default is to pick random madvise
3981 advice for each mmap call. See madvise(2) for more details.
3982
3983 --vm-method m
3984 specify a vm stress method. By default, all the stress methods
3985 are exercised sequentially, however one can specify just one
              method to be used if required. Each of the vm workers has 3
3987 phases:
3988
              1. Initialised. The anonymously mapped memory region is set to a
3990 known pattern.
3991
3992 2. Exercised. Memory is modified in a known predictable way.
3993 Some vm workers alter memory sequentially, some use small or
3994 large strides to step along memory.
3995
3996 3. Checked. The modified memory is checked to see if it matches
3997 the expected result.
3998
              The vm methods containing 'prime' in their name have a
              stride of the largest prime less than 2^64, allowing them to
              thoroughly step through memory and touch all locations just
              once while also avoiding touching memory cells next to each
              other. This strategy exercises the cache and page
              non-locality.
4004
              Since the memory being exercised is virtually mapped, there
              is no guarantee of touching page addresses in any particular
              physical order. These workers should not be used to test that
              all the system's memory is working correctly either; use tools
              such as memtest86 instead.
4010
4011 The vm stress methods are intended to exercise memory in ways to
4012 possibly find memory issues and to try to force thermal errors.
4013
4014 Available vm stress methods are described as follows:
4015
4016 Method Description
4017 all iterate over all the vm stress methods
4018 as listed below.
4019 flip sequentially work through memory 8
                           times, each time with just one bit in
                           memory flipped (inverted). This will effec‐
4022 tively invert each byte in 8 passes.
4023 galpat-0 galloping pattern zeros. This sets all
4024 bits to 0 and flips just 1 in 4096 bits
4025 to 1. It then checks to see if the 1s
4026 are pulled down to 0 by their neighbours
                           or if the neighbours have been pulled up
4028 to 1.
4029 galpat-1 galloping pattern ones. This sets all
4030 bits to 1 and flips just 1 in 4096 bits
4031 to 0. It then checks to see if the 0s
4032 are pulled up to 1 by their neighbours
                           or if the neighbours have been pulled
4034 down to 0.
4037 gray fill the memory with sequential gray
4038 codes (these only change 1 bit at a time
4039 between adjacent bytes) and then check
4040 if they are set correctly.
4041 incdec work sequentially through memory twice,
4042 the first pass increments each byte by a
4043 specific value and the second pass
4044 decrements each byte back to the origi‐
4045 nal start value. The increment/decrement
4046 value changes on each invocation of the
4047 stressor.
4048 inc-nybble initialise memory to a set value (that
4049 changes on each invocation of the stres‐
4050 sor) and then sequentially work through
4051 each byte incrementing the bottom 4 bits
4052 by 1 and the top 4 bits by 15.
4053 rand-set sequentially work through memory in 64
4054 bit chunks setting bytes in the chunk to
4055 the same 8 bit random value. The random
4056 value changes on each chunk. Check that
4057 the values have not changed.
4058 rand-sum sequentially set all memory to random
4059 values and then summate the number of
4060 bits that have changed from the original
4061 set values.
4062 read64 sequentially read memory using 32 x 64
4063 bit reads per bogo loop. Each loop
4064 equates to one bogo operation. This
4065 exercises raw memory reads.
4066 ror fill memory with a random pattern and
4067 then sequentially rotate 64 bits of mem‐
4068 ory right by one bit, then check the
4069 final load/rotate/stored values.
4070 swap fill memory in 64 byte chunks with ran‐
                           dom patterns. Then swap each 64 byte chunk
4072 with a randomly chosen chunk. Finally,
4073 reverse the swap to put the chunks back
4074 to their original place and check if the
4075 data is correct. This exercises adjacent
4076 and random memory load/stores.
              move-inv     sequentially fill memory 64 bits at a
                           time with random values, and
4079 then check if the memory is set cor‐
4080 rectly. Next, sequentially invert each
4081 64 bit pattern and again check if the
4082 memory is set as expected.
4083 modulo-x fill memory over 23 iterations. Each
4084 iteration starts one byte further along
4085 from the start of the memory and steps
4086 along in 23 byte strides. In each
4087 stride, the first byte is set to a ran‐
4088 dom pattern and all other bytes are set
                           to the inverse. Then it checks to see if
4090 the first byte contains the expected
4091 random pattern. This exercises cache
4092 store/reads as well as seeing if neigh‐
4093 bouring cells influence each other.
              prime-0      iterate 8 times by stepping through mem‐
                           ory in very large prime strides clearing
                           just one bit at a time in every byte.
                           Then check to see if all bits are set to
                           zero.
              prime-1      iterate 8 times by stepping through mem‐
                           ory in very large prime strides setting
                           just one bit at a time in every byte.
                           Then check to see if all bits are set to
                           one.
              prime-gray-0 first step through memory in very large
                           prime strides clearing just one bit
                           (based on a gray code) in every byte.
                           Next, repeat this but clear the other 7
                           bits. Then check to see if all bits are
                           set to zero.
              prime-gray-1 first step through memory in very large
                           prime strides setting just one bit (based
                           on a gray code) in every byte. Next,
                           repeat this but set the other 7 bits.
                           Then check to see if all bits are set to
                           one.
4116 rowhammer try to force memory corruption using the
4117 rowhammer memory stressor. This fetches
4118 two 32 bit integers from memory and
4119 forces a cache flush on the two
4120 addresses multiple times. This has been
4121 known to force bit flipping on some
4122 hardware, especially with lower fre‐
4123 quency memory refresh cycles.
4127 walk-0d for each byte in memory, walk through
4128 each data line setting them to low (and
4129 the others are set high) and check that
4130 the written value is as expected. This
4131 checks if any data lines are stuck.
4132 walk-1d for each byte in memory, walk through
4133 each data line setting them to high (and
4134 the others are set low) and check that
4135 the written value is as expected. This
4136 checks if any data lines are stuck.
4137 walk-0a in the given memory mapping, work
4138 through a range of specially chosen
4139 addresses working through address lines
4140 to see if any address lines are stuck
4141 low. This works best with physical mem‐
4142 ory addressing, however, exercising
4143 these virtual addresses has some value
4144 too.
4145 walk-1a in the given memory mapping, work
4146 through a range of specially chosen
4147 addresses working through address lines
4148 to see if any address lines are stuck
4149 high. This works best with physical mem‐
4150 ory addressing, however, exercising
4151 these virtual addresses has some value
4152 too.
4153 write64 sequentially write memory using 32 x 64
4154 bit writes per bogo loop. Each loop
4155 equates to one bogo operation. This
4156 exercises raw memory writes. Note that
4157 memory writes are not checked at the end
4158 of each test iteration.
4159 zero-one set all memory bits to zero and then
4160 check if any bits are not zero. Next,
4161 set all the memory bits to one and check
4162 if any bits are not one.
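
              For example, an illustrative invocation that keeps the
              mappings in place and exercises just the flip method over
              25% of available memory:

                 stress-ng --vm 2 --vm-bytes 25% --vm-method flip --vm-keep --timeout 60s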
4163
4164 --vm-populate
4165 populate (prefault) page tables for the memory mappings; this
4166 can stress swapping. Only available on systems that support
4167 MAP_POPULATE (since Linux 2.5.46).
4168
4169 --vm-addr N
4170 start N workers that exercise virtual memory addressing using
4171 various methods to walk through a memory mapped address range.
4172 This will exercise mapped private addresses from 8MB to 64MB per
4173 worker and try to generate cache and TLB inefficient addressing
4174 patterns. Each method will set the memory to a random pattern in
4175 a write phase and then sanity check this in a read phase.
4176
4177 --vm-addr-ops N
              stop the vm-addr workers after N bogo addressing passes.
4179
4180 --vm-addr-method M
4181 specify a vm address stress method. By default, all the stress
4182 methods are exercised sequentially, however one can specify just
4183 one method to be used if required.
4184
4185 Available vm address stress methods are described as follows:
4186
4187 Method Description
4188 all iterate over all the vm stress methods
4189 as listed below.
4190 pwr2 work through memory addresses in steps
4191 of powers of two.
              pwr2inv      like pwr2, but with all the relevant
4193 address bits inverted.
4194 gray work through memory with gray coded
4195 addresses so that each change of address
4196 just changes 1 bit compared to the pre‐
4197 vious address.
              grayinv      like gray, but with all the relevant
4199 address bits inverted, hence all bits
4200 change apart from 1 in the address
4201 range.
4202 rev work through the address range with the
4203 bits in the address range reversed.
4204 revinv like rev, but with all the relevant
4205 address bits inverted.
4206 inc work through the address range forwards
4207 sequentially, byte by byte.
4208 incinv like inc, but with all the relevant
4209 address bits inverted.
4210 dec work through the address range backwards
4211 sequentially, byte by byte.
4212 decinv like dec, but with all the relevant
4213 address bits inverted.
4214
4215 --vm-rw N
4216 start N workers that transfer memory to/from a parent/child
              using process_vm_writev(2) and process_vm_readv(2). This
              feature is only supported on Linux. Memory transfers are
              only ver‐
4219 ified if the --verify option is enabled.
4220
4221 --vm-rw-ops N
4222 stop vm-rw workers after N memory read/writes.
4223
4224 --vm-rw-bytes N
4225 mmap N bytes per vm-rw worker, the default is 16MB. One can
4226 specify the size as % of total available memory or in units of
4227 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4228
4229 --vm-segv N
4230 start N workers that create a child process that unmaps its
4231 address space causing a SIGSEGV on return from the unmap.
4232
4233 --vm-segv-ops N
4234 stop after N bogo vm-segv SIGSEGV faults.
4235
4236 --vm-splice N
4237 move data from memory to /dev/null through a pipe without any
4238 copying between kernel address space and user address space
4239 using vmsplice(2) and splice(2). This is only available for
4240 Linux.
4241
4242 --vm-splice-ops N
4243 stop after N bogo vm-splice operations.
4244
4245 --vm-splice-bytes N
4246 transfer N bytes per vmsplice call, the default is 64K. One can
4247 specify the size as % of total available memory or in units of
4248 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
4249
4250 --wait N
4251 start N workers that spawn off two children; one spins in a
4252 pause(2) loop, the other continually stops and continues the
4253 first. The controlling process waits on the first child to be
4254 resumed by the delivery of SIGCONT using waitpid(2) and
4255 waitid(2).
4256
4257 --wait-ops N
4258 stop after N bogo wait operations.
4259
4260 --watchdog N
              start N workers that exercise the /dev/watchdog watchdog
              interface by opening it, performing various watchdog specific
              ioctl(2) commands on the device and closing it. Before
              closing, the special watchdog magic close message is written
              to the
4265 device to try and force it to never trip a watchdog reboot after
4266 the stressor has been run. Note that this stressor needs to be
4267 run as root with the --pathological option and is only available
4268 on Linux.
4269
4270 --watchdog-ops N
4271 stop after N bogo operations on the watchdog device.
4272
4273 --wcs N
4274 start N workers that exercise various libc wide character string
4275 functions on random strings.
4276
4277 --wcs-method wcsfunc
4278 select a specific libc wide character string function to stress.
4279 Available string functions to stress are: all, wcscasecmp,
4280 wcscat, wcschr, wcscoll, wcscmp, wcscpy, wcslen, wcsncasecmp,
4281 wcsncat, wcsncmp, wcsrchr and wcsxfrm. The 'all' method is the
4282 default and will exercise all the string methods.
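
              For illustration, a trivial use of two of the functions
              being exercised (on fixed strings here rather than the
              stressor's random ones):

              #include <stdio.h>
              #include <wchar.h>

              int main(void)
              {
                  const wchar_t *a = L"stress";
                  const wchar_t *b = L"stress-ng";

                  printf("wcslen: %zu, wcscmp: %d\n",
                         wcslen(b), wcscmp(a, b));
                  return 0;
              }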
4283
4284 --wcs-ops N
4285 stop after N bogo wide character string operations.
4286
4287 --x86syscall N
4288 start N workers that repeatedly exercise the x86-64 syscall
4289 instruction to call the getcpu(2), gettimeofday(2) and time(2)
              system calls using the Linux vsyscall handler. Only for Linux.
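
              An illustrative sketch of issuing the syscall instruction
              directly for gettimeofday(2), bypassing the libc wrapper;
              the kernel clobbers rcx and r11, hence the clobber list:

              #include <stddef.h>
              #include <stdio.h>
              #include <sys/syscall.h>
              #include <sys/time.h>

              static long raw_gettimeofday(struct timeval *tv)
              {
                  long ret;

                  __asm__ volatile ("syscall"
                      : "=a" (ret)                     /* rax: return value */
                      : "a" (SYS_gettimeofday),        /* rax: syscall number */
                        "D" (tv), "S" (NULL)           /* rdi, rsi: arguments */
                      : "rcx", "r11", "memory");
                  return ret;
              }

              int main(void)
              {
                  struct timeval tv;

                  if (raw_gettimeofday(&tv) == 0)
                      printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
                  return 0;
              }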
4291
4292 --x86syscall-ops N
4293 stop after N x86syscall system calls.
4294
4295 --x86syscall-func F
4296 Instead of exercising the 3 syscall system calls, just call the
4297 syscall function F. The function F must be one of getcpu, get‐
4298 timeofday and time.
4299
4300 --xattr N
4301 start N workers that create, update and delete batches of
4302 extended attributes on a file.
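
              A sketch of one create/read/delete cycle on a single
              user.* attribute (the stressor batches many); this needs
              a filesystem with extended attribute support:

              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/xattr.h>
              #include <unistd.h>

              int main(void)
              {
                  char val[16];
                  int fd = open("xattr.tmp", O_CREAT | O_RDWR, 0600);

                  if (fd < 0)
                      return 1;
                  if (fsetxattr(fd, "user.stress", "value0", 6,
                                XATTR_CREATE) < 0)
                      perror("fsetxattr");
                  ssize_t n = fgetxattr(fd, "user.stress", val, sizeof(val));
                  if (n >= 0)
                      printf("user.stress = %.*s\n", (int)n, val);
                  if (fremovexattr(fd, "user.stress") < 0)
                      perror("fremovexattr");
                  close(fd);
                  unlink("xattr.tmp");
                  return 0;
              }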
4303
4304 --xattr-ops N
4305 stop after N bogo extended attribute operations.
4306
4307 -y N, --yield N
4308 start N workers that call sched_yield(2). This stressor ensures
              that at least 2 child processes per CPU exercise sched_yield(2)
4310 no matter how many workers are specified, thus always ensuring
4311 rapid context switching.
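
              The heart of the stressor is no more than a tight yield
              loop, as in this sketch:

              #include <sched.h>

              int main(void)
              {
                  for (int i = 0; i < 1000000; i++)
                      sched_yield();  /* give up the CPU, forcing a reschedule */
                  return 0;
              }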
4312
4313 --yield-ops N
4314 stop yield stress workers after N sched_yield(2) bogo opera‐
4315 tions.
4316
4317 --zero N
4318 start N workers reading /dev/zero.
4319
4320 --zero-ops N
4321 stop zero stress workers after N /dev/zero bogo read operations.
4322
4323 --zlib N
4324 start N workers compressing and decompressing random data using
4325 zlib. Each worker has two processes, one that compresses random
4326 data and pipes it to another process that decompresses the data.
4327 This stressor exercises CPU, cache and memory.
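
              A sketch of one 64K compress/decompress round trip using
              zlib's convenience API rather than the streaming pipe
              arrangement described above (link with -lz):

              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>
              #include <zlib.h>

              int main(void)
              {
                  unsigned char src[65536], comp[70000], out[65536];
                  uLongf clen = sizeof(comp), olen = sizeof(out);

                  for (size_t i = 0; i < sizeof(src); i++)
                      src[i] = (unsigned char)rand(); /* hard-to-compress input */

                  if (compress2(comp, &clen, src, sizeof(src),
                                Z_BEST_COMPRESSION) != Z_OK)
                      return 1;
                  if (uncompress(out, &olen, comp, clen) != Z_OK)
                      return 1;
                  printf("64K -> %lu bytes, round trip %s\n",
                         (unsigned long)clen,
                         memcmp(src, out, sizeof(src)) ? "FAILED" : "ok");
                  return 0;
              }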
4328
4329 --zlib-ops N
4330 stop after N bogo compression operations, each bogo compression
4331 operation is a compression of 64K of random data at the highest
4332 compression level.
4333
4334 --zlib-level L
4335 specify the compression level (0..9), where 0 = no compression,
4336 1 = fastest compression and 9 = best compression.
4337
4338 --zlib-method method
4339 specify the type of random data to send to the zlib library. By
4340 default, the data stream is created from a random selection of
4341 the different data generation processes. However one can spec‐
4342 ify just one method to be used if required. Available zlib data
4343 generation methods are described as follows:
4344
4345 Method Description
4346 00ff randomly distributed 0x00 and 0xFF values.
4347 ascii01 randomly distributed ASCII 0 and 1 characters.
4348 asciidigits randomly distributed ASCII digits in the range
                          of 0 to 9.
4350 bcd packed binary coded decimals, 0..99 packed into
4351 2 4-bit nybbles.
4352 binary 32 bit random numbers.
4353 brown 8 bit brown noise (Brownian motion/Random Walk
4354 noise).
4355 double double precision floating point numbers from
4356 sin(θ).
              fixed        data stream of the repeated 32 bit value
                           0x04030201.
4358 gray 16 bit gray codes generated from an increment‐
4359 ing counter.
4360 latin Random latin sentences from a sample of Lorem
4361 Ipsum text.
              logmap       Values generated from the logistic map of
                           the equation Xn+1 = r × Xn × (1 - Xn)
                           where r is greater than approximately
                           3.56994567 to produce chaotic data (see
                           the sketch after this table). The values
                           are scaled by a large arbitrary value and
                           the lower 8 bits of this value are
                           compressed.
4367 lfsr32 Values generated from a 32 bit Galois linear
4368 feedback shift register using the polynomial
4369 x↑32 + x↑31 + x↑29 + x + 1. This generates a
4370 ring of 2↑32 - 1 unique values (all 32 bit
4371 values except for 0).
4372 lrand48 Uniformly distributed pseudo-random 32 bit val‐
4373 ues generated from lrand48(3).
4374 morse Morse code generated from random latin sen‐
4375 tences from a sample of Lorem Ipsum text.
4376 nybble randomly distributed bytes in the range of 0x00
4377 to 0x0f.
4378 objcode object code selected from a random start point
4379 in the stress-ng text segment.
4380 parity 7 bit binary data with 1 parity bit.
4381 pink pink noise in the range 0..255 generated using
4382 the Gardner method with the McCartney selection
4383 tree optimization. Pink noise is where the
4384 power spectral density is inversely propor‐
4385 tional to the frequency of the signal and hence
4386 is slightly compressible.
4387 random segments of the data stream are created by ran‐
4388 domly calling the different data generation
4389 methods.
4390 rarely1 data that has a single 1 in every 32 bits, ran‐
4391 domly located.
4392 rarely0 data that has a single 0 in every 32 bits, ran‐
4393 domly located.
4394 text random ASCII text.
4395 utf8 random 8 bit data encoded to UTF-8.
4396 zero all zeros, compresses very easily.
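
              A sketch of the logmap generator; the scaling constant
              below is an arbitrary choice for illustration, not
              stress-ng's exact value:

              #include <stdint.h>
              #include <stdio.h>

              int main(void)
              {
                  double x = 0.4, r = 3.9;  /* r in the chaotic regime */

                  for (int i = 0; i < 16; i++) {
                      x = r * x * (1.0 - x);
                      /* scale up and keep only the low 8 bits */
                      uint8_t byte = (uint8_t)((uint64_t)(x * 1e9) & 0xff);
                      printf("%02x ", byte);
                  }
                  putchar('\n');
                  return 0;
              }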
4397
4398 --zlib-window-bits W
4399 specify the window bits used to specify the history buffer size.
4400 The value is specified as the base two logarithm of the buffer
4401 size (e.g. value 9 is 2^9 = 512 bytes). Default is 15.
4402
4403 Values:
              -8 to -15: raw deflate format
4405 8-15: zlib format
4406 24-31: gzip format
4407 40-47: inflate auto format detection using zlib deflate format
4408
       --zlib-mem-level L
              specify the reserved compression state memory for zlib.
              Default is 8.
4411
4412 Values:
4413 1 = minimum memory usage
4414 9 = maximum memory usage
4415
4416 --zlib-strategy S
4417 specifies the strategy to use when deflating data. This is used
4418 to tune the compression algorithm. Default is 0.
4419
4420 Values:
4421 0: used for normal data (Z_DEFAULT_STRATEGY)
4422 1: for data generated by a filter or predictor (Z_FILTERED)
              2: forces Huffman encoding only (Z_HUFFMAN_ONLY)
              3: limits match distances to one, i.e. run-length encoding (Z_RLE)
              4: prevents the use of dynamic Huffman codes (Z_FIXED)
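
              The window bits, memory level and strategy options above
              map directly onto the parameters of zlib's
              deflateInit2(); a minimal sketch (link with -lz):

              #include <stdio.h>
              #include <string.h>
              #include <zlib.h>

              int main(void)
              {
                  z_stream strm;

                  memset(&strm, 0, sizeof(strm));
                  /* level 9, 15 window bits (32K history, zlib format),
                     memory level 8, default strategy */
                  if (deflateInit2(&strm, 9, Z_DEFLATED, 15, 8,
                                   Z_DEFAULT_STRATEGY) != Z_OK) {
                      fprintf(stderr, "deflateInit2 failed\n");
                      return 1;
                  }
                  deflateEnd(&strm);
                  return 0;
              }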
4426
4427 --zlib-stream-bytes S
              specify the number of bytes to deflate until deflate
              should finish the block and return with Z_STREAM_END. One
              can specify the size in units of Bytes, KBytes, MBytes
              and GBytes using the suffix b, k, m or g. Default is 0,
              which creates an endless stream until the stressor ends.
4433
4434 Values:
4435 0: creates an endless deflate stream until stressor stops
              n: creates a stream of n bytes over and over again.
                 Each block will be closed with Z_STREAM_END.
4437
4438
4439 --zombie N
4440 start N workers that create zombie processes. This will rapidly
4441 try to create a default of 8192 child processes that immediately
4442 die and wait in a zombie state until they are reaped. Once the
4443 maximum number of processes is reached (or fork fails because
4444 one has reached the maximum allowed number of children) the old‐
4445 est child is reaped and a new process is then created in a
4446 first-in first-out manner, and then repeated.
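
              A tiny sketch of zombie creation: children exit at once
              but remain as zombies until the parent reaps them:

              #include <stdio.h>
              #include <sys/wait.h>
              #include <unistd.h>

              #define MAX_ZOMBIES 8

              int main(void)
              {
                  pid_t pids[MAX_ZOMBIES];

                  for (int i = 0; i < MAX_ZOMBIES; i++) {
                      pids[i] = fork();
                      if (pids[i] == 0)
                          _exit(0);  /* dies now, lingers as a zombie */
                  }
                  sleep(1);          /* zombies show as <defunct> in ps(1) */
                  for (int i = 0; i < MAX_ZOMBIES; i++)
                      waitpid(pids[i], NULL, 0);  /* reap oldest first */
                  printf("created and reaped %d zombies\n", MAX_ZOMBIES);
                  return 0;
              }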
4447
4448 --zombie-ops N
4449 stop zombie stress workers after N bogo zombie operations.
4450
4451 --zombie-max N
4452 try to create as many as N zombie processes. This may not be
4453 reached if the system limit is less than N.
4454
EXAMPLES
       stress-ng --vm 8 --vm-bytes 80% -t 1h
4457
4458 run 8 virtual memory stressors that combined use 80% of the
4459 available memory for 1 hour. Thus each stressor uses 10% of the
4460 available memory.
4461
4462 stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
4463
4464 runs for 60 seconds with 4 cpu stressors, 2 io stressors and 1
4465 vm stressor using 1GB of virtual memory.
4466
4467 stress-ng --iomix 2 --iomix-bytes 10% -t 10m
4468
4469 runs 2 instances of the mixed I/O stressors using a total of 10%
4470 of the available file system space for 10 minutes. Each stressor
4471 will use 5% of the available file system space.
4472
4473 stress-ng --cyclic 1 --cyclic-dist 2500 --cyclic-method clock_ns
4474 --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
4475
4476 measures real time scheduling latencies created by the hdd
4477 stressor. This uses the high resolution nanosecond clock to mea‐
4478 sure latencies during sleeps of 10,000 nanoseconds. At the end
4479 of 1 minute of stressing, the latency distribution with 2500 ns
4480 intervals will be displayed. NOTE: this must be run with the
4481 CAP_SYS_NICE capability to enable the real time scheduling to
4482 get accurate measurements.
4483
4484 stress-ng --cpu 8 --cpu-ops 800000
4485
4486 runs 8 cpu stressors and stops after 800000 bogo operations.
4487
4488 stress-ng --sequential 2 --timeout 2m --metrics
4489
4490 run 2 simultaneous instances of all the stressors sequentially
4491 one by one, each for 2 minutes and summarise with performance
4492 metrics at the end.
4493
4494 stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
4495
4496 run 4 FFT cpu stressors, stop after 10000 bogo operations and
4497 produce a summary just for the FFT results.
4498
4499 stress-ng --cpu -1 --cpu-method all -t 1h --cpu-load 90
4500
4501 run cpu stressors on all online CPUs working through all the
4502 available CPU stressors for 1 hour, loading the CPUs at 90% load
4503 capacity.
4504
4505 stress-ng --cpu 0 --cpu-method all -t 20m
4506
4507 run cpu stressors on all configured CPUs working through all the
              available CPU stressors for 20 minutes.
4509
4510 stress-ng --all 4 --timeout 5m
4511
4512 run 4 instances of all the stressors for 5 minutes.
4513
4514 stress-ng --random 64
4515
4516 run 64 stressors that are randomly chosen from all the available
4517 stressors.
4518
4519 stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
4520
4521 run 64 instances of all the different cpu stressors and verify
4522 that the computations are correct for 10 minutes with a bogo
4523 operations summary at the end.
4524
4525 stress-ng --sequential -1 -t 10m
4526
4527 run all the stressors one by one for 10 minutes, with the number
4528 of instances of each stressor matching the number of online
4529 CPUs.
4530
4531 stress-ng --sequential 8 --class io -t 5m --times
4532
4533 run all the stressors in the io class one by one for 5 minutes
4534 each, with 8 instances of each stressor running concurrently and
4535 show overall time utilisation statistics at the end of the run.
4536
4537 stress-ng --all -1 --maximize --aggressive
4538
4539 run all the stressors (1 instance of each per online CPU) simul‐
4540 taneously, maximize the settings (memory sizes, file alloca‐
4541 tions, etc.) and select the most demanding/aggressive options.
4542
4543 stress-ng --random 32 -x numa,hdd,key
4544
4545 run 32 randomly selected stressors and exclude the numa, hdd and
              key stressors.
4547
4548 stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
4549
4550 run 4 instances of the VM stressors one after each other,
              excluding the bigheap, brk and stack stressors.
4552
4553 stress-ng --taskset 0,2-3 --cpu 3
4554
4555 run 3 instances of the CPU stressor and pin them to CPUs 0, 2
4556 and 3.
4557
EXIT STATUS
       Status   Description
4560 0 Success.
4561 1 Error; incorrect user options or a fatal resource issue in
4562 the stress-ng stressor harness (for example, out of mem‐
4563 ory).
4564 2 One or more stressors failed.
4565 3 One or more stressors failed to initialise because of lack
4566 of resources, for example ENOMEM (no memory), ENOSPC (no
4567 space on file system) or a missing or unimplemented system
4568 call.
4569 4 One or more stressors were not implemented on a specific
4570 architecture or operating system.
4571 5 A stressor has been killed by an unexpected signal.
       6        A stressor exited unexpectedly via exit(2), and timing
                metrics could not be gathered.
       7        The bogo ops metrics may be untrustworthy. This is most
4575 likely to occur when a stress test is terminated during
4576 the update of a bogo-ops counter such as when it has been
4577 OOM killed. A less likely reason is that the counter ready
4578 indicator has been corrupted.
4579
BUGS
       File bug reports at:
4582 https://launchpad.net/ubuntu/+source/stress-ng/+filebug
4583
SEE ALSO
       cpuburn(1), perf(1), stress(1), taskset(1)
4586
AUTHOR
       stress-ng was written by Colin King <colin.king@canonical.com> and is a
4589 clean room re-implementation and extension of the original stress tool
4590 by Amos Waterland. Thanks also for contributions from Abdul Haleem,
4591 Adrian Ratiu, André Wild, Baruch Siach, Carlos Santos, Christian
4592 Ehrhardt, Chunyu Hu, David Turner, Dominik B Czarnota, Fabrice
4593 Fontaine, Helmut Grohne, James Hunt, Jianshen Liu, Jim Rowan, Joseph
4594 DeVincentis, Khalid Elmously, Khem Raj, Luca Pizzamiglio, Luis Hen‐
4595 riques, Manoj Iyer, Matthew Tippett, Mauricio Faria de Oliveira, Piyush
4596 Goyal, Ralf Ramsauer, Rob Colclaser, Thadeu Lima de Souza Cascardo,
4597 Thia Wyrod, Tim Gardner, Tim Orling, Tommi Rantala, Zhiyi Sun and oth‐
4598 ers.
4599
NOTES
       Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
4602 all the stressor processes and ensures temporary files and shared mem‐
4603 ory segments are removed cleanly.
4604
4605 Sending a SIGUSR2 to stress-ng will dump out the current load average
4606 and memory statistics.
4607
4608 Note that the stress-ng cpu, io, vm and hdd tests are different imple‐
4609 mentations of the original stress tests and hence may produce different
4610 stress characteristics. stress-ng does not support any GPU stress
4611 tests.
4612
4613 The bogo operations metrics may change with each release because of
4614 bug fixes to the code, new features, compiler optimisations or changes
4615 in system call performance.
4616
COPYRIGHT
       Copyright © 2013-2021 Canonical Ltd.
4619 This is free software; see the source for copying conditions. There is
4620 NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
4621 PURPOSE.
4622
4623
4624
4625 Feb 25, 2021 STRESS-NG(1)