STRESS-NG(1)                General Commands Manual               STRESS-NG(1)
2
3
4
NAME
       stress-ng - a tool to load and stress a computer system
7
8
SYNOPSIS
       stress-ng [OPTION [ARG]] ...
11
12
DESCRIPTION
       stress-ng will stress test a computer system in various selectable
15 ways. It was designed to exercise various physical subsystems of a com‐
16 puter as well as the various operating system kernel interfaces.
17 stress-ng also has a wide range of CPU specific stress tests that exer‐
18 cise floating point, integer, bit manipulation and control flow.
19
20 stress-ng was originally intended to make a machine work hard and trip
21 hardware issues such as thermal overruns as well as operating system
22 bugs that only occur when a system is being thrashed hard. Use
23 stress-ng with caution as some of the tests can make a system run hot
24 on poorly designed hardware and also can cause excessive system thrash‐
25 ing which may be difficult to stop.
26
27 stress-ng can also measure test throughput rates; this can be useful to
28 observe performance changes across different operating system releases
29 or types of hardware. However, it has never been intended to be used as
30 a precise benchmark test suite, so do NOT use it in this manner.
31
32 Running stress-ng with root privileges will adjust out of memory set‐
33 tings on Linux systems to make the stressors unkillable in low memory
34 situations, so use this judiciously. With the appropriate privilege,
35 stress-ng can allow the ionice class and ionice levels to be adjusted,
36 again, this should be used with care.
37
38 One can specify the number of processes to invoke per type of stress
39 test; specifying a negative or zero value will select the number of
40 processors available as defined by sysconf(_SC_NPROCESSORS_CONF).
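
       For example, an illustrative invocation that starts one cpu stressor
       per configured processor and runs for 60 seconds:

              stress-ng --cpu 0 --timeout 60s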
41
OPTIONS
       General stress-ng control options:
44
45 --aggressive
46 enables more file, cache and memory aggressive options. This may
47 slow tests down, increase latencies and reduce the number of
48 bogo ops as well as changing the balance of user time vs system
49 time used depending on the type of stressor being used.
50
51 -a N, --all N
52 start N instances of each stressor. If N is less than zero, then
53 the number of CPUs online is used for the number of instances.
54 If N is zero, then the number of CPUs in the system is used.
55
56 -b N, --backoff N
57 wait N microseconds between the start of each stress worker
58 process. This allows one to ramp up the stress tests over time.
59
60 --class name
61 specify the class of stressors to run. Stressors are classified
62 into one or more of the following classes: cpu, cpu-cache,
63 device, io, interrupt, filesystem, memory, network, os, pipe,
64 scheduler and vm. Some stressors fall into just one class. For
65 example the 'get' stressor is just in the 'os' class. Other
66 stressors fall into more than one class, for example, the
67 'lsearch' stressor falls into the 'cpu', 'cpu-cache' and 'mem‐
68 ory' classes as it exercises all these three. Selecting a spe‐
69 cific class will run all the stressors that fall into that class
70 only when run with the --sequential option.
71
72 Specifying a name followed by a question mark (for example
73 --class vm?) will print out all the stressors in that specific
74 class.
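
       For example, to list the stressors in the vm class and then run all of
       them sequentially, one instance each, for 30 seconds per stressor:

              stress-ng --class vm?
              stress-ng --class vm --sequential 1 --timeout 30s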
75
76 -n, --dry-run
77 parse options, but do not run stress tests. A no-op.
78
79 -h, --help
80 show help.
81
82 --ignite-cpu
83 alter kernel controls to try and maximize the CPU. This requires
84 root privilege to alter various /sys interface controls. Cur‐
85 rently this only works for Intel P-State enabled x86 systems on
86 Linux.
87
88 --ionice-class class
89 specify ionice class (only on Linux). Can be idle (default),
90 besteffort, be, realtime, rt.
91
92 --ionice-level level
93 specify ionice level (only on Linux). For idle, 0 is the only
possible option. For besteffort or realtime, values range from 0
(highest priority) to 7 (lowest priority). See ionice(1) for more
96 details.
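
       For example, an illustrative run of two hdd stressors at the lowest
       best-effort I/O priority (appropriate privilege may be required to
       change the ionice settings):

              stress-ng --hdd 2 --ionice-class besteffort --ionice-level 7 -t 60s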
97
98 -k, --keep-name
99 by default, stress-ng will attempt to change the name of the
100 stress processes according to their functionality; this option
disables this and keeps the process name the same as that of the
parent process, that is, stress-ng.
103
104 --log-brief
105 by default stress-ng will report the name of the program, the
106 message type and the process id as a prefix to all output. The
107 --log-brief option will output messages without these fields to
108 produce a less verbose output.
109
110 --log-file filename
111 write messages to the specified log file.
112
113 --maximize
114 overrides the default stressor settings and instead sets these
115 to the maximum settings allowed. These defaults can always be
116 overridden by the per stressor settings options if required.
117
118 --metrics
119 output number of bogo operations in total performed by the
120 stress processes. Note that these are not a reliable metric of
121 performance or throughput and have not been designed to be used
122 for benchmarking whatsoever. The metrics are just a useful way
123 to observe how a system behaves when under various kinds of
124 load.
125
126 The following columns of information are output:
127
128 Column Heading Explanation
129
133 bogo ops number of iterations of the stressor
during the run. This is a metric of how
135 much overall "work" has been achieved
136 in bogo operations.
137 real time (secs) average wall clock duration (in sec‐
138 onds) of the stressor. This is the
139 total wall clock time of all the
140 instances of that particular stressor
141 divided by the number of these stres‐
142 sors being run.
143 usr time (secs) total user time (in seconds) consumed
144 running all the instances of the
145 stressor.
146 sys time (secs) total system time (in seconds) con‐
147 sumed running all the instances of
148 the stressor.
149 bogo ops/s (real time) total bogo operations per second
150 based on wall clock run time. The
151 wall clock time reflects the apparent
152 run time. The more processors one has
153 on a system the more the work load
154 can be distributed onto these and
155 hence the wall clock time will reduce
156 and the bogo ops rate will increase.
157 This is essentially the "apparent"
158 bogo ops rate of the system.
159 bogo ops/s (usr+sys time) total bogo operations per second
160 based on cumulative user and system
161 time. This is the real bogo ops rate
162 of the system taking into considera‐
tion the actual execution time
164 of the stressor across all the pro‐
165 cessors. Generally this will
166 decrease as one adds more concurrent
167 stressors due to contention on cache,
168 memory, execution units, buses and
169 I/O devices.
170
171 --metrics-brief
172 enable metrics and only output metrics that are non-zero.
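
       For example, an illustrative run that reports non-zero metrics for
       four cpu stressors after 60 seconds:

              stress-ng --cpu 4 --metrics-brief --timeout 60s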
173
174 --minimize
175 overrides the default stressor settings and instead sets these
176 to the minimum settings allowed. These defaults can always be
177 overridden by the per stressor settings options if required.
178
179 --no-advise
180 from version 0.02.26 stress-ng automatically calls madvise(2)
181 with random advise options before each mmap and munmap to stress
the vm subsystem a little harder. The --no-advise option
183 turns this default off.
184
185 --no-rand-seed
Do not seed the stress-ng pseudo-random number generator with a
quasi random start seed, but instead seed it with constant
values. This forces tests to run each time using the same start
conditions, which can be useful when one requires reproducible
stress tests.
191
192 --oomable
193 Do not respawn a stressor if it gets killed by the Out-of-Memory
194 (OOM) killer. The default behaviour is to restart a new
195 instance of a stressor if the kernel OOM killer terminates the
196 process. This option disables this default behaviour.
197
198 --page-in
199 touch allocated pages that are not in core, forcing them to be
200 paged back in. This is a useful option to force all the allo‐
201 cated pages to be paged in when using the bigheap, mmap and vm
202 stressors. It will severely degrade performance when the memory
203 in the system is less than the allocated buffer sizes. This
204 uses mincore(2) to determine the pages that are not in core and
205 hence need touching to page them back in.
206
207 --pathological
208 enable stressors that are known to hang systems. Some stressors
209 can quickly consume resources in such a way that they can
210 rapidly hang a system before the kernel can OOM kill them. These
211 stressors are not enabled by default, this option enables them,
212 but you probably don't want to do this. You have been warned.
213
214 --perf measure processor and system activity using perf events. Linux
215 only and caveat emptor, according to perf_event_open(2): "Always
216 double-check your results! Various generalized events have had
217 wrong values.". Note that with Linux 4.7 one needs to have
218 CAP_SYS_ADMIN capabilities for this option to work, or adjust
219 /proc/sys/kernel/perf_event_paranoid to below 2 to use this
220 without CAP_SYS_ADMIN.
221
222 -q, --quiet
223 do not show any output.
224
225 -r N, --random N
226 start N random stress workers. If N is 0, then the number of
227 configured processors is used for N.
228
229 --sched scheduler
230 select the named scheduler (only on Linux). To see the list of
231 available schedulers use: stress-ng --sched which
232
233 --sched-prio prio
234 select the scheduler priority level (only on Linux). If the
235 scheduler does not support this then the default priority level
236 of 0 is chosen.
237
238 --sequential N
239 sequentially run all the stressors one by one for a default of
240 60 seconds. The number of instances of each of the individual
241 stressors to be started is N. If N is less than zero, then the
242 number of CPUs online is used for the number of instances. If N
243 is zero, then the number of CPUs in the system is used. Use the
244 --timeout option to specify the duration to run each stressor.
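
       For example, an illustrative sequential run of just the memory class
       stressors, two instances each, for 30 seconds per stressor:

              stress-ng --sequential 2 --class memory --timeout 30s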
245
246 --stressors
247 output the names of the available stressors.
248
249 --syslog
250 log output (except for verbose -v messages) to the syslog.
251
252 --taskset list
253 set CPU affinity based on the list of CPUs provided; stress-ng
254 is bound to just use these CPUs (Linux only). The CPUs to be
used are specified by a comma separated list of CPU numbers (0 to N-1).
256 One can specify a range of CPUs using '-', for example:
257 --taskset 0,2-3,6,7-11
258
259 --temp-path path
260 specify a path for stress-ng temporary directories and temporary
261 files; the default path is the current working directory. This
262 path must have read and write access for the stress-ng stress
263 processes.
264
265 --thrash
266 This can only be used when running on Linux and with root privi‐
267 lege. This option starts a background thrasher process that
268 works through all the processes on a system and tries to page as
269 many pages in the processes as possible. This will cause con‐
270 siderable amount of thrashing of swap on an over-committed sys‐
271 tem.
272
273 -t N, --timeout N
274 stop stress test after N seconds. One can also specify the units
275 of time in seconds, minutes, hours, days or years with the suf‐
276 fix s, m, h, d or y.
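
       For example, the following illustrative invocations are equivalent:

              stress-ng --vm 1 --timeout 120
              stress-ng --vm 1 --timeout 2m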
277
278 --timer-slack N
279 adjust the per process timer slack to N nanoseconds (Linux
280 only). Increasing the timer slack allows the kernel to coalesce
timer events by adding some fuzziness to timer expiration times
282 and hence reduce wakeups. Conversely, decreasing the timer
283 slack will increase wakeups. A value of 0 for the timer-slack
284 will set the system default of 50,000 nanoseconds.
285
286 --times
287 show the cumulative user and system times of all the child pro‐
288 cesses at the end of the stress run. The percentage of utilisa‐
289 tion of available CPU time is also calculated from the number of
290 on-line CPUs in the system.
291
292 --tz collect temperatures from the available thermal zones on the
293 machine (Linux only). Some devices may have one or more thermal
zones, whereas others may have none.
295
296 -v, --verbose
297 show all debug, warnings and normal information output.
298
299 --verify
300 verify results when a test is run. This is not available on all
301 tests. This will sanity check the computations or memory con‐
302 tents from a test run and report to stderr any unexpected fail‐
303 ures.
304
305 -V, --version
306 show version.
307
308 -x, --exclude list
309 specify a list of one or more stressors to exclude (that is, do
310 not run them). This is useful to exclude specific stressors
311 when one selects many stressors to run using the --class option,
312 --sequential, --all and --random options. Example, run the cpu
313 class stressors concurrently and exclude the numa and search
314 stressors:
315
316 stress-ng --class cpu --all 1 -x numa,bsearch,hsearch,lsearch
317
318 -Y, --yaml filename
319 output gathered statistics to a YAML formatted file named 'file‐
320 name'.
321
322
323
324 Stressor specific options:
325
326 --affinity N
327 start N workers that rapidly change CPU affinity (only on
328 Linux). Rapidly switching CPU affinity can contribute to poor
329 cache behaviour.
330
331 --affinity-ops N
332 stop affinity workers after N bogo affinity operations (only on
333 Linux).
334
335 --affinity-rand
336 switch CPU affinity randomly rather than the default of sequen‐
337 tially.
338
339 --af-alg N
340 start N workers that exercise the AF_ALG socket domain by hash‐
341 ing and encrypting various sized random messages. This exercises
342 the SHA1, SHA224, SHA256, SHA384, SHA512, MD4, MD5, RMD128,
343 RMD160, RMD256, RMD320, WP256, WP384, WP512, TGR128, TGR160,
344 TGR192 hashes and the cbc(aes), lrw(aes), ofb(aes),
345 xts(twofish), xts(serpent), xts(cast6), xts(camellia),
lrw(twofish), lrw(cast6), lrw(camellia) and salsa20 skciphers.
347
348 --af-alg-ops N
349 stop af-alg workers after N AF_ALG messages are hashed.
350
351 --aio N
352 start N workers that issue multiple small asynchronous I/O
353 writes and reads on a relatively small temporary file using the
354 POSIX aio interface. This will just hit the file system cache
355 and soak up a lot of user and kernel time in issuing and han‐
356 dling I/O requests. By default, each worker process will handle
357 16 concurrent I/O requests.
358
359 --aio-ops N
360 stop POSIX asynchronous I/O workers after N bogo asynchronous
361 I/O requests.
362
363 --aio-requests N
364 specify the number of POSIX asynchronous I/O requests each
365 worker should issue, the default is 16; 1 to 4096 are allowed.
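
       For example, an illustrative run of four POSIX aio workers, each
       issuing 32 concurrent requests:

              stress-ng --aio 4 --aio-requests 32 --timeout 60s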
366
367 --aiol N
368 start N workers that issue multiple 4K random asynchronous I/O
369 writes using the Linux aio system calls io_setup(2), io_sub‐
370 mit(2), io_getevents(2) and io_destroy(2). By default, each
371 worker process will handle 16 concurrent I/O requests.
372
373 --aiol-ops N
374 stop Linux asynchronous I/O workers after N bogo asynchronous
375 I/O requests.
376
377 --aiol-requests N
378 specify the number of Linux asynchronous I/O requests each
379 worker should issue, the default is 16; 1 to 4096 are allowed.
380
381 --apparmor N
382 start N workers that exercise various parts of the AppArmor
383 interface. Currently one needs root permission to run this par‐
384 ticular test. This test is only available on Linux systems with
385 AppArmor support.
386
387 --apparmor-ops
388 stop the AppArmor workers after N bogo operations.
389
390 --atomic N
391 start N workers that exercise various GCC __atomic_*() built in
operations on 8, 16, 32 and 64 bit integers that are shared
393 among the N workers. This stressor is only available for builds
394 using GCC 4.7.4 or higher. The stressor forces many front end
395 cache stalls and cache references.
396
397 --atomic-ops N
398 stop the atomic workers after N bogo atomic operations.
399
400 -B N, --bigheap N
401 start N workers that grow their heaps by reallocating memory. If
402 the out of memory killer (OOM) on Linux kills the worker or the
403 allocation fails then the allocating process starts all over
404 again. Note that the OOM adjustment for the worker is set so
405 that the OOM killer will treat these workers as the first candi‐
406 date processes to kill.
407
408 --bigheap-ops N
409 stop the big heap workers after N bogo allocation operations are
410 completed.
411
412 --bigheap-growth N
413 specify amount of memory to grow heap by per iteration. Size can
414 be from 4K to 64MB. Default is 64K.
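
       For example, an illustrative single bigheap worker growing its heap in
       1 MB steps (assuming the usual k, m and g size suffixes are accepted
       for this option):

              stress-ng --bigheap 1 --bigheap-growth 1m --timeout 60s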
415
416 --bind-mount N
417 start N workers that repeatedly bind mount / to / inside a user
418 namespace. This can consume resources rapidly, forcing out of
419 memory situations. Do not use this stressor unless you want to
420 risk hanging your machine.
421
422 --bind-mount-ops N
423 stop after N bind mount bogo operations.
424
425 --brk N
426 start N workers that grow the data segment by one page at a time
427 using multiple brk(2) calls. Each successfully allocated new
428 page is touched to ensure it is resident in memory. If an out
429 of memory condition occurs then the test will reset the data
430 segment to the point before it started and repeat the data seg‐
431 ment resizing over again. The process adjusts the out of memory
432 setting so that it may be killed by the out of memory (OOM)
433 killer before other processes. If it is killed by the OOM
434 killer then it will be automatically re-started by a monitoring
435 parent process.
436
437 --brk-ops N
438 stop the brk workers after N bogo brk operations.
439
440 --brk-notouch
441 do not touch each newly allocated data segment page. This dis‐
442 ables the default of touching each newly allocated page and
hence avoids the kernel necessarily having to back the page with
real physical memory.
445
446 --bsearch N
447 start N workers that binary search a sorted array of 32 bit
448 integers using bsearch(3). By default, there are 65536 elements
449 in the array. This is a useful method to exercise random access
450 of memory and processor cache.
451
452 --bsearch-ops N
453 stop the bsearch worker after N bogo bsearch operations are com‐
454 pleted.
455
456 --bsearch-size N
457 specify the size (number of 32 bit integers) in the array to
458 bsearch. Size can be from 1K to 4M.
459
460 -C N, --cache N
461 start N workers that perform random wide spread memory read and
462 writes to thrash the CPU cache. The code does not intelligently
463 determine the CPU cache configuration and so it may be sub-opti‐
464 mal in producing hit-miss read/write activity for some proces‐
465 sors.
466
467 --cache-fence
468 force write serialization on each store operation (x86 only).
469 This is a no-op for non-x86 architectures.
470
471 --cache-flush
472 force flush cache on each store operation (x86 only). This is a
473 no-op for non-x86 architectures.
474
475 --cache-level N
476 specify level of cache to exercise (1=L1 cache, 2=L2 cache,
477 3=L3/LLC cache (the default)). If the cache hierarchy cannot be
478 determined, built-in defaults will apply.
479
480 --cache-no-affinity
481 do not change processor affinity when --cache is in effect.
482
483 --cache-ops N
484 stop cache thrash workers after N bogo cache thrash operations.
485
486 --cache-prefetch
487 force read prefetch on next read address on architectures that
488 support prefetching.
489
490 --cache-ways N
491 specify the number of cache ways to exercise. This allows a sub‐
492 set of the overall cache size to be exercised.
493
494 --cap N
start N workers that read per process capabilities via calls to
496 capget(2) (Linux only).
497
498 --cap-ops N
499 stop after N cap bogo operations.
500
501 --chdir N
502 start N workers that change directory between directories using
503 chdir(2).
504
505 --chdir-ops N
506 stop after N chdir bogo operations.
507
508 --chdir-dirs N
exercise chdir on N directories. The default is 8192 directories;
between 64 and 65536 directories may be specified instead.
511
512 --chmod N
513 start N workers that change the file mode bits via chmod(2) and
514 fchmod(2) on the same file. The greater the value for N then the
515 more contention on the single file. The stressor will work
through all the combinations of mode bits.
517
518 --chmod-ops N
519 stop after N chmod bogo operations.
520
521 --chown N
522 start N workers that exercise chown(2) on the same file. The
523 greater the value for N then the more contention on the single
524 file.
525
526 --chown-ops N
527 stop the chown workers after N bogo chown(2) operations.
528
529 --chroot N
530 start N workers that exercise chroot(2) on various valid and
531 invalid chroot paths. (Linux only).
532
533 --chroot-ops N
534 stop the chroot workers after N bogo chroot(2) operations.
535
536 --clock N
537 start N workers exercising clocks and POSIX timers. For all
538 known clock types this will exercise clock_getres(2), clock_get‐
539 time(2) and clock_nanosleep(2). For all known timers it will
540 create a 50000ns timer and busy poll this until it expires.
541 This stressor will cause frequent context switching.
542
543 --clock-ops N
544 stop clock stress workers after N bogo operations.
545
546 --clone N
547 start N workers that create clones (via the clone(2) system
548 call). This will rapidly try to create a default of 8192 clones
549 that immediately die and wait in a zombie state until they are
550 reaped. Once the maximum number of clones is reached (or clone
551 fails because one has reached the maximum allowed) the oldest
552 clone thread is reaped and a new clone is then created in a
553 first-in first-out manner, and then repeated. A random clone
554 flag is selected for each clone to try to exercise different
clone operations. The clone stressor is a Linux only option.
556
557 --clone-ops N
558 stop clone stress workers after N bogo clone operations.
559
560 --clone-max N
561 try to create as many as N clone threads. This may not be
562 reached if the system limit is less than N.
563
564 --context N
565 start N workers that run three threads that use swapcontext(3)
566 to implement the thread-to-thread context switching. This exer‐
567 cises rapid process context saving and restoring and is band‐
568 width limited by register and memory save and restore rates.
569
570 --context-ops N
571 stop N context workers after N bogo context switches. In this
572 stressor, 1 bogo op is equivalent to 1000 swapcontext calls.
573
574 --copy-file N
575 start N stressors that copy a file using the Linux
copy_file_range(2) system call. 2MB chunks of data are copied
from random locations in one file to random locations in a
destination file. By default, the files are 256 MB in size.
579 Data is sync'd to the filesystem after each copy_file_range(2)
580 call.
581
582 --copy-file-ops N
583 stop after N copy_file_range() calls.
584
585 --copy-file-bytes N
586 copy file size, the default is 256 MB. One can specify the size
587 as % of free space on the file system or in units of Bytes,
588 KBytes, MBytes and GBytes using the suffix b, k, m or g.
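
       For example, an illustrative copy-file run using 512 MB files:

              stress-ng --copy-file 2 --copy-file-bytes 512m --timeout 60s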
589
590 -c N, --cpu N
591 start N workers exercising the CPU by sequentially working
592 through all the different CPU stress methods. Instead of exer‐
593 cising all the CPU stress methods, one can specify a specific
594 CPU stress method with the --cpu-method option.
595
596 --cpu-ops N
597 stop cpu stress workers after N bogo operations.
598
599 -l P, --cpu-load P
600 load CPU with P percent loading for the CPU stress workers. 0 is
601 effectively a sleep (no load) and 100 is full loading. The
602 loading loop is broken into compute time (load%) and sleep time
603 (100% - load%). Accuracy depends on the overall load of the pro‐
604 cessor and the responsiveness of the scheduler, so the actual
605 load may be different from the desired load. Note that the num‐
606 ber of bogo CPU operations may not be linearly scaled with the
607 load as some systems employ CPU frequency scaling and so heavier
608 loads produce an increased CPU frequency and greater CPU bogo
609 operations.
610
611 Note: This option only applies to the --cpu stressor option and
612 not to all of the cpu class of stressors.
613
614 --cpu-load-slice S
615 note - this option is only useful when --cpu-load is less than
616 100%. The CPU load is broken into multiple busy and idle cycles.
617 Use this option to specify the duration of a busy time slice. A
618 negative value for S specifies the number of iterations to run
619 before idling the CPU (e.g. -30 invokes 30 iterations of a CPU
620 stress loop). A zero value selects a random busy time between 0
621 and 0.5 seconds. A positive value for S specifies the number of
622 milliseconds to run before idling the CPU (e.g. 100 keeps the
CPU busy for 0.1 seconds). Specifying small values for S results
in small time slices and smoother scheduling. Setting
625 --cpu-load as a relatively low value and --cpu-load-slice to be
626 large will cycle the CPU between long idle and busy cycles and
627 exercise different CPU frequencies. The thermal range of the
628 CPU is also cycled, so this is a good mechanism to exercise the
629 scheduler, frequency scaling and passive/active thermal cooling
630 mechanisms.
631
632 Note: This option only applies to the --cpu stressor option and
633 not to all of the cpu class of stressors.
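
       For example, an illustrative run that keeps two cpu stressors at
       roughly 40% load using 500 millisecond busy slices:

              stress-ng --cpu 2 --cpu-load 40 --cpu-load-slice 500 --timeout 5m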
634
635 --cpu-method method
636 specify a cpu stress method. By default, all the stress methods
637 are exercised sequentially, however one can specify just one
638 method to be used if required. Available cpu stress methods are
639 described as follows:
640
641 Method Description
642 all iterate over all the below cpu stress methods
643 ackermann Ackermann function: compute A(3, 10), where:
644 A(m, n) = n + 1 if m = 0;
645 A(m - 1, 1) if m > 0 and n = 0;
646 A(m - 1, A(m, n - 1)) if m > 0 and n > 0
647 bitops various bit operations from bithack, namely:
648 reverse bits, parity check, bit count, round to
649 nearest power of 2
650 callfunc recursively call 8 argument C function to a
651 depth of 1024 calls and unwind
652 cfloat 1000 iterations of a mix of floating point com‐
653 plex operations
654 cdouble 1000 iterations of a mix of double floating
655 point complex operations
656 clongdouble 1000 iterations of a mix of long double float‐
657 ing point complex operations
658 correlate perform a 16384 × 1024 correlation of random
659 doubles
660 crc16 compute 1024 rounds of CCITT CRC16 on random
661 data
662 decimal32 1000 iterations of a mix of 32 bit decimal
663 floating point operations (GCC only)
664 decimal64 1000 iterations of a mix of 64 bit decimal
665 floating point operations (GCC only)
666 decimal128 1000 iterations of a mix of 128 bit decimal
667 floating point operations (GCC only)
668 dither Floyd–Steinberg dithering of a 1024 × 768 ran‐
669 dom image from 8 bits down to 1 bit of depth.
670 djb2a 128 rounds of hash DJB2a (Dan Bernstein hash
671 using the xor variant) on 128 to 1 bytes of
672 random strings
673 double 1000 iterations of a mix of double precision
674 floating point operations
euler compute e using e ≈ (1 + (1 ÷ n)) ↑ n for large n
676 explog iterate on n = exp(log(n) ÷ 1.00002)
fibonacci compute Fibonacci sequence of 0, 1, 1, 2, 3,
5, 8...
679 fft 4096 sample Fast Fourier Transform
680 float 1000 iterations of a mix of floating point
681 operations
682 fnv1a 128 rounds of hash FNV-1a (Fowler–Noll–Vo hash
683 using the xor then multiply variant) on 128 to
684 1 bytes of random strings
685 gamma calculate the Euler-Mascheroni constant γ using
686 the limiting difference between the harmonic
687 series (1 + 1/2 + 1/3 + 1/4 + 1/5 ... + 1/n)
688 and the natural logarithm ln(n), for n = 80000.
689 gcd compute GCD of integers
690
691 gray calculate binary to gray code and gray code
692 back to binary for integers from 0 to 65535
693 hamming compute Hamming H(8,4) codes on 262144 lots of
694 4 bit data. This turns 4 bit data into 8 bit
695 Hamming code containing 4 parity bits. For data
696 bits d1..d4, parity bits are computed as:
697 p1 = d2 + d3 + d4
698 p2 = d1 + d3 + d4
699 p3 = d1 + d2 + d4
700 p4 = d1 + d2 + d3
701 hanoi solve a 21 disc Towers of Hanoi stack using the
702 recursive solution
703 hyperbolic compute sinh(θ) × cosh(θ) + sinh(2θ) + cosh(3θ)
704 for float, double and long double hyperbolic
705 sine and cosine functions where θ = 0 to 2π in
706 1500 steps
707 idct 8 × 8 IDCT (Inverse Discrete Cosine Transform)
708 int8 1000 iterations of a mix of 8 bit integer oper‐
709 ations
710 int16 1000 iterations of a mix of 16 bit integer
711 operations
712 int32 1000 iterations of a mix of 32 bit integer
713 operations
714 int64 1000 iterations of a mix of 64 bit integer
715 operations
716 int128 1000 iterations of a mix of 128 bit integer
717 operations (GCC only)
718 int32float 1000 iterations of a mix of 32 bit integer and
719 floating point operations
720 int32double 1000 iterations of a mix of 32 bit integer and
721 double precision floating point operations
722 int32longdouble 1000 iterations of a mix of 32 bit integer and
723 long double precision floating point operations
724 int64float 1000 iterations of a mix of 64 bit integer and
725 floating point operations
726 int64double 1000 iterations of a mix of 64 bit integer and
727 double precision floating point operations
728 int64longdouble 1000 iterations of a mix of 64 bit integer and
729 long double precision floating point operations
730 int128float 1000 iterations of a mix of 128 bit integer and
731 floating point operations (GCC only)
732 int128double 1000 iterations of a mix of 128 bit integer and
733 double precision floating point operations (GCC
734 only)
735 int128longdouble 1000 iterations of a mix of 128 bit integer and
736 long double precision floating point operations
737 (GCC only)
738 int128decimal32 1000 iterations of a mix of 128 bit integer and
739 32 bit decimal floating point operations (GCC
740 only)
741 int128decimal64 1000 iterations of a mix of 128 bit integer and
742 64 bit decimal floating point operations (GCC
743 only)
744 int128decimal128 1000 iterations of a mix of 128 bit integer and
745 128 bit decimal floating point operations (GCC
746 only)
jenkin Jenkins integer hash on 128 rounds of 128..1
748 bytes of random data
749 jmp Simple unoptimised compare >, <, == and jmp
750 branching
751 ln2 compute ln(2) based on series:
752 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 ...
753 longdouble 1000 iterations of a mix of long double preci‐
754 sion floating point operations
755 loop simple empty loop
756
762 matrixprod matrix product of two 128 × 128 matrices of
763 double floats. Testing on 64 bit x86 hardware
shows that this provides a good mix of mem‐
765 ory, cache and floating point operations and is
766 probably the best CPU method to use to make a
767 CPU run hot.
768 nsqrt compute sqrt() of long doubles using Newton-
769 Raphson
770 omega compute the omega constant defined by Ωe↑Ω = 1
771 using efficient iteration of Ωn+1 = (1 + Ωn) /
772 (1 + e↑Ωn)
773 parity compute parity using various methods from the
Stanford Bit Twiddling Hacks. Methods
employed are: the naïve way, the naïve way with
the Brian Kernighan bit counting optimisation,
777 the multiply way, the parallel way, and the
778 lookup table ways (2 variations).
779 phi compute the Golden Ratio ϕ using series
780 pi compute π using the Srinivasa Ramanujan fast
781 convergence algorithm
782 pjw 128 rounds of hash pjw function on 128 to 1
783 bytes of random strings
784 prime find all the primes in the range 1..1000000
785 using a slightly optimised brute force naïve
786 trial division search
787 psi compute ψ (the reciprocal Fibonacci constant)
788 using the sum of the reciprocals of the
789 Fibonacci numbers
790 queens compute all the solutions of the classic 8
791 queens problem for board sizes 1..12
792 rand 16384 iterations of rand(), where rand is the
793 MWC pseudo random number generator. The MWC
794 random function concatenates two 16 bit multi‐
795 ply-with-carry generators:
796 x(n) = 36969 × x(n - 1) + carry,
797 y(n) = 18000 × y(n - 1) + carry mod 2 ↑ 16
798
799 and has period of around 2 ↑ 60
800 rand48 16384 iterations of drand48(3) and lrand48(3)
801 rgb convert RGB to YUV and back to RGB (CCIR 601)
802 sdbm 128 rounds of hash sdbm (as used in the SDBM
803 database and GNU awk) on 128 to 1 bytes of ran‐
804 dom strings
805 sieve find the primes in the range 1..10000000 using
806 the sieve of Eratosthenes
807 sqrt compute sqrt(rand()), where rand is the MWC
808 pseudo random number generator
809 trig compute sin(θ) × cos(θ) + sin(2θ) + cos(3θ) for
810 float, double and long double sine and cosine
811 functions where θ = 0 to 2π in 1500 steps
812 union perform integer arithmetic on a mix of bit
813 fields in a C union. This exercises how well
814 the compiler and CPU can perform integer bit
815 field loads and stores.
816 zeta compute the Riemann Zeta function ζ(s) for s =
817 2.0..10.0
818
819 Note that some of these methods try to exercise the CPU with
820 computations found in some real world use cases. However, the
821 code has not been optimised on a per-architecture basis, so may
be sub-optimal compared to hand-optimised code used in some
823 applications. They do try to represent the typical instruction
824 mixes found in these use cases.
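
       For example, an illustrative run of the matrixprod method on all
       configured processors with a summary of the bogo op metrics:

              stress-ng --cpu 0 --cpu-method matrixprod --metrics-brief -t 60s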
825
826 --cpu-online N
827 start N workers that put randomly selected CPUs offline and
828 online. This Linux only stressor requires root privilege to per‐
829 form this action.
830
831 --cpu-online-ops N
stop after N offline/online operations.
833
834 --crypt N
835 start N workers that encrypt a 16 character random password
836 using crypt(3). The password is encrypted using MD5, SHA-256
837 and SHA-512 encryption methods.
838
839 --crypt-ops N
840 stop after N bogo encryption operations.
841
842 --daemon N
843 start N workers that each create a daemon that dies immediately
844 after creating another daemon and so on. This effectively works
845 through the process table with short lived processes that do not
846 have a parent and are waited for by init. This puts pressure on
847 init to do rapid child reaping. The daemon processes perform
848 the usual mix of calls to turn into typical UNIX daemons, so
849 this artificially mimics very heavy daemon system stress.
850
851 --daemon-ops N
852 stop daemon workers after N daemons have been created.
853
854 --dccp N
855 start N workers that send and receive data using the Datagram
856 Congestion Control Protocol (DCCP) (RFC4340). This involves a
857 pair of client/server processes performing rapid connect, send
858 and receives and disconnects on the local host.
859
860 --dccp-domain D
861 specify the domain to use, the default is ipv4. Currently ipv4
862 and ipv6 are supported.
863
864 --dccp-port P
start DCCP at port P. For N dccp worker processes, ports P to
P + N - 1 are used.
867
868 --dccp-ops N
869 stop dccp stress workers after N bogo operations.
870
871 --dccp-opts [ send | sendmsg | sendmmsg ]
872 by default, messages are sent using send(2). This option allows
873 one to specify the sending method using send(2), sendmsg(2) or
874 sendmmsg(2). Note that sendmmsg is only available for Linux
875 systems that support this system call.
876
877 -D N, --dentry N
878 start N workers that create and remove directory entries. This
879 should create file system meta data activity. The directory
880 entry names are suffixed by a gray-code encoded number to try to
881 mix up the hashing of the namespace.
882
883 --dentry-ops N
stop dentry thrash workers after N bogo dentry operations.
885
886 --dentry-order [ forward | reverse | stride | random ]
887 specify unlink order of dentries, can be one of forward,
888 reverse, stride or random. By default, dentries are unlinked in
889 random order. The forward order will unlink them from first to
890 last, reverse order will unlink them from last to first, stride
891 order will unlink them by stepping around order in a quasi-ran‐
892 dom pattern and random order will randomly select one of for‐
893 ward, reverse or stride orders.
894
895 --dentries N
896 create N dentries per dentry thrashing loop, default is 2048.
897
898 --dir N
899 start N workers that create and remove directories using mkdir
900 and rmdir.
901
902 --dir-ops N
903 stop directory thrash workers after N bogo directory operations.
904
905 --dir-dirs N
exercise dir on N directories. The default is 8192 directories;
between 64 and 65536 directories may be specified instead.
908
909 --dirdeep N
910 start N workers that create multiple levels of directories to a
maximum depth as limited by PATH_MAX or ENAMETOOLONG (whichever
occurs first).
913
914 --dirdeep-ops N
915 stop directory depth workers after N bogo directory operations.
916
917 --dnotify N
918 start N workers performing file system activities such as mak‐
919 ing/deleting files/directories, renaming files, etc. to stress
920 exercise the various dnotify events (Linux only).
921
922 --dnotify-ops N
stop dnotify stress workers after N dnotify bogo operations.
924
925 --dup N
926 start N workers that perform dup(2) and then close(2) operations
927 on /dev/zero. The maximum opens at one time is system defined,
928 so the test will run up to this maximum, or 65536 open file
descriptors, whichever comes first.
930
931 --dup-ops N
932 stop the dup stress workers after N bogo open operations.
933
934 --epoll N
935 start N workers that perform various related socket stress
936 activity using epoll_wait(2) to monitor and handle new connec‐
937 tions. This involves client/server processes performing rapid
938 connect, send/receives and disconnects on the local host. Using
939 epoll allows a large number of connections to be efficiently
940 handled, however, this can lead to the connection table filling
941 up and blocking further socket connections, hence impacting on
942 the epoll bogo op stats. For ipv4 and ipv6 domains, multiple
943 servers are spawned on multiple ports. The epoll stressor is for
944 Linux only.
945
946 --epoll-domain D
947 specify the domain to use, the default is unix (aka local). Cur‐
948 rently ipv4, ipv6 and unix are supported.
949
950 --epoll-port P
start at socket port P. For N epoll worker processes, ports P to
P + (N * 4) - 1 are used for the ipv4 and ipv6 domains and ports
P to P + N - 1 are used for the unix domain.
954
955 --epoll-ops N
956 stop epoll workers after N bogo operations.
957
958 --eventfd N
959 start N parent and child worker processes that read and write 8
960 byte event messages between them via the eventfd mechanism
961 (Linux only).
962
963 --eventfd-ops N
964 stop eventfd workers after N bogo operations.
965
966 --exec N
967 start N workers continually forking children that exec stress-ng
968 and then exit almost immediately.
969
970 --exec-ops N
971 stop exec stress workers after N bogo operations.
972
973 --exec-max P
974 create P child processes that exec stress-ng and then wait for
975 them to exit per iteration. The default is just 1; higher values
976 will create many temporary zombie processes that are waiting to
977 be reaped. One can potentially fill up the process table using
978 high values for --exec-max and --exec.
979
980 -F N, --fallocate N
981 start N workers continually fallocating (preallocating file
space) and ftruncating (file truncating) temporary files. If the
983 file is larger than the free space, fallocate will produce an
984 ENOSPC error which is ignored by this stressor.
985
986 --fallocate-bytes N
987 allocated file size, the default is 1 GB. One can specify the
988 size as % of free space on the file system or in units of Bytes,
989 KBytes, MBytes and GBytes using the suffix b, k, m or g.
990
991 --fallocate-ops N
992 stop fallocate stress workers after N bogo fallocate operations.
993
994 --fanotify N
995 start N workers performing file system activities such as creat‐
996 ing, opening, writing, reading and unlinking files to exercise
997 the fanotify event monitoring interface (Linux only). Each
998 stressor runs a child process to generate file events and a par‐
999 ent process to read file events using fanotify.
1000
1001 --fanotify-ops N
1002 stop fanotify stress workers after N bogo fanotify events.
1003
1004 --fault N
start N workers that generate minor and major page faults.
1006
1007 --fault-ops N
1008 stop the page fault workers after N bogo page fault operations.
1009
1010 --fcntl N
1011 start N workers that perform fcntl(2) calls with various com‐
1012 mands. The exercised commands (if available) are: F_DUPFD,
1013 F_DUPFD_CLOEXEC, F_GETFD, F_SETFD, F_GETFL, F_SETFL, F_GETOWN,
1014 F_SETOWN, F_GETOWN_EX, F_SETOWN_EX, F_GETSIG, F_SETSIG, F_GETLK,
1015 F_SETLK, F_SETLKW, F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW.
1016
1017 --fcntl-ops N
1018 stop the fcntl workers after N bogo fcntl operations.
1019
1020 --fiemap N
1021 start N workers that each create a file with many randomly
changing extents and have 4 child processes per worker that
1023 gather the extent information using the FS_IOC_FIEMAP ioctl(2).
1024
1025 --fiemap-ops N
1026 stop after N fiemap bogo operations.
1027
1028 --fiemap-bytes N
1029 specify the size of the fiemap'd file in bytes. One can specify
1030 the size as % of free space on the file system or in units of
1031 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
1032 Larger files will contain more extents, causing more stress when
1033 gathering extent information.
1034
1035 --fifo N
1036 start N workers that exercise a named pipe by transmitting 64
1037 bit integers.
1038
1039 --fifo-ops N
1040 stop fifo workers after N bogo pipe write operations.
1041
1042 --fifo-readers N
1043 for each worker, create N fifo reader workers that read the
1044 named pipe using simple blocking reads.
1045
1046 --filename N
1047 start N workers that exercise file creation using various length
filenames containing a range of allowed filename characters.
This will try to see if it can exceed the file system allowed
filename length as well as test various filename lengths
1051 between 1 and the maximum allowed by the file system.
1052
1053 --filename-ops N
1054 stop filename workers after N bogo filename tests.
1055
1056 --filename-opts opt
1057 use characters in the filename based on option 'opt'. Valid
1058 options are:
1059
1060 Option Description
1061 probe default option, probe the file system for valid
1062 allowed characters in a file name and use these
posix use characters as specified by The Open Group
1064 Base Specifications Issue 7, POSIX.1-2008,
1065 3.278 Portable Filename Character Set
1066 ext use characters allowed by the ext2, ext3, ext4
1067 file systems, namely any 8 bit character apart
1068 from NUL and /
1069
1070 --flock N
1071 start N workers locking on a single file.
1072
1073 --flock-ops N
1074 stop flock stress workers after N bogo flock operations.
1075
1076 -f N, --fork N
1077 start N workers continually forking children that immediately
1078 exit.
1079
1080 --fork-ops N
1081 stop fork stress workers after N bogo operations.
1082
1083 --fork-max P
1084 create P child processes and then wait for them to exit per
1085 iteration. The default is just 1; higher values will create many
1086 temporary zombie processes that are waiting to be reaped. One
can potentially fill up the process table using high values
1088 for --fork-max and --fork.
1089
1090 --fp-error N
1091 start N workers that generate floating point exceptions. Compu‐
1092 tations are performed to force and check for the FE_DIVBYZERO,
1093 FE_INEXACT, FE_INVALID, FE_OVERFLOW and FE_UNDERFLOW exceptions.
1094 EDOM and ERANGE errors are also checked.
1095
1096 --fp-error-ops N
1097 stop after N bogo floating point exceptions.
1098
1099 --fstat N
1100 start N workers fstat'ing files in a directory (default is
1101 /dev).
1102
1103 --fstat-ops N
1104 stop fstat stress workers after N bogo fstat operations.
1105
1106 --fstat-dir directory
1107 specify the directory to fstat to override the default of /dev.
1108 All the files in the directory will be fstat'd repeatedly.
1109
1110 --full N
1111 start N workers that exercise /dev/full. This attempts to write
1112 to the device (which should always get error ENOSPC), to read
1113 from the device (which should always return a buffer of zeros)
1114 and to seek randomly on the device (which should always suc‐
1115 ceed). (Linux only).
1116
1117 --full-ops N
1118 stop the stress full workers after N bogo I/O operations.
1119
1120 --futex N
1121 start N workers that rapidly exercise the futex system call.
1122 Each worker has two processes, a futex waiter and a futex waker.
1123 The waiter waits with a very small timeout to stress the timeout
1124 and rapid polled futex waiting. This is a Linux specific stress
1125 option.
1126
1127 --futex-ops N
1128 stop futex workers after N bogo successful futex wait opera‐
1129 tions.
1130
1131 --get N
1132 start N workers that call system calls that fetch data from the
1133 kernel, currently these are: getpid, getppid, getcwd, getgid,
1134 getegid, getuid, getgroups, getpgrp, getpgid, getpriority,
1135 getresgid, getresuid, getrlimit, prlimit, getrusage, getsid,
1136 gettid, getcpu, gettimeofday, uname, adjtimex, sysfs. Some of
1137 these system calls are OS specific.
1138
1139 --get-ops N
1140 stop get workers after N bogo get operations.
1141
1142 --getdent N
1143 start N workers that recursively read directories /proc, /dev/,
1144 /tmp, /sys and /run using getdents and getdents64 (Linux only).
1145
1146 --getdent-ops N
1147 stop getdent workers after N bogo getdent bogo operations.
1148
1149 --getrandom N
1150 start N workers that get 8192 random bytes from the /dev/urandom
1151 pool using the getrandom(2) system call (Linux) or getentropy(2)
1152 (OpenBSD).
1153
1154 --getrandom-ops N
1155 stop getrandom workers after N bogo get operations.
1156
1157 --handle N
1158 start N workers that exercise the name_to_handle_at(2) and
1159 open_by_handle_at(2) system calls. (Linux only).
1160
1161 --handle-ops N
1162 stop after N handle bogo operations.
1163
1164 -d N, --hdd N
1165 start N workers continually writing, reading and removing tempo‐
1166 rary files. The default mode is to stress test sequential writes
and reads. With the --aggressive option enabled and without any
--hdd-opts options, the hdd stressor will work through all the
--hdd-opts options one by one to cover a range of I/O options.
1170
1171 --hdd-bytes N
1172 write N bytes for each hdd process, the default is 1 GB. One can
1173 specify the size as % of free space on the file system or in
1174 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1175 m or g.
1176
1177 --hdd-opts list
1178 specify various stress test options as a comma separated list.
1179 Options are as follows:
1180
1181 Option Description
1182 direct try to minimize cache effects of the I/O. File
1183 I/O writes are performed directly from user
1184 space buffers and synchronous transfer is also
1185 attempted. To guarantee synchronous I/O, also
1186 use the sync option.
1187 dsync ensure output has been transferred to underly‐
1188 ing hardware and file metadata has been updated
1189 (using the O_DSYNC open flag). This is equiva‐
1190 lent to each write(2) being followed by a call
1191 to fdatasync(2). See also the fdatasync option.
1192 fadv-dontneed advise kernel to expect the data will not be
1193 accessed in the near future.
1194 fadv-noreuse advise kernel to expect the data to be accessed
1195 only once.
fadv-normal advise kernel there is no explicit access pat‐
tern for the data. This is the default advice
1198 assumption.
1199 fadv-rnd advise kernel to expect random access patterns
1200 for the data.
1201 fadv-seq advise kernel to expect sequential access pat‐
1202 terns for the data.
1203 fadv-willneed advise kernel to expect the data to be accessed
1204 in the near future.
1205 fsync flush all modified in-core data after each
1206 write to the output device using an explicit
1207 fsync(2) call.
1208
1211 fdatasync similar to fsync, but do not flush the modified
1212 metadata unless metadata is required for later
1213 data reads to be handled correctly. This uses
1214 an explicit fdatasync(2) call.
1215 iovec use readv/writev multiple buffer I/Os rather
1216 than read/write. Instead of 1 read/write opera‐
1217 tion, the buffer is broken into an iovec of 16
1218 buffers.
1219 noatime do not update the file last access timestamp,
1220 this can reduce metadata writes.
1221 sync ensure output has been transferred to underly‐
1222 ing hardware (using the O_SYNC open flag). This
is equivalent to each write(2) being followed
1224 by a call to fsync(2). See also the fsync
1225 option.
1226 rd-rnd read data randomly. By default, written data is
1227 not read back, however, this option will force
1228 it to be read back randomly.
1229 rd-seq read data sequentially. By default, written
1230 data is not read back, however, this option
1231 will force it to be read back sequentially.
1232 syncfs write all buffered modifications of file meta‐
1233 data and data on the filesystem that contains
1234 the hdd worker files.
1235 utimes force update of file timestamp which may
1236 increase metadata writes.
1237 wr-rnd write data randomly. The wr-seq option cannot
1238 be used at the same time.
1239 wr-seq write data sequentially. This is the default if
1240 no write modes are specified.
1241
1242 Note that some of these options are mutually exclusive, for example,
1243 there can be only one method of writing or reading. Also, fadvise
1244 flags may be mutually exclusive, for example fadv-willneed cannot be
1245 used with fadv-dontneed.
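
       For example, an illustrative hdd run performing random writes and
       random read-back with an fsync(2) after each write:

              stress-ng --hdd 2 --hdd-opts wr-rnd,rd-rnd,fsync --hdd-bytes 2g -t 5m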
1246
1247 --hdd-ops N
1248 stop hdd stress workers after N bogo operations.
1249
1250 --hdd-write-size N
1251 specify size of each write in bytes. Size can be from 1 byte to
1252 4MB.
1253
1254 --heapsort N
1255 start N workers that sort 32 bit integers using the BSD heap‐
1256 sort.
1257
1258 --heapsort-ops N
1259 stop heapsort stress workers after N bogo heapsorts.
1260
1261 --heapsort-size N
1262 specify number of 32 bit integers to sort, default is 262144
1263 (256 × 1024).
1264
1265 --hsearch N
start N workers that search an 80% full hash table using
1267 hsearch(3). By default, there are 8192 elements inserted into
1268 the hash table. This is a useful method to exercise access of
1269 memory and processor cache.
1270
1271 --hsearch-ops N
1272 stop the hsearch workers after N bogo hsearch operations are
1273 completed.
1274
1275 --hsearch-size N
1276 specify the number of hash entries to be inserted into the hash
1277 table. Size can be from 1K to 4M.
1278
1279 --icache N
1280 start N workers that stress the instruction cache by forcing
1281 instruction cache reloads. This is achieved by modifying an
1282 instruction cache line, causing the processor to reload it when
we call a function inside it. Currently only verified and
1284 enabled for Intel x86 CPUs.
1285
1286 --icache-ops N
1287 stop the icache workers after N bogo icache operations are com‐
1288 pleted.
1289
1290 --icmp-flood N
start N workers that flood localhost with randomly sized ICMP
1292 ping packets. This option can only be run as root.
1293
1294 --icmp-flood-ops N
1295 stop icmp flood workers after N ICMP ping packets have been
1296 sent.
1297
1298 --inotify N
1299 start N workers performing file system activities such as mak‐
1300 ing/deleting files/directories, moving files, etc. to stress
1301 exercise the various inotify events (Linux only).
1302
1303 --inotify-ops N
1304 stop inotify stress workers after N inotify bogo operations.
1305
1306 -i N, --io N
1307 start N workers continuously calling sync(2) to commit buffer
1308 cache to disk. This can be used in conjunction with the --hdd
1309 options.
1310
1311 --io-ops N
1312 stop io stress workers after N bogo operations.
1313
1314 --iomix N
1315 start N workers that perform a mix of sequential, random and
1316 memory mapped read/write operations as well as forced sync'ing
1317 and (if run as root) cache dropping. Multiple child processes
1318 are spawned to all share a single file and perform different I/O
1319 operations on the same file.
1320
1321 --iomix-bytes N
1322 write N bytes for each iomix worker process, the default is 1
1323 GB. One can specify the size as % of free space on the file sys‐
1324 tem or in units of Bytes, KBytes, MBytes and GBytes using the
1325 suffix b, k, m or g.
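
       For example, an illustrative iomix run using 10% of the free file
       system space per worker:

              stress-ng --iomix 2 --iomix-bytes 10% --timeout 2m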
1326
1327 --iomix-ops N
1328 stop iomix stress workers after N bogo iomix I/O operations.
1329
1330 --ioprio N
1331 start N workers that exercise the ioprio_get(2) and
1332 ioprio_set(2) system calls (Linux only).
1333
1334 --ioprio-ops N
1335 stop after N io priority bogo operations.
1336
1337 --itimer N
1338 start N workers that exercise the system interval timers. This
1339 sets up an ITIMER_PROF itimer that generates a SIGPROF signal.
1340 The default frequency for the itimer is 1 MHz, however, the
Linux kernel will set this to be no more than the jiffy setting,
1342 hence high frequency SIGPROF signals are not normally possible.
1343 A busy loop spins on getitimer(2) calls to consume CPU and hence
1344 decrement the itimer based on amount of time spent in CPU and
1345 system time.
1346
1347 --itimer-ops N
1348 stop itimer stress workers after N bogo itimer SIGPROF signals.
1349
1350 --itimer-freq F
1351 run itimer at F Hz; range from 1 to 1000000 Hz. Normally the
1352 highest frequency is limited by the number of jiffy ticks per
1353 second, so running above 1000 Hz is difficult to attain in prac‐
1354 tice.
1355
1356 --kcmp N
1357 start N workers that use kcmp(2) to compare parent and child
1358 processes to determine if they share kernel resources (Linux
1359 only).
1360
1361 --kcmp-ops N
1362 stop kcmp workers after N bogo kcmp operations.
1363
1364 --key N
1365 start N workers that create and manipulate keys using add_key(2)
and keyctl(2). As many keys are created as the per user limit
1367 allows and then the following keyctl commands are exercised on
1368 each key: KEYCTL_SET_TIMEOUT, KEYCTL_DESCRIBE, KEYCTL_UPDATE,
1369 KEYCTL_READ, KEYCTL_CLEAR and KEYCTL_INVALIDATE.
1370
1371 --key-ops N
1372 stop key workers after N bogo key operations.
1373
1374 --kill N
1375 start N workers sending SIGUSR1 kill signals to a SIG_IGN signal
1376 handler. Most of the process time will end up in kernel space.
1377
1378 --kill-ops N
1379 stop kill workers after N bogo kill operations.
1380
1381 --klog N
1382 start N workers exercising the kernel syslog(2) system call.
1383 This will attempt to read the kernel log with various sized read
1384 buffers. Linux only.
1385
1386 --klog-ops N
1387 stop klog workers after N syslog operations.
1388
1389 --lease N
1390 start N workers locking, unlocking and breaking leases via the
1391 fcntl(2) F_SETLEASE operation. The parent processes continually
1392 lock and unlock a lease on a file while a user selectable number
1393 of child processes open the file with a non-blocking open to
1394 generate SIGIO lease breaking notifications to the parent. This
1395 stressor is only available if F_SETLEASE, F_WRLCK and F_UNLCK
1396 support is provided by fcntl(2).
1397
1398 --lease-ops N
1399 stop lease workers after N bogo operations.
1400
1401 --lease-breakers N
1402 start N lease breaker child processes per lease worker. Nor‐
1403 mally one child is plenty to force many SIGIO lease breaking
1404 notification signals to the parent, however, this option allows
1405 one to specify more child processes if required.
1406
1407 --link N
1408 start N workers creating and removing hardlinks.
1409
1410 --link-ops N
1411 stop link stress workers after N bogo operations.
1412
1413 --lockbus N
1414 start N workers that rapidly lock and increment 64 bytes of ran‐
1415 domly chosen memory from a 16MB mmap'd region (Intel x86 CPUs
1416 only). This will cause cacheline misses and stalling of CPUs.
1417
1418 --lockbus-ops N
1419 stop lockbus workers after N bogo operations.
1420
1421 --locka N
1422 start N workers that randomly lock and unlock regions of a file
1423 using the POSIX advisory locking mechanism (see fcntl(2),
1424 F_SETLK, F_GETLK). Each worker creates a 1024 KB file and
1425 attempts to hold a maximum of 1024 concurrent locks with a child
1426 process that also tries to hold 1024 concurrent locks. Old locks
are unlocked on a first-in, first-out basis.
1428
1429 --locka-ops N
1430 stop locka workers after N bogo locka operations.
1431
1432 --lockf N
1433 start N workers that randomly lock and unlock regions of a file
1434 using the POSIX lockf(3) locking mechanism. Each worker creates
1435 a 64 KB file and attempts to hold a maximum of 1024 concurrent
1436 locks with a child process that also tries to hold 1024 concur‐
1437 rent locks. Old locks are unlocked on a first-in, first-out
1438 basis.
1439
1440 --lockf-ops N
1441 stop lockf workers after N bogo lockf operations.
1442
1443 --lockf-nonblock
1444 instead of using blocking F_LOCK lockf(3) commands, use non-
1445 blocking F_TLOCK commands and re-try if the lock failed. This
1446 creates extra system call overhead and CPU utilisation as the
1447 number of lockf workers increases and should increase locking
1448 contention.
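
For example, an illustrative invocation (the worker and operation
counts are arbitrary) combining the lockf options above is:

    stress-ng --lockf 8 --lockf-nonblock --lockf-ops 1000000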
1449
1450 --lockofd N
1451 start N workers that randomly lock and unlock regions of a file
1452 using the Linux open file description locks (see fcntl(2),
1453 F_OFD_SETLK, F_OFD_GETLK). Each worker creates a 1024 KB file
1454 and attempts to hold a maximum of 1024 concurrent locks with a
1455 child process that also tries to hold 1024 concurrent locks. Old
1456 locks are unlocked on a first-in, first-out basis.
1457
1458 --lockofd-ops N
1459 stop lockofd workers after N bogo lockofd operations.
1460
1461 --longjmp N
1462 start N workers that exercise setjmp(3)/longjmp(3) by rapid
1463 looping on longjmp calls.
1464
1465 --longjmp-ops N
1466 stop longjmp stress workers after N bogo longjmp operations (1
1467 bogo op is 1000 longjmp calls).
1468
1469 --lsearch N
1470 start N workers that linearly search an unsorted array of 32 bit
1471 integers using lsearch(3). By default, there are 8192 elements
1472 in the array. This is a useful method to exercise sequential
1473 access of memory and processor cache.
1474
1475 --lsearch-ops N
1476 stop the lsearch workers after N bogo lsearch operations are
1477 completed.
1478
1479 --lsearch-size N
1480 specify the size (number of 32 bit integers) in the array to
1481 lsearch. Size can be from 1K to 4M.
1482
1483 --madvise N
1484 start N workers that apply random madvise(2) advice settings on
1485 pages of a 4MB file backed shared memory mapping.
1486
1487 --madvise-ops N
1488 stop madvise stressors after N bogo madvise operations.
1489
1490 --malloc N
1491 start N workers continuously calling malloc(3), calloc(3), real‐
1492 loc(3) and free(3). By default, up to 65536 allocations can be
1493 active at any point, but this can be altered with the --mal‐
1494 loc-max option. Allocation, reallocation and freeing are chosen
1495 at random; 50% of the time memory is allocated (via malloc,
1496 calloc or realloc) and 50% of the time allocations are free'd.
1497 Allocation sizes are also random, with the maximum allocation
1498 size controlled by the --malloc-bytes option, the default size
1499 being 64K. The worker is re-started if it is killed by the out
1500 of memory (OOM) killer.
1501
1502 --malloc-bytes N
1503 maximum per allocation/reallocation size. Allocations are ran‐
1504 domly selected from 1 to N bytes. One can specify the size as %
1505 of total available memory or in units of Bytes, KBytes, MBytes
1506 and GBytes using the suffix b, k, m or g. Large allocation
1507 sizes cause the memory allocator to use mmap(2) rather than
1508 expanding the heap using brk(2).
1509
1510 --malloc-max N
1511 maximum number of active allocations allowed. Allocations are
1512 chosen at random and placed in an allocation slot. Because there is
1513 about a 50%/50% split between allocation and freeing, typically half
1514 of the allocation slots are in use at any one time.
1515
1516 --malloc-ops N
1517 stop after N malloc bogo operations. One bogo operation relates
1518 to a successful malloc(3), calloc(3) or realloc(3).
1519
1520 --malloc-thresh N
1521 specify the threshold where malloc uses mmap(2) instead of
1522 sbrk(2) to allocate more memory. This is only available on sys‐
1523 tems that provide the GNU C mallopt(3) tuning function.
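
For example, an illustrative invocation (with arbitrary example
values) combining the malloc options above is:

    stress-ng --malloc 4 --malloc-bytes 1m --malloc-max 32768 -t 60s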
1524
1525 --matrix N
1526 start N workers that perform various matrix operations on float‐
1527 ing point values. By default, this will exercise all the matrix
1528 stress methods one by one. One can specify a specific matrix
1529 stress method with the --matrix-method option.
1530
1531 --matrix-ops N
1532 stop matrix stress workers after N bogo operations.
1533
1534 --matrix-method method
1535 specify a matrix stress method. Available matrix stress methods
1536 are described as follows:
1537
1538 Method Description
1539 all iterate over all the below matrix stress meth‐
1540 ods
1541 add add two N × N matrices
1542 copy copy one N × N matrix to another
1543 div divide an N × N matrix by a scalar
1544 hadamard Hadamard product of two N × N matrices
1545 frobenius Frobenius product of two N × N matrices
1546 mean arithmetic mean of two N × N matrices
1547 mult multiply an N × N matrix by a scalar
1548 prod product of two N × N matrices
1549 sub subtract one N × N matrix from another N × N
1550 matrix
1551 trans transpose an N × N matrix
1552
1553 --matrix-size N
1554 specify the N × N size of the matrices. Smaller values result
1555 in a floating point compute throughput bound stressor, whereas
1556 large values result in a cache and/or memory bandwidth bound
1557 stressor.
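
For example, to run just the 'prod' matrix method on 128 × 128
matrices for one minute (values chosen purely for illustration):

    stress-ng --matrix 4 --matrix-method prod --matrix-size 128 -t 1m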
1558
1559 --membarrier N
1560 start N workers that exercise the membarrier system call (Linux
1561 only).
1562
1563 --membarrier-ops N
1564 stop membarrier stress workers after N bogo membarrier opera‐
1565 tions.
1566
1567 --memcpy N
1568 start N workers that copy 2MB of data from a shared region to a
1569 buffer using memcpy(3) and then move the data in the buffer with
1570 memmove(3) with 3 different alignments. This will exercise pro‐
1571 cessor cache and system memory.
1572
1573 --memcpy-ops N
1574 stop memcpy stress workers after N bogo memcpy operations.
1575
1576 --memfd N
1577 start N workers that create allocations of 1024 pages using
1578 memfd_create(2) and ftruncate(2) for allocation and mmap(2) to
1579 map the allocation into the process address space. (Linux
1580 only).
1581
1582 --memfd-bytes N
1583 allocate N bytes per memfd stress worker, the default is 256MB.
1584 One can specify the size as % of total available memory or in
1585 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1586 m or g.
1587
1588 --memfd-fds N
1589 create N memfd file descriptors, the default is 256. One can
1590 select 8 to 4096 memfd file descriptors with this option.
1591
1592 --memfd-ops N
1593 stop after N memfd_create(2) bogo operations.
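
For example, an illustrative memfd invocation (sizes and counts are
arbitrary) is:

    stress-ng --memfd 2 --memfd-bytes 64m --memfd-fds 128 -t 1m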
1594
1595 --mergesort N
1596 start N workers that sort 32 bit integers using the BSD merge‐
1597 sort.
1598
1599 --mergesort-ops N
1600 stop mergesort stress workers after N bogo mergesorts.
1601
1602 --mergesort-size N
1603 specify number of 32 bit integers to sort, default is 262144
1604 (256 × 1024).
1605
1606 --mincore N
1607 start N workers that walk through all of memory 1 page at a time
1608 checking if the page is mapped and also resident in memory using
1609 mincore(2).
1610
1611 --mincore-ops N
1612 stop after N mincore bogo operations. One mincore bogo op is
1613 equivalent to 1000 mincore(2) calls.
1614
1615 --mincore-random
1616 instead of walking through pages sequentially, select pages at
1617 random. The chosen address is iterated over by shifting it right
1618 one place and checked by mincore until the address is less than or
1619 equal to the page size.
1620
1621 --mknod N
1622 start N workers that create and remove fifos, empty files and
1623 named sockets using mknod and unlink.
1624
1625 --mknod-ops N
1626 stop mknod workers after N bogo mknod operations.
1627
1628 --mlock N
1629 start N workers that lock and unlock memory mapped pages using
1630 mlock(2), munlock(2), mlockall(2) and munlockall(2). This is
1631 achieved by the mapping of three contiguous pages and then lock‐
1632 ing the second page, hence ensuring non-contiguous pages are
1633 locked. This is then repeated until the maximum allowed mlocks
1634 or a maximum of 262144 mappings are made. Next, all future map‐
1635 pings are mlocked and the worker attempts to map 262144 pages,
1636 then all pages are munlocked and the pages are unmapped.
1637
1638 --mlock-ops N
1639 stop after N mlock bogo operations.
1640
1641 --mmap N
1642 start N workers continuously calling mmap(2)/munmap(2). The
1643 initial mapping is a large chunk (size specified by
1644 --mmap-bytes) followed by pseudo-random 4K unmappings, then
1645 pseudo-random 4K mappings, and then linear 4K unmappings. Note
1646 that this can cause systems to trip the kernel OOM killer on
1647 Linux systems if there is not enough physical memory and swap
1648 available. The MAP_POPULATE option is used to populate pages
1649 into memory on systems that support this. By default, anonymous
1650 mappings are used, however, the --mmap-file and --mmap-async
1651 options allow one to perform file based mappings if desired.
1652
1653 --mmap-ops N
1654 stop mmap stress workers after N bogo operations.
1655
1656 --mmap-async
1657 enable file based memory mapping and use asynchronous msync'ing
1658 on each page, see --mmap-file.
1659
1660 --mmap-bytes N
1661 allocate N bytes per mmap stress worker, the default is 256MB.
1662 One can specify the size as % of total available memory or in
1663 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1664 m or g.
1665
1666 --mmap-file
1667 enable file based memory mapping and by default use synchronous
1668 msync'ing on each page.
1669
1670 --mmap-mprotect
1671 change protection settings on each page of memory. Each time a
1672 page or a group of pages are mapped or remapped then this option
1673 will make the pages read-only, write-only, exec-only, and read-
1674 write.
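
For example, an illustrative file backed mmap invocation (the sizes
and duration are arbitrary) is:

    stress-ng --mmap 2 --mmap-bytes 1g --mmap-file --mmap-mprotect -t 5m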
1675
1676 --mmapfork N
1677 start N workers that each fork off 32 child processes, each of
1678 which tries to allocate some of the free memory left in the sys‐
1679 tem (and trying to avoid any swapping). The child processes
1680 then hint that the allocation will be needed with madvise(2) and
1681 then memset it to zero and hint that it is no longer needed with
1682 madvise before exiting. This produces significant amounts of VM
1683 activity and a lot of cache misses, with minimal swapping.
1684
1685 --mmapfork-ops N
1686 stop after N mmapfork bogo operations.
1687
1688 --mmapmany N
1689 start N workers that attempt to create the maximum allowed per-
1690 process memory mappings. This is achieved by mapping 3 contigu‐
1691 ous pages and then unmapping the middle page hence splitting the
1692 mapping into two. This is then repeated until the maximum
1693 allowed mappings or a maximum of 262144 mappings are made.
1694
1695 --mmapmany-ops N
1696 stop after N mmapmany bogo operations.
1697
1698 --mremap N
1699 start N workers continuously calling mmap(2), mremap(2) and mun‐
1700 map(2). The initial anonymous mapping is a large chunk (size
1701 specified by --mremap-bytes) and then iteratively halved in size
1702 by remapping all the way down to a page size and then back up to
1703 the original size. This worker is only available for Linux.
1704
1705 --mremap-ops N
1706 stop mremap stress workers after N bogo operations.
1707
1708 --mremap-bytes N
1709 initially allocate N bytes per remap stress worker, the default
1710 is 256MB. One can specify the size in units of Bytes, KBytes,
1711 MBytes and GBytes using the suffix b, k, m or g.
1712
1713 --msg N
1714 start N sender and receiver processes that continually send and
1715 receive messages using System V message IPC.
1716
1717 --msg-ops N
1718 stop after N bogo message send operations completed.
1719
1720 --msync N
1721 start N stressors that msync data from a file backed memory map‐
1722 ping from memory back to the file and msync modified data from
1723 the file back to the mapped memory. This exercises the msync(2)
1724 MS_SYNC and MS_INVALIDATE sync operations.
1725
1726 --msync-ops N
1727 stop after N msync bogo operations completed.
1728
1729 --msync-bytes N
1730 allocate N bytes for the memory mapped file, the default is
1731 256MB. One can specify the size as % of total available memory
1732 or in units of Bytes, KBytes, MBytes and GBytes using the suffix
1733 b, k, m or g.
1734
1735 --mq N start N sender and receiver processes that continually send and
1736 receive messages using POSIX message queues. (Linux only).
1737
1738 --mq-ops N
1739 stop after N bogo POSIX message send operations completed.
1740
1741 --mq-size N
1742 specify size of POSIX message queue. The default size is 10 mes‐
1743 sages and on most Linux systems this is the maximum allowed size
1744 for normal users. If the given size is greater than the allowed
1745 message queue size then a warning is issued and the maximum
1746 allowed size is used instead.
1747
1748 --netlink-proc N
1749 start N workers that spawn child processes and monitor
1750 fork/exec/exit process events via the proc netlink connector.
1751 Each event received is counted as a bogo op. This stressor can
1752 only be run on Linux and with root privilege.
1753
1754 --netlink-proc-ops N
1755 stop the proc netlink connector stressors after N bogo ops.
1756
1757 --nice N
1758 start N cpu consuming workers that exercise the available nice
1759 levels. Each iteration forks off a child process that runs
1760 through all the nice levels running a busy loop for 0.1 sec‐
1761 onds per level and then exits.
1762
1763 --nice-ops N
1764 stop after N bogo nice loops.
1765
1766 --nop N
1767 start N workers that consume cpu cycles issuing no-op instruc‐
1768 tions. This stressor is available if the assembler supports the
1769 "nop" instruction.
1770
1771 --nop-ops N
1772 stop nop workers after N no-op bogo operations. Each bogo-opera‐
1773 tion is equivalent to 256 loops of 256 no-op instructions.
1774
1775 --null N
1776 start N workers writing to /dev/null.
1777
1778 --null-ops N
1779 stop null stress workers after N /dev/null bogo write opera‐
1780 tions.
1781
1782 --numa N
1783 start N workers that migrate stressors and a 4MB memory mapped
1784 buffer around all the available NUMA nodes. This uses
1785 migrate_pages(2) to move the stressors and mbind(2) and
1786 move_pages(2) to move the pages of the mapped buffer. After each
1787 move, the buffer is written to force activity over the bus which
1788 results in cache misses. This test will only run on hardware with
1789 NUMA enabled and more than 1 NUMA node.
1790
1791 --numa-ops N
1792 stop NUMA stress workers after N bogo NUMA operations.
1793
1794 --oom-pipe N
1795 start N workers that create as many pipes as allowed and exer‐
1796 cise expanding and shrinking the pipes from the largest pipe
1797 size down to a page size. Data is written into the pipes and
1798 read out again to fill the pipe buffers. With the --aggressive
1799 mode enabled the data is not read out when the pipes are shrunk,
1800 causing the kernel to OOM processes aggressively. Running many
1801 instances of this stressor will force the kernel to OOM processes
1802 due to the many large pipe buffer allocations.
1803
1804 --oom-pipe-ops N
1805 stop after N bogo pipe expand/shrink operations.
1806
1807 --opcode N
1808 start N workers that fork off children that execute randomly
1809 generated executable code. This will generate issues such as
1810 illegal instructions, bus errors, segmentation faults, traps,
1811 and floating point errors that are handled gracefully by the stres‐
1812 sor.
1813
1814 --opcode-ops N
1815 stop after N attempts to execute illegal code.
1816
1817 -o N, --open N
1818 start N workers that perform open(2) and then close(2) opera‐
1819 tions on /dev/zero. The maximum opens at one time is system
1820 defined, so the test will run up to this maximum, or 65536 open
1821 file descriptors, whichever comes first.
1822
1823 --open-ops N
1824 stop the open stress workers after N bogo open operations.
1825
1826 --personality N
1827 start N workers that attempt to set personality and get all the
1828 available personality types (process execution domain types) via
1829 the personality(2) system call. (Linux only).
1830
1831 --personality-ops N
1832 stop personality stress workers after N bogo personality opera‐
1833 tions.
1834
1835 -p N, --pipe N
1836 start N workers that perform large pipe writes and reads to
1837 exercise pipe I/O. This exercises memory write and reads as
1838 well as context switching. Each worker has two processes, a
1839 reader and a writer.
1840
1841 --pipe-ops N
1842 stop pipe stress workers after N bogo pipe write operations.
1843
1844 --pipe-data-size N
1845 specifies the size in bytes of each write to the pipe (range
1846 from 4 bytes to 4096 bytes). Setting a small data size will
1847 cause more writes to be buffered in the pipe, hence reducing the
1848 context switch rate between the pipe writer and pipe reader pro‐
1849 cesses. Default size is the page size.
1850
1851 --pipe-size N
1852 specifies the size of the pipe in bytes (for systems that sup‐
1853 port the F_SETPIPE_SZ fcntl() command). Setting a small pipe
1854 size will cause the pipe to fill and block more frequently,
1855 hence increasing the context switch rate between the pipe writer
1856 and the pipe reader processes. Default size is 512 bytes.
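
For example, an illustrative pipe invocation (values are arbitrary)
is:

    stress-ng --pipe 8 --pipe-data-size 4096 -t 2m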
1857
1858 -P N, --poll N
1859 start N workers that perform zero timeout polling via the
1860 poll(2), select(2) and sleep(3) calls. This wastes system and
1861 user time doing nothing.
1862
1863 --poll-ops N
1864 stop poll stress workers after N bogo poll operations.
1865
1866 --procfs N
1867 start N workers that read files from /proc and recursively read
1868 files from /proc/self (Linux only).
1869
1870 --procfs-ops N
1871 stop procfs reading after N bogo read operations. Note, since
1872 the number of entries may vary between kernels, this bogo ops
1873 metric is probably very misleading.
1874
1875 --pthread N
1876 start N workers that iteratively create and terminate multiple
1877 pthreads (the default is 1024 pthreads per worker). In each
1878 iteration, each newly created pthread waits until the worker has
1879 created all the pthreads and then they all terminate together.
1880
1881 --pthread-ops N
1882 stop pthread workers after N bogo pthread create operations.
1883
1884 --pthread-max N
1885 create N pthreads per worker. If the product of the number of
1886 pthreads by the number of workers is greater than the soft limit
1887 of allowed pthreads then the maximum is re-adjusted down to the
1888 maximum allowed.
1889
1890 --ptrace N
1891 start N workers that fork and trace system calls of a child
1892 process using ptrace(2).
1893
1894 --ptrace-ops N
1895 stop ptracer workers after N bogo system calls are traced.
1896
1897 --pty N
1898 start N workers that repeatedly attempt to open pseudoterminals
1899 and perform various pty ioctls upon the ptys before closing
1900 them.
1901
1902 --pty-ops N
1903 stop pty workers after N pty bogo operations.
1904
1905 --pty-max N
1906 try to open a maximum of N pseudoterminals, the default is
1907 65536. The allowed range of this setting is 8..65536.
1908
1909 -Q N, --qsort N
1910 start N workers that sort 32 bit integers using qsort.
1911
1912 --qsort-ops N
1913 stop qsort stress workers after N bogo qsorts.
1914
1915 --qsort-size N
1916 specify number of 32 bit integers to sort, default is 262144
1917 (256 × 1024).
1918
1919 --quota N
1920 start N workers that exercise the Q_GETQUOTA, Q_GETFMT, Q_GET‐
1921 INFO, Q_GETSTATS and Q_SYNC quotactl(2) commands on all the
1922 available mounted block based file systems.
1923
1924 --quota-ops N
1925 stop quota stress workers after N bogo quotactl operations.
1926
1927 --rdrand N
1928 start N workers that read the Intel hardware random number gen‐
1929 erator (Intel Ivybridge processors upwards).
1930
1931 --rdrand-ops N
1932 stop rdrand stress workers after N bogo rdrand operations (1
1933 bogo op = 2048 random bits successfully read).
1934
1935 --readahead N
1936 start N workers that randomly seek and perform 512 byte
1937 read/write I/O operations on a file with readahead. The default
1938 file size is 1 GB. Readaheads and reads are batched into 16
1939 readaheads and then 16 reads.
1940
1941 --readahead-bytes N
1942 set the size of readahead file, the default is 1 GB. One can
1943 specify the size as % of free space on the file system or in
1944 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
1945 m or g.
1946
1947 --readahead-ops N
1948 stop readahead stress workers after N bogo read operations.
1949
1950 --remap N
1951 start N workers that map 512 pages and re-order these pages
1952 using the deprecated system call remap_file_pages(2). Several
1953 page re-orderings are exercised: forward, reverse, random and
1954 many pages to 1 page.
1955
1956 --remap-ops N
1957 stop after N remapping bogo operations.
1958
1959 -R N, --rename N
1960 start N workers that each create a file and then repeatedly
1961 rename it.
1962
1963 --rename-ops N
1964 stop rename stress workers after N bogo rename operations.
1965
1966 --resources N
1967 start N workers that consume various system resources. Each
1968 worker will spawn 1024 child processes that iterate 1024 times
1969 consuming shared memory, heap, stack, temporary files and vari‐
1970 ous file descriptors (eventfds, memoryfds,
1971 userfaultfds, pipes and sockets).
1972
1973 --resources-ops N
1974 stop after N resource child forks.
1975
1976 --rlimit N
1977 start N workers that exceed CPU and file size resource limits,
1978 generating SIGXCPU and SIGXFSZ signals.
1979
1980 --rlimit-ops N
1981 stop after N bogo resource limited SIGXCPU and SIGXFSZ signals
1982 have been caught.
1983
1984 --rmap N
1985 start N workers that exercise the VM reverse-mapping. This cre‐
1986 ates 16 processes per worker that write/read multiple file-
1987 backed memory mappings. There are 64 lots of 4 page mappings
1988 made onto the file, with each mapping overlapping the previous
1989 by 3 pages and at least 1 page of non-mapped memory between each
1990 of the mappings. Data is synchronously msync'd to the file 1 in
1991 every 256 iterations in a random manner.
1992
1993 --rmap-ops N
1994 stop after N bogo rmap memory writes/reads.
1995
1996 --rtc N
1997 start N workers that exercise the real time clock (RTC) inter‐
1998 faces via /dev/rtc and /sys/class/rtc/rtc0. No destructive
1999 writes (modifications) are performed on the RTC. This is a Linux
2000 only stressor.
2001
2002 --rtc-ops N
2003 stop after N bogo RTC interface accesses.
2004
2005 --schedpolicy N
2006 start N workers that repeatedly set the worker to various available
2007 scheduling policies out of SCHED_OTHER, SCHED_BATCH, SCHED_IDLE,
2008 SCHED_FIFO and SCHED_RR. For the real time scheduling policies
2009 a random sched priority is selected between the minimum and max‐
2010 imum scheduling priority settings.
2011
2012 --schedpolicy-ops N
2013 stop after N bogo scheduling policy changes.
2014
2015 --sctp N
2016 start N workers that perform network sctp stress activity using
2017 the Stream Control Transmission Protocol (SCTP). This involves
2018 client/server processes performing rapid connect, send/receives
2019 and disconnects on the local host.
2020
2021 --sctp-domain D
2022 specify the domain to use, the default is ipv4. Currently ipv4
2023 and ipv6 are supported.
2024
2025 --sctp-ops N
2026 stop sctp workers after N bogo operations.
2027
2028 --sctp-port P
2029 start at sctp port P. For N sctp worker processes, ports P to (P
2030 * 4) - 1 are used for ipv4, ipv6 domains and ports P to P - 1
2031 are used for the unix domain.
2032
2033 --seal N
2034 start N workers that exercise the fcntl(2) SEAL commands on a
2035 small anonymous file created using memfd_create(2). After each
2036 SEAL command is issued the stressor also sanity checks if the
2037 seal operation has sealed the file correctly. (Linux only).
2038
2039 --seal-ops N
2040 stop after N bogo seal operations.
2041
2042 --seccomp N
2043 start N workers that exercise Secure Computing system call fil‐
2044 tering. Each worker creates child processes that write a short
2045 message to /dev/null and then exits. 2% of the child processes
2046 have a seccomp filter that disallows the write system call and
2047 hence it is killed by seccomp with a SIGSYS. Note that this
2048 stressor can generate many audit log messages each time the
2049 child is killed.
2050
2051 --seccomp-ops N
2052 stop seccomp stress workers after N seccomp filter tests.
2053
2054 --seek N
2055 start N workers that randomly seek and perform 512 byte
2056 read/write I/O operations on a file. The default file size is 16
2057 GB.
2058
2059 --seek-ops N
2060 stop seek stress workers after N bogo seek operations.
2061
2062 --seek-punch
2063 punch randomly located 8K holes into the file to cause more
2064 extents to force a more demanding seek stressor, (Linux only).
2065
2066 --seek-size N
2067 specify the size of the file in bytes. Small file sizes allow
2068 the I/O to occur in the cache, causing greater CPU load. Large
2069 file sizes force more I/O operations to the drive causing more wait
2070 time and more I/O on the drive. One can specify the size in
2071 units of Bytes, KBytes, MBytes and GBytes using the suffix b, k,
2072 m or g.
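
For example, an illustrative seek invocation (file size and operation
count are arbitrary) is:

    stress-ng --seek 4 --seek-size 256m --seek-ops 100000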
2073
2074 --sem N
2075 start N workers that perform POSIX semaphore wait and post oper‐
2076 ations. By default, a parent and 4 children are started per
2077 worker to provide some contention on the semaphore. This
2078 stresses fast semaphore operations and produces rapid context
2079 switching.
2080
2081 --sem-ops N
2082 stop semaphore stress workers after N bogo semaphore operations.
2083
2084 --sem-procs N
2085 start N child workers per worker to provide contention on the
2086 semaphore, the default is 4 and a maximum of 64 are allowed.
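
For example, an illustrative POSIX semaphore invocation (values are
arbitrary) is:

    stress-ng --sem 2 --sem-procs 8 --sem-ops 1000000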
2087
2088 --sem-sysv N
2089 start N workers that perform System V semaphore wait and post
2090 operations. By default, a parent and 4 children are started per
2091 worker to provide some contention on the semaphore. This
2092 stresses fast semaphore operations and produces rapid context
2093 switching.
2094
2095 --sem-sysv-ops N
2096 stop semaphore stress workers after N bogo System V semaphore
2097 operations.
2098
2099 --sem-sysv-procs N
2100 start N child processes per worker to provide contention on the
2101 System V semaphore, the default is 4 and a maximum of 64 are
2102 allowed.
2103
2104 --sendfile N
2105 start N workers that send an empty file to /dev/null. This oper‐
2106 ation spends nearly all the time in the kernel. The default
2107 sendfile size is 4MB. The sendfile options are for Linux only.
2108
2109 --sendfile-ops N
2110 stop sendfile workers after N sendfile bogo operations.
2111
2112 --sendfile-size S
2113 specify the size to be copied with each sendfile call. The
2114 default size is 4MB. One can specify the size in units of Bytes,
2115 KBytes, MBytes and GBytes using the suffix b, k, m or g.
2116
2117 --shm N
2118 start N workers that open and allocate shared memory objects
2119 using the POSIX shared memory interfaces. By default, the test
2120 will repeatedly create and destroy 32 shared memory objects,
2121 each of which is 8MB in size.
2122
2123 --shm-ops N
2124 stop after N POSIX shared memory create and destroy bogo opera‐
2125 tions are complete.
2126
2127 --shm-bytes N
2128 specify the size of the POSIX shared memory objects to be cre‐
2129 ated. One can specify the size as % of total available memory or
2130 in units of Bytes, KBytes, MBytes and GBytes using the suffix b,
2131 k, m or g.
2132
2133 --shm-objs N
2134 specify the number of shared memory objects to be created.
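
For example, an illustrative POSIX shared memory invocation (sizes
and object counts are arbitrary) is:

    stress-ng --shm 4 --shm-bytes 16m --shm-objs 16 -t 1m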
2135
2136 --shm-sysv N
2137 start N workers that allocate shared memory using the System V
2138 shared memory interface. By default, the test will repeatedly
2139 create and destroy 8 shared memory segments, each of which is
2140 8MB in size.
2141
2142 --shm-sysv-ops N
2143 stop after N shared memory create and destroy bogo operations
2144 are complete.
2145
2146 --shm-sysv-bytes N
2147 specify the size of the shared memory segment to be created. One
2148 can specify the size as % of total available memory or in units
2149 of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or
2150 g.
2151
2152 --shm-sysv-segs N
2153 specify the number of shared memory segments to be created.
2154
2155 --sigfd N
2156 start N workers that generate SIGRT signals that are handled via
2157 reads by a child process using a file descriptor set up using
2158 signalfd(2). (Linux only). This will generate a heavy context
2159 switch load when all CPUs are fully loaded.
2160
2161 --sigfd-ops N
2162 stop sigfd workers after N bogo SIGUSR1 signals are sent.
2163
2164 --sigfpe N
2165 start N workers that rapidly cause division by zero SIGFPE
2166 faults.
2167
2168 --sigfpe-ops N
2169 stop sigfpe stress workers after N bogo SIGFPE faults.
2170
2171 --sigpending N
2172 start N workers that check if SIGUSR1 signals are pending. This
2173 stressor masks SIGUSR1, generates a SIGUSR1 signal and uses sig‐
2174 pending(2) to see if the signal is pending. Then it unmasks the
2175 signal and checks if the signal is no longer pending.
2176
2177 --sigpending-ops N
2178 stop sigpending stress workers after N bogo sigpending pend‐
2179 ing/unpending checks.
2180
2181 --sigsegv N
2182 start N workers that rapidly create and catch segmentation
2183 faults.
2184
2185 --sigsegv-ops N
2186 stop sigsegv stress workers after N bogo segmentation faults.
2187
2188 --sigsuspend N
2189 start N workers that each spawn off 4 child processes that wait
2190 for a SIGUSR1 signal from the parent using sigsuspend(2). The
2191 parent sends SIGUSR1 signals to each child in rapid succession.
2192 Each sigsuspend wakeup is counted as one bogo operation.
2193
2194 --sigsuspend-ops N
2195 stop sigsuspend stress workers after N bogo sigsuspend wakeups.
2196
2197 --sigq N
2198 start N workers that rapidly send SIGUSR1 signals using
2199 sigqueue(3) to child processes that wait for the signal via sig‐
2200 waitinfo(2).
2201
2202 --sigq-ops N
2203 stop sigq stress workers after N bogo signal send operations.
2204
2205 --sleep N
2206 start N workers that spawn off multiple threads that each per‐
2207 form multiple sleeps ranging from 1us to 0.1s. This creates multi‐
2208 ple context switches and timer interrupts.
2209
2210 --sleep-ops N
2211 stop after N sleep bogo operations.
2212
2213 --sleep-max P
2214 start P threads per worker. The default is 1024, the maximum
2215 allowed is 30000.
2216
2217 -S N, --sock N
2218 start N workers that perform various socket stress activity.
2219 This involves a pair of client/server processes performing rapid
2220 connect, send and receives and disconnects on the local host.
2221
2222 --sock-domain D
2223 specify the domain to use, the default is ipv4. Currently ipv4,
2224 ipv6 and unix are supported.
2225
2226 --sock-nodelay
2227 This disables the TCP Nagle algorithm, so data segments are
2228 always sent as soon as possible. This stops data from being
2229 buffered before being transmitted, hence resulting in poorer
2230 network utilisation and more context switches between the sender
2231 and receiver.
2232
2233 --sock-port P
2234 start at socket port P. For N socket worker processes, ports P
2235 to P + N - 1 are used.
2236
2237 --sock-ops N
2238 stop socket stress workers after N bogo operations.
2239
2240 --sock-opts [ send | sendmsg | sendmmsg ]
2241 by default, messages are sent using send(2). This option allows
2242 one to specify the sending method using send(2), sendmsg(2) or
2243 sendmmsg(2). Note that sendmmsg is only available for Linux
2244 systems that support this system call.
2245
2246 --sock-type [ stream | seqpacket ]
2247 specify the socket type to use. The default type is stream. seq‐
2248 packet currently only works for the unix socket domain.
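
For example, an illustrative socket invocation using the unix domain
with seqpacket sockets (worker count and duration are arbitrary) is:

    stress-ng --sock 2 --sock-domain unix --sock-type seqpacket -t 1m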
2249
2250 --sockfd N
2251 start N workers that pass file descriptors over a UNIX domain
2252 socket using the CMSG(3) ancillary data mechanism. For each
2253 worker, a pair of client/server processes is created, the server
2254 opens as many file descriptors on /dev/null as possible and
2255 passes these over the socket to a client that reads these from
2256 the CMSG data and immediately closes the files.
2257
2258 --sockfd-ops N
2259 stop sockfd stress workers after N bogo operations.
2260
2261 --sockfd-port P
2262 start at socket port P. For N socket worker processes, ports P
2263 to P + N - 1 are used.
2264
2265 --sockpair N
2266 start N workers that perform socket pair I/O read/writes. This
2267 involves a pair of client/server processes performing randomly
2268 sized socket I/O operations.
2269
2270 --sockpair-ops N
2271 stop socket pair stress workers after N bogo operations.
2272
2273 --spawn N
2274 start N workers continually spawning children using posix_spawn(3)
2275 that exec stress-ng and then exit almost immediately. Currently
2276 Linux only.
2277
2278 --spawn-ops N
2279 stop spawn stress workers after N bogo spawns.
2280
2281 --splice N
2282 move data from /dev/zero to /dev/null through a pipe without any
2283 copying between kernel address space and user address space
2284 using splice(2). This is only available for Linux.
2285
2286 --splice-ops N
2287 stop after N bogo splice operations.
2288
2289 --splice-bytes N
2290 transfer N bytes per splice call, the default is 64K. One can
2291 specify the size as % of total available memory or in units of
2292 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
2293
2294 --stack N
2295 start N workers that rapidly cause and catch stack overflows by
2296 use of alloca(3).
2297
2298 --stack-fill
2299 the default action is to touch the lowest page on each stack
2300 allocation. This option touches all the pages by filling the new
2301 stack allocation with zeros which forces physical pages to be
2302 allocated and hence is more aggressive.
2303
2304 --stack-ops N
2305 stop stack stress workers after N bogo stack overflows.
2306
2307 --stackmmap N
2308 start N workers that use a 2MB stack that is memory mapped onto
2309 a temporary file. A recursive function works down the stack and
2310 flushes dirty stack pages back to the memory mapped file using
2311 msync(2) until the end of the stack is reached (stack overflow).
2312 This exercises dirty page and stack exception handling.
2313
2314 --stackmmap-ops N
2315 stop workers after N stack overflows have occurred.
2316
2317 --str N
2318 start N workers that exercise various libc string functions on
2319 random strings.
2320
2321 --str-method strfunc
2322 select a specific libc string function to stress. Available
2323 string functions to stress are: all, index, rindex, strcasecmp,
2324 strcat, strchr, strcoll, strcmp, strcpy, strlen, strncasecmp,
2325 strncat, strncmp, strrchr and strxfrm. See string(3) for more
2326 information on these string functions. The 'all' method is the
2327 default and will exercise all the string methods.
2328
2329 --str-ops N
2330 stop after N bogo string operations.
2331
2332 --stream N
2333 start N workers exercising a memory bandwidth stressor loosely
2334 based on the STREAM "Sustainable Memory Bandwidth in High Per‐
2335 formance Computers" benchmarking tool by John D. McCalpin, Ph.D.
2336 This stressor allocates buffers that are at least 4 times the
2337 size of the CPU L2 cache and continually performs rounds of the
2338 following computations on large arrays of double precision floating
2339 point numbers:
2340
2341 Operation Description
2342 copy c[i] = a[i]
2343 scale b[i] = scalar * c[i]
2344 add c[i] = a[i] + b[i]
2345 triad a[i] = b[i] + (c[i] * scalar)
2346
2347 Since this is loosely based on a variant of the STREAM benchmark
2348 code, DO NOT submit results based on this as it is intended in
2349 stress-ng just to stress memory and compute and is NOT intended
2350 for accurate tuned or non-tuned STREAM benchmarking whatsoever.
2351 Use the official STREAM benchmarking tool if you desire accurate
2352 and standardised STREAM benchmarks.
2353
2354 --stream-ops N
2355 stop after N stream bogo operations, where a bogo operation is
2356 one round of copy, scale, add and triad operations.
2357
2358 --stream-l3-size N
2359 Specify the CPU Level 3 cache size in bytes. One can specify
2360 the size in units of Bytes, KBytes, MBytes and GBytes using the
2361 suffix b, k, m or g. If the L3 cache size is not provided, then
2362 stress-ng will attempt to determine the cache size, and failing
2363 this, will default the size to 4MB.
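
For example, an illustrative stream invocation (the L3 size shown is
an arbitrary example, not a measured value) is:

    stress-ng --stream 4 --stream-l3-size 8m -t 5m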
2364
2365 -s N, --switch N
2366 start N workers that send messages via pipe to a child to force
2367 context switching.
2368
2369 --switch-ops N
2370 stop context switching workers after N bogo operations.
2371
2372 --symlink N
2373 start N workers creating and removing symbolic links.
2374
2375 --symlink-ops N
2376 stop symlink stress workers after N bogo operations.
2377
2378 --sync-file N
2379 start N workers that perform a range of data syncs across a file
2380 using sync_file_range(2). Three mixes of syncs are performed,
2381 from start to the end of the file, from end of the file to the
2382 start, and a random mix. A random selection of valid sync types
2383 are used, covering the SYNC_FILE_RANGE_WAIT_BEFORE,
2384 SYNC_FILE_RANGE_WRITE and SYNC_FILE_RANGE_WAIT_AFTER flag bits.
2385
2386 --sync-file-ops N
2387 stop sync-file workers after N bogo sync operations.
2388
2389 --sync-file-bytes N
2390 specify the size of the file to be sync'd. One can specify the
2391 size as % of free space on the file system or in units of Bytes,
2392 KBytes, MBytes and GBytes using the suffix b, k, m or g.
2393
2394 --sysinfo N
2395 start N workers that continually read system and process spe‐
2396 cific information. This reads the process user and system times
2397 using the times(2) system call. For Linux systems, it also
2398 reads overall system statistics using the sysinfo(2) system call
2399 and also the file system statistics for all mounted file systems
2400 using statfs(2).
2401
2402 --sysinfo-ops N
2403 stop the sysinfo workers after N bogo operations.
2404
2405 --sysfs N
2406 start N workers that recursively read files from /sys (Linux
2407 only). This may cause specific kernel drivers to emit messages
2408 into the kernel log.
2409
2410 --sys-ops N
2411 stop sysfs reading after N bogo read operations. Note, since the
2412 number of entries may vary between kernels, this bogo ops metric
2413 is probably very misleading.
2414
2415 --tee N
2416 move data from a writer process to a reader process through
2417 pipes and to /dev/null without any copying between kernel
2418 address space and user address space using tee(2). This is only
2419 available for Linux.
2420
2421 --tee-ops N
2422 stop after N bogo tee operations.
2423
2424 -T N, --timer N
2425 start N workers creating timer events at a default rate of 1 MHz
2426 (Linux only); this can create many thousands of timer clock
2427 interrupts. Each timer event is caught by a signal handler and
2428 counted as a bogo timer op.
2429
2430 --timer-ops N
2431 stop timer stress workers after N bogo timer events (Linux
2432 only).
2433
2434 --timer-freq F
2435 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
2436 By selecting an appropriate frequency stress-ng can generate
2437 hundreds of thousands of interrupts per second.
2438
2439 --timer-rand
2440 select a timer frequency based around the timer frequency +/-
2441 12.5% random jitter. This tries to force more variability in the
2442 timer interval to make the scheduling less predictable.
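
For example, an illustrative timer invocation (the frequency and
duration are arbitrary) is:

    stress-ng --timer 2 --timer-freq 10000 --timer-rand -t 1m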
2443
2444 --timerfd N
2445 start N workers creating timerfd events at a default rate of 1
2446 MHz (Linux only); this can create many thousands of timer
2447 clock events. Timer events are waited for on the timer file
2448 descriptor using select(2) and then read and counted as a bogo
2449 timerfd op.
2450
2451 --timerfd-ops N
2452 stop timerfd stress workers after N bogo timerfd events (Linux
2453 only).
2454
2455 --timerfd-freq F
2456 run timers at F Hz; range from 1 to 1000000000 Hz (Linux only).
2457 By selecting an appropriate frequency stress-ng can generate
2458 hundreds of thousands of interrupts per second.
2459
2460 --timerfd-rand
2461 select a timerfd frequency based around the timer frequency +/-
2462 12.5% random jitter. This tries to force more variability in the
2463 timer interval to make the scheduling less predictable.
2464
2465 --tlb-shootdown N
2466 start N workers that force Translation Lookaside Buffer (TLB)
2467 shootdowns. This is achieved by creating up to 16 child pro‐
2468 cesses that all share a region of memory and these processes are
2469 shared amongst the available CPUs. The processes adjust the
2470 page mapping settings causing TLBs to be force flushed on the
2471 other processors, causing the TLB shootdowns.
2472
2473 --tlb-shootdown-ops N
2474 stop after N bogo TLB shootdown operations are completed.
2475
2476 --tmpfs N
2477 start N workers that create a temporary file on an available
2478 tmpfs file system and perform various file based mmap operations
2479 upon it.
2480
2481 --tmpfs-ops N
2482 stop tmpfs stressors after N bogo mmap operations.
2483
2484 --tsc N
2485 start N workers that read the Time Stamp Counter (TSC) 256 times
2486 per loop iteration (bogo operation). Available only on Intel x86
2487 platforms.
2488
2489 --tsc-ops N
2490 stop the tsc workers after N bogo operations are completed.
2491
2492 --tsearch N
2493 start N workers that insert, search and delete 32 bit integers
2494 on a binary tree using tsearch(3), tfind(3) and tdelete(3). By
2495 default, there are 65536 randomized integers used in the tree.
2496 This is a useful method to exercise random access of memory and
2497 processor cache.
2498
2499 --tsearch-ops N
2500 stop the tsearch workers after N bogo tree operations are com‐
2501 pleted.
2502
2503 --tsearch-size N
2504 specify the size (number of 32 bit integers) in the array to
2505 tsearch. Size can be from 1K to 4M.
2506
2507 --udp N
2508 start N workers that transmit data using UDP. This involves a
2509 pair of client/server processes performing rapid connect, send
2510 and receives and disconnects on the local host.
2511
2512 --udp-domain D
2513 specify the domain to use, the default is ipv4. Currently ipv4,
2514 ipv6 and unix are supported.
2515
2516 --udp-lite
2517 use the UDP-Lite (RFC 3828) protocol (only for ipv4 and ipv6
2518 domains).
2519
2520 --udp-ops N
2521 stop udp stress workers after N bogo operations.
2522
2523 --udp-port P
2524 start at port P. For N udp worker processes, ports P to P + N - 1
2525 are used. By default, ports 7000 upwards are used.
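
For example, an illustrative UDP invocation (worker and operation
counts are arbitrary) is:

    stress-ng --udp 2 --udp-domain ipv6 --udp-ops 500000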
2526
2527 --udp-flood N
2528 start N workers that attempt to flood the host with UDP packets
2529 to random ports. The IP address of the packets are currently not
2530 spoofed. This is only available on systems that support
2531 AF_PACKET.
2532
2533 --udp-flood-domain D
2534 specify the domain to use, the default is ipv4. Currently ipv4
2535 and ipv6 are supported.
2536
2537 --udp-flood-ops N
2538 stop udp-flood stress workers after N bogo operations.
2539
2540 --unshare N
2541 start N workers that each fork off 32 child processes, each of
2542 which exercises the unshare(2) system call by disassociating
2543 parts of the process execution context. (Linux only).
2544
2545 --unshare-ops N
2546 stop after N bogo unshare operations.
2547
2548 -u N, --urandom N
2549 start N workers reading /dev/urandom (Linux only). This will
2550 load the kernel random number source.
2551
2552 --urandom-ops N
2553 stop urandom stress workers after N urandom bogo read operations
2554 (Linux only).
2555
2556 --userfaultfd N
2557 start N workers that generate write page faults on a small
2558 anonymously mapped memory region and handle these faults using
2559 the user space fault handling via the userfaultfd mechanism.
2560 This will generate a large quantity of major page faults and also
2561 context switches during the handling of the page faults. (Linux
2562 only).
2563
2564 --userfaultfd-ops N
2565 stop userfaultfd stress workers after N page faults.
2566
2567 --userfaultfd-bytes N
2568 mmap N bytes per userfaultfd worker to page fault on, the
2569 default is 16MB. One can specify the size as % of total available
2570 memory or in units of Bytes, KBytes, MBytes and GBytes using the
2571 suffix b, k, m or g.
2572
2573 --utime N
2574 start N workers updating file timestamps. This is mainly CPU
2575 bound when the default is used as the system flushes metadata
2576 changes only periodically.
2577
2578 --utime-ops N
2579 stop utime stress workers after N utime bogo operations.
2580
2581 --utime-fsync
2582 force metadata changes on each file timestamp update to be
2583 flushed to disk. This forces the test to become I/O bound and
2584 will result in many dirty metadata writes.
2585
2586 --vecmath N
2587 start N workers that perform various unsigned integer math oper‐
2588 ations on various 128 bit vectors. A mix of vector math opera‐
2589 tions are performed on the following vectors: 16 × 8 bits, 8 ×
2590 16 bits, 4 × 32 bits, 2 × 64 bits. The metrics produced by this
2591 mix depend on the processor architecture and the vector math
2592 optimisations produced by the compiler.
2593
2594 --vecmath-ops N
2595 stop after N bogo vector integer math operations.
2596
2597 --vfork N
2598 start N workers continually vforking children that immediately
2599 exit.
2600
2601 --vfork-ops N
2602 stop vfork stress workers after N bogo operations.
2603
2604 --vfork-max P
2605 create P processes and then wait for them to exit per iteration.
2606 The default is just 1; higher values will create many temporary
2607 zombie processes that are waiting to be reaped. One can poten‐
2608 tially fill up the process table using high values for
2609 --vfork-max and --vfork.
2610
2611 --vforkmany N
2612 start N workers that spawn off a chain of vfork children until
2613 the process table fills up and/or vfork fails. vfork can
2614 rapidly create child processes and the parent process has to
2615 wait until the child dies, so this stressor rapidly fills up the
2616 process table.
2617
2618 --vforkmany-ops N
2619 stop vforkmany stressors after N vforks have been made.
2620
2621 -m N, --vm N
2622 start N workers continuously calling mmap(2)/munmap(2) and writ‐
2623 ing to the allocated memory. Note that this can cause systems to
2624 trip the kernel OOM killer on Linux systems if there is not
2625 enough physical memory and swap available.
2626
2627 --vm-bytes N
2628 mmap N bytes per vm worker, the default is 256MB. One can spec‐
2629 ify the size as % of total available memory or in units of
2630 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
2631
2632 --vm-ops N
2633 stop vm workers after N bogo operations.
2634
2635 --vm-hang N
2636 sleep N seconds before unmapping memory, the default is zero
2637 seconds. Specifying 0 will do an infinite wait.
2638
2639 --vm-keep
2640 do not continually unmap and map memory, just keep on re-writing
2641 to it.
2642
2643 --vm-locked
2644 Lock the pages of the mapped region into memory using mmap
2645 MAP_LOCKED (since Linux 2.5.37). This is similar to locking
2646 memory as described in mlock(2).
2647
2648 --vm-method m
2649 specify a vm stress method. By default, all the stress methods
2650 are exercised sequentially, however one can specify just one
2651 method to be used if required. Each of the vm workers have 3
2652 phases:
2653
2654 1. Initialised. The anonymous memory mapped region is set to a
2655 known pattern.
2656
2657 2. Exercised. Memory is modified in a known predictable way.
2658 Some vm workers alter memory sequentially, some use small or
2659 large strides to step along memory.
2660
2661 3. Checked. The modified memory is checked to see if it matches
2662 the expected result.
2663
2664 The vm methods containing 'prime' in their name have a stride of
2665 the largest prime less than 2^64, allowing them to thoroughly
2666 step through memory and touch all locations just once while also
2667 avoiding touching memory cells next to each other. This
2668 strategy exercises the cache and page non-locality.
2669
2670 Since the memory being exercised is virtually mapped then there
2671 is no guarantee of touching page addresses in any particular
2672 physical order. These workers should not be used to test that
2673 all the system's memory is working correctly either, use tools
2674 such as memtest86 instead.
2675
2676 The vm stress methods are intended to exercise memory in ways to
2677 possibly find memory issues and to try to force thermal errors.
2678
2679 Available vm stress methods are described as follows:
2680
2681 Method Description
2682 all iterate over all the vm stress methods
2683 as listed below.
2684 flip sequentially work through memory 8
2685 times, each time just one bit in memory
2686 is flipped (inverted). This will effec‐
2687 tively invert each byte in 8 passes.
2688 galpat-0 galloping pattern zeros. This sets all
2689 bits to 0 and flips just 1 in 4096 bits
2690 to 1. It then checks to see if the 1s
2691 are pulled down to 0 by their neighbours
2692 or if the neighbours have been pulled up
2693 to 1.
2694 galpat-1 galloping pattern ones. This sets all
2695 bits to 1 and flips just 1 in 4096 bits
2696 to 0. It then checks to see if the 0s
2697 are pulled up to 1 by their neighbours
2698 or if the neighbours have been pulled
2699 down to 0.
2700 gray fill the memory with sequential gray
2701 codes (these only change 1 bit at a time
2702 between adjacent bytes) and then check
2703 if they are set correctly.
2704 incdec work sequentially through memory twice,
2705 the first pass increments each byte by a
2706 specific value and the second pass
2707 decrements each byte back to the origi‐
2708 nal start value. The increment/decrement
2709 value changes on each invocation of the
2710 stressor.
2711 inc-nybble initialise memory to a set value (that
2712 changes on each invocation of the stres‐
2713 sor) and then sequentially work through
2714 each byte incrementing the bottom 4 bits
2715 by 1 and the top 4 bits by 15.
2716 rand-set sequentially work through memory in 64
2717 bit chunks setting bytes in the chunk to
2718 the same 8 bit random value. The random
2719 value changes on each chunk. Check that
2720 the values have not changed.
2721 rand-sum sequentially set all memory to random
2722 values and then summate the number of
2723 bits that have changed from the original
2724 set values.
2725 read64 sequentially read memory using 32 x 64
2726 bit reads per bogo loop. Each loop
2727 equates to one bogo operation. This
2728 exercises raw memory reads.
2729 ror fill memory with a random pattern and
2730 then sequentially rotate 64 bits of mem‐
2731 ory right by one bit, then check the
2732 final load/rotate/stored values.
2739 swap fill memory in 64 byte chunks with ran‐
2740 dom patterns. Then swap each 64 byte chunk
2741 with a randomly chosen chunk. Finally,
2742 reverse the swap to put the chunks back
2743 to their original place and check if the
2744 data is correct. This exercises adjacent
2745 and random memory load/stores.
2746 move-inv sequentially fill memory 64 bits at a
2747 time with random values, and
2748 then check if the memory is set cor‐
2749 rectly. Next, sequentially invert each
2750 64 bit pattern and again check if the
2751 memory is set as expected.
2752 modulo-x fill memory over 23 iterations. Each
2753 iteration starts one byte further along
2754 from the start of the memory and steps
2755 along in 23 byte strides. In each
2756 stride, the first byte is set to a ran‐
2757 dom pattern and all other bytes are set
2758 to the inverse. Then it checks to see if
2759 the first byte contains the expected
2760 random pattern. This exercises cache
2761 store/reads as well as seeing if neigh‐
2762 bouring cells influence each other.
2763 prime-0 iterate 8 times by stepping through mem‐
2764 ory in very large prime strides clearing
2765 just one bit at a time in every byte.
2766 Then check to see if all bits are set to
2767 zero.
2768 prime-1 iterate 8 times by stepping through mem‐
2769 ory in very large prime strides setting
2770 just one bit at a time in every byte.
2771 Then check to see if all bits are set to
2772 one.
2773 prime-gray-0 first step through memory in very large
2774 prime strides clearing just one bit
2775 (based on a gray code) in every byte.
2776 Next, repeat this but clear the other 7
2777 bits. Then check to see if all bits are
2778 set to zero.
2779 prime-gray-1 first step through memory in very large
2780 prime strides setting just one bit (based
2781 on a gray code) in every byte. Next,
2782 repeat this but set the other 7 bits.
2783 Then check to see if all bits are set to
2784 one.
2785 rowhammer try to force memory corruption using the
2786 rowhammer memory stressor. This fetches
2787 two 32 bit integers from memory and
2788 forces a cache flush on the two
2789 addresses multiple times. This has been
2790 known to force bit flipping on some
2791 hardware, especially with lower fre‐
2792 quency memory refresh cycles.
2793 walk-0d for each byte in memory, walk through
2794 each data line setting them to low (and
2795 the others are set high) and check that
2796 the written value is as expected. This
2797 checks if any data lines are stuck.
2798 walk-1d for each byte in memory, walk through
2799 each data line setting them to high (and
2800 the others are set low) and check that
2801 the written value is as expected. This
2802 checks if any data lines are stuck.
2803 walk-0a in the given memory mapping, work
2804 through a range of specially chosen
2805 addresses working through address lines
2806 to see if any address lines are stuck
2807 low. This works best with physical mem‐
2808 ory addressing, however, exercising
2809 these virtual addresses has some value
2810 too.
2811 walk-1a in the given memory mapping, work
2812 through a range of specially chosen
2813 addresses working through address lines
2814 to see if any address lines are stuck
2815 high. This works best with physical mem‐
2816 ory addressing, however, exercising
2817 these virtual addresses has some value
2818 too.
2819 write64 sequentially write memory using 32 x 64
2820 bit writes per bogo loop. Each loop
2821 equates to one bogo operation. This
2822 exercises raw memory writes. Note that
2823 memory writes are not checked at the end
2824 of each test iteration.
2825 zero-one set all memory bits to zero and then
2826 check if any bits are not zero. Next,
2827 set all the memory bits to one and check
2828 if any bits are not one.
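
For example, an illustrative invocation that exercises just the
prime-0 vm method (the size and duration are arbitrary) is:

    stress-ng --vm 2 --vm-bytes 1g --vm-method prime-0 --vm-keep -t 10m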
2829
2830 --vm-populate
2831 populate (prefault) page tables for the memory mappings; this
2832 can stress swapping. Only available on systems that support
2833 MAP_POPULATE (since Linux 2.5.46).
2834
2835 --vm-rw N
2836 start N workers that transfer memory to/from a parent/child
2837 using process_vm_writev(2) and process_vm_readv(2). This fea‐
2838 ture is only supported on Linux. Memory transfers are only ver‐
2839 ified if the --verify option is enabled.
2840
2841 --vm-rw-ops N
2842 stop vm-rw workers after N memory read/writes.
2843
2844 --vm-rw-bytes N
2845 mmap N bytes per vm-rw worker, the default is 16MB. One can
2846 specify the size as % of total available memory or in units of
2847 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
2848
2849 --vm-splice N
2850 move data from memory to /dev/null through a pipe without any
2851 copying between kernel address space and user address space
2852 using vmsplice(2) and splice(2). This is only available for
2853 Linux.
2854
2855 --vm-splice-ops N
2856 stop after N bogo vm-splice operations.
2857
2858 --vm-splice-bytes N
2859 transfer N bytes per vmsplice call, the default is 64K. One can
2860 specify the size as % of total available memory or in units of
2861 Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
2862
2863 --wait N
2864 start N workers that spawn off two children; one spins in a
2865 pause(2) loop, the other continually stops and continues the
2866 first. The controlling process waits on the first child to be
2867 resumed by the delivery of SIGCONT using waitpid(2) and
2868 waitid(2).
2869
2870 --wait-ops N
2871 stop after N bogo wait operations.
2872
2873 --wcs N
2874 start N workers that exercise various libc wide character string
2875 functions on random strings.
2876
2877 --wcs-method wcsfunc
2878 select a specific libc wide character string function to stress.
2879 Available string functions to stress are: all, wcscasecmp,
2880 wcscat, wcschr, wcscoll, wcscmp, wcscpy, wcslen, wcsncasecmp,
2881 wcsncat, wcsncmp, wcsrchr and wcsxfrm. The 'all' method is the
2882 default and will exercise all the string methods.
2883
2884 --wcs-ops N
2885 stop after N bogo wide character string operations.
2886
2887 --xattr N
2888 start N workers that create, update and delete batches of
2889 extended attributes on a file.
2890
2891 --xattr-ops N
2892 stop after N bogo extended attribute operations.
2893
2894 -y N, --yield N
2895 start N workers that call sched_yield(2). This stressor ensures
2896 that at least 2 child processes per CPU exercise sched_yield(2)
2897 no matter how many workers are specified, thus always ensuring
2898 rapid context switching.
2899
2900 --yield-ops N
2901 stop yield stress workers after N sched_yield(2) bogo opera‐
2902 tions.
2903
2904 --zero N
2905 start N workers reading /dev/zero.
2906
2907 --zero-ops N
2908 stop zero stress workers after N /dev/zero bogo read operations.
2909
2910 --zlib N
2911 start N workers compressing and decompressing random data using
2912 zlib. Each worker has two processes, one that compresses random
2913 data and pipes it to another process that decompresses the data.
2914 This stressor exercises CPU, cache and memory.
2915
2916 --zlib-ops N
2917 stop after N bogo compression operations, each bogo compression
2918 operation is a compression of 64K of random data at the highest
2919 compression level.
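
What one bogo compression operation roughly corresponds to can be sketched with
zlib's one-shot API, as below; stress-ng itself streams the data through a pipe
between two processes, so this is only an approximation (compile with -lz):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char src[65536];            /* 64K of pseudo-random data       */
        unsigned char comp[73728];           /* larger than compressBound(64K)  */
        unsigned char decomp[65536];
        uLongf comp_len = sizeof(comp), decomp_len = sizeof(decomp);
        size_t i;

        for (i = 0; i < sizeof(src); i++)
            src[i] = (unsigned char)rand();

        /* compress at the highest compression level ... */
        if (compress2(comp, &comp_len, src, sizeof(src), Z_BEST_COMPRESSION) != Z_OK)
            return 1;
        /* ... and decompress it again */
        if (uncompress(decomp, &decomp_len, comp, comp_len) != Z_OK)
            return 1;

        printf("64K compressed to %lu bytes\n", (unsigned long)comp_len);
        return memcmp(src, decomp, sizeof(src)) ? 1 : 0;
    }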
2920
2921 --zlib-method method
2922 specify the type of random data to send to the zlib library. By
2923 default, the data stream is created from a random selection of
2924 the different data generation processes. However one can spec‐
2925 ify just one method to be used if required. Available zlib data
2926 generation methods are described as follows:
2927
2928 Method Description
2929 random segments of the data stream are created by ran‐
2930 domly calling the different data generation
2931 methods.
2932 binary 32 bit random numbers
2933 text random ASCII text
2934 ascii01 randomly distributed ASCII 0 and 1 characters
2935 asciidigits randomly distributed ASCII digits in the range
2936 of 0 to 9
2937 00ff randomly distributed 0x00 and 0xFF values
2938 nybble randomly distributed bytes in the range of 0x00
2939 to 0x0f
2940 rarely1 data that has a single 1 in every 32 bits, ran‐
2941 domly located
2942 rarely0 data that has a single 0 in every 32 bits, ran‐
2943 domly located
2944 fixed data stream is the value 0x04030201 repeated
2945
2946 --zombie N
2947 start N workers that create zombie processes. This will rapidly
2948 try to create a default of 8192 child processes that immediately
2949 die and wait in a zombie state until they are reaped. Once the
2950 maximum number of processes is reached (or fork fails because
2951 one has reached the maximum allowed number of children) the old‐
2952 est child is reaped and a new process is created in a first-in
2953 first-out manner; this cycle is then repeated.
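
The following C sketch shows the basic create-then-reap-in-first-in-first-out-
order pattern using a small, arbitrary number of children; the real stressor
additionally replaces the oldest zombie with a new child once the limit is
reached:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_ZOMBIES 64          /* arbitrary; far below the 8192 default */

    int main(void)
    {
        pid_t pids[MAX_ZOMBIES];
        int i, n = 0;

        /* create children that exit at once and linger as zombies */
        for (i = 0; i < MAX_ZOMBIES; i++) {
            pid_t pid = fork();
            if (pid < 0)
                break;              /* fork failed, stop creating more */
            if (pid == 0)
                _exit(0);           /* child dies immediately */
            pids[n++] = pid;
        }

        /* reap the zombies oldest first, i.e. in first-in first-out order */
        for (i = 0; i < n; i++)
            waitpid(pids[i], NULL, 0);

        return 0;
    }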
2954
2955 --zombie-ops N
2956 stop zombie stress workers after N bogo zombie operations.
2957
2958 --zombie-max N
2959 try to create as many as N zombie processes. This may not be
2960 reached if the system limit is less than N.
2961
2962 EXAMPLES
2963 stress-ng --vm 8 --vm-bytes 80% -t 1h
2964
2965 run 8 virtual memory stressors that combined use 80% of the
2966 available memory for 1 hour. Thus each stressor uses 10% of the
2967 available memory.
2968
2969 stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
2970
2971 runs for 60 seconds with 4 cpu stressors, 2 io stressors and 1
2972 vm stressor using 1GB of virtual memory.
2973
2974 stress-ng --iomix 2 --iomix-bytes 10% -t 10m
2975
2976 runs 2 instances of the mixed I/O stressors using a total of 10%
2977 of the available file system space for 10 minutes. Each stressor
2978 will use 5% of the available file system space.
2979
2980 stress-ng --cpu 8 --cpu-ops 800000
2981
2982 runs 8 cpu stressors and stops after 800000 bogo operations.
2983
2984 stress-ng --sequential 2 --timeout 2m --metrics
2985
2986 run 2 simultaneous instances of all the stressors sequentially
2987 one by one, each for 2 minutes and summarise with performance
2988 metrics at the end.
2989
2990 stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
2991
2992 run 4 FFT cpu stressors, stop after 10000 bogo operations and
2993 produce a summary just for the FFT results.
2994
2995 stress-ng --cpu 0 --cpu-method all -t 1h
2996
2997 run cpu stressors on all online CPUs working through all the
2998 available CPU stressors for 1 hour.
2999
3000 stress-ng --all 4 --timeout 5m
3001
3002 run 4 instances of all the stressors for 5 minutes.
3003
3004 stress-ng --random 64
3005
3006 run 64 stressors that are randomly chosen from all the available
3007 stressors.
3008
3009 stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
3010
3011 run 64 instances of all the different cpu stressors and verify
3012 that the computations are correct for 10 minutes with a bogo
3013 operations summary at the end.
3014
3015 stress-ng --sequential 0 -t 10m
3016
3017 run all the stressors one by one for 10 minutes, with the number
3018 of instances of each stressor matching the number of online
3019 CPUs.
3020
3021 stress-ng --sequential 8 --class io -t 5m --times
3022
3023 run all the stressors in the io class one by one for 5 minutes
3024 each, with 8 instances of each stressor running concurrently and
3025 show overall time utilisation statistics at the end of the run.
3026
3027 stress-ng --all 0 --maximize --aggressive
3028
3029 run all the stressors (1 instance of each per CPU) simultane‐
3030 ously, maximize the settings (memory sizes, file allocations,
3031 etc.) and select the most demanding/aggressive options.
3032
3033 stress-ng --random 32 -x numa,hdd,key
3034
3035 run 32 randomly selected stressors and exclude the numa, hdd and
3036 key stressors.
3037
3038 stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
3039
3040 run 4 instances of the VM stressors one after each other,
3041 excluding the bigheap, brk and stack stressors.
3042
3043 stress-ng --taskset 0,2-3 --cpu 3
3044
3045 run 3 instances of the CPU stressor and pin them to CPUs 0, 2
3046 and 3.
3047
3048 EXIT STATUS
3049 Status Description
3050 0 Success.
3051 1 Error; incorrect user options or a fatal resource issue in
3052 the stress-ng stressor harness (for example, out of mem‐
3053 ory).
3054 2 One or more stressors failed.
3055 3 One or more stressors failed to initialise because of lack
3056 of resources, for example ENOMEM (no memory), ENOSPC (no
3057 space on file system) or a missing or unimplemented system
3058 call.
3059 4 One or more stressors were not implemented on a specific
3060 architecture or operating system.
3061
3062 BUGS
3063 File bug reports at:
3064 https://launchpad.net/ubuntu/+source/stress-ng/+filebug
3065
3066 SEE ALSO
3067 cpuburn(1), perf(1), stress(1), taskset(1)
3068
3069 AUTHOR
3070 stress-ng was written by Colin King <colin.king@canonical.com> and is a
3071 clean room re-implementation and extension of the original stress tool
3072 by Amos Waterland <apw@rossby.metr.ou.edu>. Thanks also for contribu‐
3073 tions from Christian Ehrhardt, James Hunt, Jim Rowan, Joseph DeVincen‐
3074 tis, Luca Pizzamiglio, Luis Henriques, Rob Colclaser, Tim Gardner and
3075 Zhiyi Sun.
3076
3077 NOTES
3078 Sending a SIGALRM, SIGINT or SIGHUP to stress-ng causes it to terminate
3079 all the stressor processes and ensures temporary files and shared mem‐
3080 ory segments are removed cleanly.
3081
3082 Sending a SIGUSR2 to stress-ng will dump out the current load average
3083 and memory statistics.
3084
3085 Note that the stress-ng cpu, io, vm and hdd tests are different imple‐
3086 mentations of the original stress tests and hence may produce different
3087 stress characteristics. stress-ng does not support any GPU stress
3088 tests.
3089
3090 The bogo operations metrics may change with each release because of
3091 bug fixes to the code, new features, compiler optimisations or changes
3092 in system call performance.
3093
3094 COPYRIGHT
3095 Copyright © 2013-2017 Canonical Ltd.
3096 This is free software; see the source for copying conditions. There is
3097 NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
3098 PURPOSE.
3099
3100
3101
3102 March 27, 2017 STRESS-NG(1)