HYPERFINE(1)                General Commands Manual               HYPERFINE(1)

NAME
       hyperfine - command-line benchmarking tool

SYNOPSIS
       hyperfine [-ihVN] [--warmup NUM] [--min-runs NUM] [--max-runs NUM]
       [--runs NUM] [--setup CMD] [--prepare CMD] [--cleanup CMD]
       [--parameter-scan VAR MIN MAX] [--parameter-step-size DELTA]
       [--parameter-list VAR VALUES] [--shell SHELL] [--style TYPE]
       [--sort METHOD] [--time-unit UNIT] [--export-asciidoc FILE]
       [--export-csv FILE] [--export-json FILE] [--export-markdown FILE]
       [--export-orgmode FILE] [--output WHERE] [--input WHERE]
       [--command-name NAME] [COMMAND...]

DESCRIPTION
       A command-line benchmarking tool which includes:

       * Statistical analysis across multiple runs
       * Support for arbitrary shell commands
       * Constant feedback about the benchmark progress and current estimates
       * Warmup runs can be executed before the actual benchmark
       * Cache-clearing commands can be set up before each timing run
       * Statistical outlier detection to detect interference from other
         programs and caching effects
       * Export results to various formats: CSV, JSON, Markdown, AsciiDoc
       * Parameterized benchmarks (e.g. vary the number of threads)

OPTIONS
       -w, --warmup NUM

           Perform NUM warmup runs before the actual benchmark. This can be
           used to fill (disk) caches for I/O-heavy programs.
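
           For example, to warm up the disk cache with three untimed runs
           before benchmarking a recursive search (the search directory is
           only illustrative):
               hyperfine --warmup 3 'grep -R TODO src/'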

       -m, --min-runs NUM

           Perform at least NUM runs for each command. Default: 10.

       -M, --max-runs NUM

           Perform at most NUM runs for each command. By default, there is no
           limit.

       -r, --runs NUM

           Perform exactly NUM runs for each command. If this option is not
           specified, hyperfine automatically determines the number of runs.
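
           For example, to time a command exactly 20 times:
               hyperfine --runs 20 'sleep 0.05'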

       -s, --setup CMD...

           Execute CMD once before each set of timing runs. This is useful
           for compiling your software with the provided parameters, or for
           doing any other work that should happen once before a series of
           benchmark runs, rather than before every run as with the --prepare
           option.
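
           For example, to build an illustrative target once before its
           series of timing runs (the target and binary names are only
           placeholders):
               hyperfine --setup 'make my_benchmark' './my_benchmark'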

       -p, --prepare CMD...

           Execute CMD before each timing run. This is useful for clearing
           disk caches, for example. The --prepare option can be specified
           once for all commands or multiple times, once for each command. In
           the latter case, each preparation command will be run prior to the
           corresponding benchmark command.
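
           For example, to drop the Linux page cache before every timing run
           (the exact cache-dropping command depends on your system):
               hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO *'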

       -c, --cleanup CMD...

           Execute CMD after the completion of all benchmarking runs for each
           individual command to be benchmarked. This is useful if the
           commands to be benchmarked produce artifacts that need to be
           cleaned up.
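
           For example, to remove a temporary file produced by an
           illustrative benchmarked command:
               hyperfine --cleanup 'rm -f result.tmp' './produce_result > result.tmp'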

       -P, --parameter-scan VAR MIN MAX

           Perform benchmark runs for each value in the range MIN..MAX.
           Replaces the string '{VAR}' in each command by the current
           parameter value.

           Example:
               hyperfine -P threads 1 8 'make -j {threads}'

           This performs benchmarks for 'make -j 1', 'make -j 2', ...,
           'make -j 8'.

           To have the value increase following different patterns, use shell
           arithmetic.

           Example:
               hyperfine -P size 0 3 'sleep $((2**{size}))'

           This performs benchmarks with power-of-2 increases: 'sleep 1',
           'sleep 2', 'sleep 4', ...

           The exact syntax may vary depending on your shell and OS.

       -D, --parameter-step-size DELTA

           This argument requires --parameter-scan to be specified as well.
           Traverse the range MIN..MAX in steps of DELTA.

           Example:
               hyperfine -P delay 0.3 0.7 -D 0.2 'sleep {delay}'

           This performs benchmarks for 'sleep 0.3', 'sleep 0.5' and
           'sleep 0.7'.

       -L, --parameter-list VAR VALUES

           Perform benchmark runs for each value in the comma-separated list
           of VALUES. Replaces the string '{VAR}' in each command by the
           current parameter value.

           Example:
               hyperfine -L compiler gcc,clang '{compiler} -O2 main.cpp'

           This performs benchmarks for 'gcc -O2 main.cpp' and
           'clang -O2 main.cpp'.

           The option can be specified multiple times to run benchmarks for
           all possible parameter combinations.
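
           For example, specifying two lists benchmarks all four
           compiler/optimization combinations (the compilers and flags are
           only illustrative):
               hyperfine -L compiler gcc,clang -L opt 2,3 '{compiler} -O{opt} main.cpp'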

       -S, --shell SHELL

           Set the shell to use for executing benchmarked commands. This can
           be the name or the path to the shell executable, or a full command
           line like "bash --norc". It can also be set to "default" to
           explicitly select the default shell on this platform. Finally,
           this can also be set to "none" to disable the shell. In this case,
           commands will be executed directly. They can still have arguments,
           but more complex things like "sleep 0.1; sleep 0.2" are not
           possible without a shell.
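
           For example, to benchmark a compound command under a specific
           shell, or a single program with no intermediate shell at all:
               hyperfine --shell 'bash --norc' 'sleep 0.1; sleep 0.2'
               hyperfine --shell none 'sleep 0.3'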

       -N

           An alias for '--shell=none'.

       -i, --ignore-failure

           Ignore non-zero exit codes of the benchmarked programs.
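
           For example, 'grep' exits with a non-zero status when it finds no
           match, which would otherwise cause hyperfine to report an error
           (the file name here is only illustrative):
               hyperfine -i 'grep does-not-exist README.md'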

       --style TYPE

           Set output style TYPE (default: auto). Set this to 'basic' to
           disable output coloring and interactive elements. Set it to 'full'
           to enable all effects even if no interactive terminal was
           detected. Set this to 'nocolor' to keep the interactive output
           without any colors. Set this to 'color' to keep the colors without
           any interactive output. Set this to 'none' to disable all the
           output of the tool.
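
           For example, to get plain, uncolored output suitable for a CI log:
               hyperfine --style basic 'sleep 0.1'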

       --sort METHOD

           Specify the sort order of the speed comparison summary and the
           exported tables for markup formats (Markdown, AsciiDoc, org-mode):

           auto (default)
                  the speed comparison will be ordered by time and the markup
                  tables will be ordered by command (input order).

           command
                  order benchmarks in the way they were specified

           mean-time
                  order benchmarks by mean runtime
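
           For example, to order an exported Markdown table by mean runtime
           instead of input order (the output file name is illustrative):
               hyperfine --sort mean-time --export-markdown results.md 'sleep 0.2' 'sleep 0.1'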

       -u, --time-unit UNIT

           Set the time unit to be used. Possible values: microsecond,
           millisecond, second. If the option is not given, the time unit is
           determined automatically. This option affects the standard output
           as well as all export formats except for CSV and JSON.
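
           For example, to report all results in milliseconds:
               hyperfine --time-unit millisecond 'sleep 0.1'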

       --export-asciidoc FILE

           Export the timing summary statistics as an AsciiDoc table to the
           given FILE. The output time unit can be changed using the
           --time-unit option.

       --export-csv FILE

           Export the timing summary statistics as CSV to the given FILE. If
           you need the timing results for each individual run, use the JSON
           export format. The output time unit is always seconds.

       --export-json FILE

           Export the timing summary statistics and timings of individual
           runs as JSON to the given FILE. The output time unit is always
           seconds.
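
           For example, to write summary statistics and per-run timings to an
           illustrative file name:
               hyperfine --export-json results.json 'sleep 0.1' 'sleep 0.2'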

       --export-markdown FILE

           Export the timing summary statistics as a Markdown table to the
           given FILE. The output time unit can be changed using the
           --time-unit option.

       --export-orgmode FILE

           Export the timing summary statistics as an Emacs org-mode table to
           the given FILE. The output time unit can be changed using the
           --time-unit option.

       --show-output

           Print the stdout and stderr of the benchmark instead of
           suppressing it. This will increase the time it takes for
           benchmarks to run, so it should only be used for debugging
           purposes or when trying to benchmark output speed.

       --output WHERE

           Control where the output of the benchmark is redirected. Note that
           some programs like 'grep' detect when standard output is /dev/null
           and apply certain optimizations. To avoid that, consider using
           --output=pipe.

           WHERE can be:

           null    Redirect output to /dev/null (the default).

           pipe    Feed the output through a pipe before discarding it.

           inherit
                   Don't redirect the output at all (same as '--show-output').

           <FILE>  Write the output to the given file.
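
           For example, to keep 'grep' from detecting that its output is
           /dev/null:
               hyperfine --output=pipe 'grep -R TODO *'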

       --input WHERE

           Control where the input of the benchmark comes from.

           WHERE can be:

           null    Read from /dev/null (the default).

           <FILE>  Read the input from the given file.
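
           For example, to feed an illustrative input file to each timing
           run:
               hyperfine --input large.txt 'wc -l'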

       -n, --command-name NAME

           Give a meaningful NAME to a command. This can be specified
           multiple times if several commands are benchmarked.
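
           For example, to use illustrative labels instead of the full
           command strings in the summary:
               hyperfine -n fast 'sleep 0.1' -n slow 'sleep 0.3'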

       -h, --help

           Print help

       -V, --version

           Print version

EXAMPLES
       Basic benchmark of 'find . -name todo.txt':
           hyperfine 'find . -name todo.txt'

       Perform benchmarks for 'sleep 0.2' and 'sleep 3.2' with a minimum of 5
       runs each:
           hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'

       Perform a benchmark of 'grep' with a warm disk cache by executing 3
       runs up front that are not part of the measurement:
           hyperfine --warmup 3 'grep -R TODO *'

       Export the results of a parameter scan benchmark to a Markdown table:
           hyperfine --export-markdown output.md --parameter-scan time 1 5 'sleep {time}'

       Demonstrate when each of --setup, --prepare, the benchmarked command,
       and --cleanup will run:
           hyperfine -L n 1,2 -r 2 --show-output \
               --setup 'echo setup n={n}' \
               --prepare 'echo prepare={n}' \
               --cleanup 'echo cleanup n={n}' \
               'echo command n={n}'

AUTHOR
       David Peter <mail@david-peter.de>

       Source, bug tracker, and additional information can be found on
       GitHub: https://github.com/sharkdp/hyperfine



                                                                  HYPERFINE(1)