HYPERFINE(1)                General Commands Manual               HYPERFINE(1)

NAME

       hyperfine - command-line benchmarking tool

SYNOPSIS

       hyperfine [-ihV] [-w warmupruns] [-r runs] [-p cmd...] [-c cmd]
       [-s cmd] [cmd...]

DESCRIPTION

       A command-line benchmarking tool which includes:

              * Statistical analysis across multiple runs
              * Support for arbitrary shell commands
              * Constant feedback about the benchmark progress and current
                estimates
              * Warmup runs that can be executed before the actual benchmark
              * Cache-clearing commands that can be set up before each
                timing run
              * Statistical outlier detection to detect interference from
                other programs and caching effects
              * Export of results to various formats: CSV, JSON, Markdown,
                AsciiDoc
              * Parameterized benchmarks (e.g. vary the number of threads)

OPTIONS

       -w, --warmup warmupruns
              Perform warmupruns (number) before the actual benchmark. This
              can be used to fill (disk) caches for I/O-heavy programs.
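              For example, five untimed warmup runs could be requested
              before measuring a recursive grep (an illustrative invocation;
              see also EXAMPLES below):

                     hyperfine --warmup 5 'grep -R TODO *'
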
       -m, --min-runs minruns
              Perform at least minruns (number) runs for each command.
              Default: 10.

       -M, --max-runs maxruns
              Perform at most maxruns (number) runs for each command.
              Default: no limit.

       -r, --runs runs
              Perform exactly runs (number) runs for each command. If this
              option is not specified, hyperfine automatically determines
              the number of runs.
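              For example, exactly 20 timed runs of a command could be
              forced (an illustrative invocation):

                     hyperfine --runs 20 'sleep 0.1'
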
       -s, --setup cmd
              Execute cmd once before each set of timing runs. This is
              useful for compiling software or doing other one-off setup.
              The --setup option can only be specified once.
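              For example, a hypothetical program could be compiled once
              with --setup and the resulting binary benchmarked afterwards
              (file names are placeholders):

                     hyperfine --setup 'g++ -O2 main.cpp -o main' './main'
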
       -p, --prepare cmd...
              Execute cmd before each timing run. This is useful for
              clearing disk caches, for example. The --prepare option can
              be specified once for all commands or multiple times, once
              for each command. In the latter case, each preparation
              command will be run prior to the corresponding benchmark
              command.
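              For example, one common way to benchmark with cold disk
              caches on Linux (this requires root privileges) is:

                     hyperfine --prepare \
                         'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' \
                         'grep -R TODO *'
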
       -c, --cleanup cmd
              Execute cmd after the completion of all benchmarking runs for
              each individual command to be benchmarked. This is useful if
              the commands to be benchmarked produce artifacts that need to
              be cleaned up.
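              For example, a temporary file produced by a hypothetical
              benchmarked command could be removed after its runs complete
              (file names are placeholders):

                     hyperfine --cleanup 'rm -f sorted.tmp' \
                         'sort data.csv -o sorted.tmp'
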
       -P, --parameter-scan var min max
              Perform benchmark runs for each value in the range min..max.
              Replaces the string '{var}' in each command by the current
              parameter value.

              Example:
                     hyperfine -P threads 1 8 'make -j {threads}'

              This performs benchmarks for 'make -j 1', 'make -j 2', ...,
              'make -j 8'.
       -D, --parameter-step-size delta
              This argument requires --parameter-scan to be specified as
              well. Traverse the range min..max in steps of delta.

              Example:
                     hyperfine -P delay 0.3 0.7 -D 0.2 'sleep {delay}'

              This performs benchmarks for 'sleep 0.3', 'sleep 0.5' and
              'sleep 0.7'.
       -L, --parameter-list var values
              Perform benchmark runs for each value in the comma-separated
              list of values. Replaces the string '{var}' in each command
              by the current parameter value.

              Example:
                     hyperfine -L compiler gcc,clang '{compiler} -O2 main.cpp'

              This performs benchmarks for 'gcc -O2 main.cpp' and 'clang
              -O2 main.cpp'.
       --style type
              Set output style type (default: auto). Set this to 'basic'
              to disable output coloring and interactive elements. Set it
              to 'full' to enable all effects even if no interactive
              terminal was detected. Set this to 'nocolor' to keep the
              interactive output without any colors. Set this to 'color'
              to keep the colors without any interactive output. Set this
              to 'none' to disable all output of the tool. In hyperfine
              versions v0.4.0..v1.12.0 this option had the short option
              -s, which is now used by --setup.
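              For example, colors and interactive progress output could be
              disabled when logging results to a file (an illustrative
              invocation):

                     hyperfine --style basic 'sleep 0.5'
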
       -S, --shell shell
              Set the shell to use for executing benchmarked commands.
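              For example, a command could be benchmarked under zsh instead
              of the default shell (assuming zsh is installed; an
              illustrative invocation):

                     hyperfine --shell zsh 'echo hello'
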
       -i, --ignore-failure
              Ignore non-zero exit codes of the benchmarked programs.
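              For example, a grep invocation that finds no matches exits
              with a non-zero status; --ignore-failure allows it to be
              benchmarked anyway (an illustrative invocation, the file name
              is a placeholder):

                     hyperfine -i 'grep does-not-exist README.md'
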
       -u, --time-unit unit
              Set the time unit to be used. Default: second. Possible
              values: millisecond, second.
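              For example, results for a fast command could be reported in
              milliseconds (an illustrative invocation):

                     hyperfine --time-unit millisecond 'sleep 0.05'
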
       --export-asciidoc file
              Export the timing summary statistics as an AsciiDoc table to
              the given file.

       --export-csv file
              Export the timing summary statistics as CSV to the given
              file. If you need the timing results for each individual
              run, use the JSON export format.

       --export-json file
              Export the timing summary statistics and timings of
              individual runs as JSON to the given file.
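              For example, both the summary statistics and the per-run
              timings could be written to a JSON file (an illustrative
              invocation; the file name is a placeholder):

                     hyperfine --export-json results.json 'sleep 0.3'
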
       --export-markdown file
              Export the timing summary statistics as a Markdown table to
              the given file.
       --show-output
              Print the stdout and stderr of the benchmarked program
              instead of suppressing it. This will increase the time it
              takes for benchmarks to run, so it should only be used for
              debugging purposes or when trying to benchmark output speed.

       -n, --command-name name
              Identify a command with the given name. Commands and names
              are paired in the same order: the first command executed
              gets the first name passed as an option.
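              For example, two benchmarked commands could be given readable
              names for the reported results (an illustrative invocation):

                     hyperfine -n fast -n slow 'sleep 0.1' 'sleep 0.3'
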
       -h, --help
              Print help message.

       -V, --version
              Show version information.

EXAMPLES

       Basic benchmark of 'find . -name todo.txt':
              hyperfine 'find . -name todo.txt'

       Perform benchmarks for 'sleep 0.2' and 'sleep 3.2' with a minimum of
       5 runs each:
              hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'

       Perform a benchmark of 'grep' with a warm disk cache by executing 3
       runs up front that are not part of the measurement:
              hyperfine --warmup 3 'grep -R TODO *'

       Export the results of a parameter scan benchmark to a Markdown table:
              hyperfine --export-markdown output.md --parameter-scan time 1 5 'sleep {time}'

       Demonstrate when each of --setup, --prepare, cmd and --cleanup will
       run:
              hyperfine -L n 1,2 -r 2 --show-output \
                   --setup 'echo setup n={n}' \
                   --prepare 'echo prepare n={n}' \
                   --cleanup 'echo cleanup n={n}' \
                   'echo command n={n}'

AUTHOR

       David Peter (sharkdp)

       Source, bug tracker, and additional information can be found on
       GitHub at: https://github.com/sharkdp/hyperfine
                                                                  HYPERFINE(1)