HYPERFINE(1)                General Commands Manual               HYPERFINE(1)

NAME

       hyperfine - command-line benchmarking tool


SYNOPSIS

       hyperfine [-ihV] [-w warmupruns] [-r runs] [-p cmd...] [-c cmd]
       [-s style] [cmd...]


DESCRIPTION

       A command-line benchmarking tool which includes:

              * Statistical analysis across multiple runs
              * Support for arbitrary shell commands
              * Constant feedback about the benchmark progress and current
                estimates
              * Warmup runs that can be executed before the actual benchmark
              * Cache-clearing commands that can be set up before each
                timing run
              * Statistical outlier detection to detect interference from
                other programs and caching effects
              * Export of results to various formats: CSV, JSON, Markdown,
                AsciiDoc
              * Parameterized benchmarks (e.g. varying the number of
                threads)


OPTIONS

       -w, --warmup warmupruns
              Perform warmupruns (number) before the actual benchmark. This
              can be used to fill (disk) caches for I/O-heavy programs.

       -m, --min-runs minruns
              Perform at least minruns (number) runs for each command.
              Default: 10.

       -M, --max-runs maxruns
              Perform at most maxruns (number) runs for each command.
              Default: no limit.

       -r, --runs runs
              Perform exactly runs (number) runs for each command. If this
              option is not specified, hyperfine automatically determines
              the number of runs.

       -p, --prepare cmd...
              Execute cmd before each timing run. This is useful for
              clearing disk caches, for example. The --prepare option can
              be specified once for all commands or multiple times, once
              for each command. In the latter case, each preparation
              command will be run prior to the corresponding benchmark
              command.

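              Example (a Linux-specific illustration of clearing the page
              cache before each run; dropping caches requires root
              privileges):
                     hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO *'
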
       -c, --cleanup cmd
              Execute cmd after all benchmarking runs for each individual
              command have completed. This is useful if the benchmarked
              commands produce artifacts that need to be cleaned up.

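              Example (assuming the benchmarked build produces artifacts
              that 'make clean' removes):
                     hyperfine --cleanup 'make clean' 'make'
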
       -P, --parameter-scan var min max
              Perform benchmark runs for each value in the range min..max.
              Replaces the string '{var}' in each command with the current
              parameter value.

              Example:
                     hyperfine -P threads 1 8 'make -j {threads}'

              This performs benchmarks for 'make -j 1', 'make -j 2', ...,
              'make -j 8'.

       -D, --parameter-step-size delta
              Traverse the range min..max in steps of delta. This option
              requires --parameter-scan to be specified as well.

              Example:
                     hyperfine -P delay 0.3 0.7 -D 0.2 'sleep {delay}'

              This performs benchmarks for 'sleep 0.3', 'sleep 0.5', and
              'sleep 0.7'.

       -L, --parameter-list var values
              Perform benchmark runs for each value in the comma-separated
              list of values. Replaces the string '{var}' in each command
              with the current parameter value.

              Example:
                     hyperfine -L compiler gcc,clang '{compiler} -O2 main.cpp'

              This performs benchmarks for 'gcc -O2 main.cpp' and
              'clang -O2 main.cpp'.

       -s, --style type
              Set the output style type (default: auto). Set this to
              'basic' to disable output coloring and interactive elements.
              Set it to 'full' to enable all effects even if no
              interactive terminal was detected. Set this to 'nocolor' to
              keep the interactive output without any colors. Set this to
              'color' to keep the colors without any interactive output.
              Set this to 'none' to disable all output of the tool.

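              Example (forcing plain output, e.g. when writing to a log
              file or running in CI):
                     hyperfine --style basic 'sleep 0.5'
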
       -S, --shell shell
              Set the shell to use for executing benchmarked commands.

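              Example (running the benchmarked command with zsh instead of
              the default shell):
                     hyperfine --shell zsh 'sleep 0.5'
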
       -i, --ignore-failure
              Ignore non-zero exit codes of the benchmarked programs.

       -u, --time-unit unit
              Set the time unit to be used. Default: second. Possible
              values: millisecond, second.

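              Example (reporting results in milliseconds):
                     hyperfine --time-unit millisecond 'sleep 0.5'
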
       --export-asciidoc file
              Export the timing summary statistics as an AsciiDoc table to
              the given file.

       --export-csv file
              Export the timing summary statistics as CSV to the given
              file. If you need the timing results for each individual
              run, use the JSON export format.

       --export-json file
              Export the timing summary statistics and the timings of
              individual runs as JSON to the given file.

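              Example (the jq filter assumes the JSON layout of recent
              hyperfine versions, with per-run timings in a
              'results[].times' array):
                     hyperfine --export-json results.json 'sleep 0.3'
                     jq '.results[0].times' results.json
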
       --export-markdown file
              Export the timing summary statistics as a Markdown table to
              the given file.

       --show-output
              Print the stdout and stderr of the benchmarked commands
              instead of suppressing them. This will increase the time it
              takes for benchmarks to run, so it should only be used for
              debugging purposes or when trying to benchmark output speed.

       -n, --command-name name
              Identify a command with the given name. Commands and names
              are paired in the same order: the first command executed
              gets the first name passed as an option.

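              Example (labeling the two commands 'fast' and 'slow' in the
              output and in exported results):
                     hyperfine -n fast -n slow 'sleep 0.1' 'sleep 0.2'
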
       -h, --help
              Print the help message.

       -V, --version
              Show version information.


EXAMPLES

       Basic benchmark of 'find . -name todo.txt':
              hyperfine 'find . -name todo.txt'

       Perform benchmarks for 'sleep 0.2' and 'sleep 3.2' with a minimum
       of 5 runs each:
              hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'

       Perform a benchmark of 'grep' with a warm disk cache by executing
       3 runs up front that are not part of the measurement:
              hyperfine --warmup 3 'grep -R TODO *'

       Export the results of a parameter scan benchmark to a Markdown
       table:
              hyperfine --export-markdown output.md --parameter-scan time 1 5 'sleep {time}'


AUTHOR

       David Peter (sharkdp)

       Source, bug tracker, and additional information can be found on
       GitHub at: https://github.com/sharkdp/hyperfine

                                                                  HYPERFINE(1)