PERF-STAT(1)                      perf Manual                      PERF-STAT(1)



NAME
       perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
       perf stat [-e <EVENT> | --event=EVENT] [-a] <command>
       perf stat [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
       perf stat [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
       perf stat report [-i file]

DESCRIPTION
       This command runs a command and gathers performance counter statistics
       from it.

OPTIONS
       <command>...
           Any command you can specify in a shell.

       record
           See STAT RECORD.

       report
           See STAT REPORT.

       -e, --event=
           Select the PMU event. Selection can be:

           ·   a symbolic event name (use perf list to list all events)

           ·   a raw PMU event (eventsel+umask) in the form of rNNN where NNN
               is a hexadecimal event descriptor.

           ·   a symbolically formed event like pmu/param1=0x3,param2/ where
               param1 and param2 are defined as formats for the PMU in
               /sys/bus/event_source/devices/<pmu>/format/*

           ·   a symbolically formed event like
               pmu/config=M,config1=N,config2=K/ where M, N, K are numbers (in
               decimal, hex, octal format). Acceptable values for each of the
               config, config1 and config2 parameters are defined by the
               corresponding entries in
               /sys/bus/event_source/devices/<pmu>/format/*

           Note that the last two syntaxes support prefix and glob matching in
           the PMU name to simplify creation of events across multiple
           instances of the same type of PMU in large systems (e.g. memory
           controller PMUs). Multiple PMU instances are typical for uncore
           PMUs, so the prefix 'uncore_' is also ignored when performing this
           match.

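           For example (illustrative invocations; the raw event code and the
           uncore_imc PMU name below are placeholders that depend on the local
           hardware and on what perf list reports):

               perf stat -e cycles,instructions -- make
               perf stat -e r1a8 -a sleep 1
               perf stat -e uncore_imc/config=0x1/ -a sleep 1
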
       -i, --no-inherit
           child tasks do not inherit counters

       -p, --pid=<pid>
           stat events on existing process id (comma separated list)

       -t, --tid=<tid>
           stat events on existing thread id (comma separated list)

       -a, --all-cpus
           system-wide collection from all CPUs (default if no target is
           specified)

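           For instance, to count events for an already running process for
           ten seconds, or system-wide for the same period (the pid 1234 is a
           placeholder; sleep only serves as a timer here):

               perf stat -p 1234 sleep 10
               perf stat -a sleep 10
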
       -c, --scale
           scale/normalize counter values

       -d, --detailed
           print more detailed statistics, can be specified up to 3 times

               -d:       detailed events, L1 and LLC data cache
               -d -d:    more detailed events, dTLB and iTLB events
               -d -d -d: very detailed events, adding prefetch events

       -r, --repeat=<n>
           repeat command and print average + stddev (max: 100). 0 means
           forever.

       -B, --big-num
           print large numbers with thousands' separators according to locale

       -C, --cpu=
           Count only on the list of CPUs provided. Multiple CPUs can be
           provided as a comma-separated list with no space: 0,1. Ranges of
           CPUs are specified with -: 0-2. In per-thread mode, this option is
           ignored. The -a option is still necessary to activate system-wide
           monitoring. Default is to count on all CPUs.

       -A, --no-aggr
           Do not aggregate counts across all monitored CPUs.

       -n, --null
           null run - don't start any counters

       -v, --verbose
           be more verbose (show counter open errors, etc)

       -x SEP, --field-separator SEP
           print counts using a CSV-style output to make it easy to import
           directly into spreadsheets. Columns are separated by the string
           specified in SEP.

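           For example, to produce semicolon-separated output for one second
           of cycle counts (the backslash only keeps the shell from
           interpreting the semicolon):

               perf stat -x \; -e cycles -- sleep 1
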
       --table
           Display time for each run (-r option), in a table format, e.g.:

               $ perf stat --null -r 5 --table perf bench sched pipe

                Performance counter stats for 'perf bench sched pipe' (5 runs):

                          # Table of individual measurements:
                          5.189 (-0.293) #
                          5.189 (-0.294) #
                          5.186 (-0.296) #
                          5.663 (+0.181) ##
                          6.186 (+0.703) ####

                          # Final result:
                          5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

       -G name, --cgroup name
           monitor only in the container (cgroup) called "name". This option
           is available only in per-cpu mode. The cgroup filesystem must be
           mounted. All threads belonging to container "name" are monitored
           when they run on the monitored CPUs. Multiple cgroups can be
           provided. Each cgroup is applied to the corresponding event, i.e.,
           first cgroup to first event, second cgroup to second event and so
           on. It is possible to provide an empty cgroup (monitor all the
           time) using, e.g., -G foo,,bar. Cgroups must have corresponding
           events, i.e., they always refer to events defined earlier on the
           command line. If the user wants to track multiple events for a
           specific cgroup, the user can use -e e1 -e e2 -G foo,foo or just
           use -e e1 -e e2 -G foo.

           To monitor, say, cycles both for a cgroup and system-wide, this
           command line can be used:

               perf stat -e cycles -G cgroup_name -a -e cycles

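           For example, assuming a cgroup named "foo" already exists under the
           mounted cgroup filesystem, the following counts cycles and
           instructions only for tasks running in that cgroup:

               perf stat -e cycles,instructions -G foo,foo -a sleep 5
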
       -o file, --output file
           Print the output into the designated file.

       --append
           Append to the output file designated with the -o option. Ignored if
           -o is not specified.

       --log-fd
           Log output to fd, instead of stderr. Complementary to --output, and
           mutually exclusive with it. --append may be used here. Examples:

               3>results perf stat --log-fd 3 -- $cmd
               3>>results perf stat --log-fd 3 --append -- $cmd

       --pre, --post
           Pre and post measurement hooks, e.g.:

               perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

       -I msecs, --interval-print msecs
           Print count deltas every N milliseconds (minimum: 1ms). The
           overhead percentage could be high in some cases, for instance with
           small, sub 100ms intervals. Use with caution. Example:

               perf stat -I 1000 -e cycles -a sleep 5

       --interval-count times
           Print count deltas for a fixed number of times. This option should
           be used together with the "-I" option. Example:

               perf stat -I 1000 --interval-count 2 -e cycles -a

       --interval-clear
           Clear the screen before next interval.

       --timeout msecs
           Stop the perf stat session and print count deltas after N
           milliseconds (minimum: 10 ms). This option is not supported with
           the "-I" option. Example:

               perf stat --timeout 2000 -e cycles -a

       --metric-only
           Only print computed metrics. Print them in a single line. Don't
           show any raw values. Not supported with --per-thread.

       --per-socket
           Aggregate counts per processor socket for system-wide mode
           measurements. This is a useful mode to detect imbalance between
           sockets. To enable this mode, use --per-socket in addition to -a
           (system-wide). The output includes the socket number and the number
           of online processors on that socket. This is useful to gauge the
           amount of aggregation.

       --per-core
           Aggregate counts per physical processor for system-wide mode
           measurements. This is a useful mode to detect imbalance between
           physical cores. To enable this mode, use --per-core in addition to
           -a (system-wide). The output includes the core number and the
           number of online logical processors on that physical processor.

       --per-thread
           Aggregate counts per monitored thread, when monitoring threads (-t
           option) or processes (-p option).

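           For example, to see per-core cycle counts across the whole system
           for five seconds (system-wide monitoring requires the appropriate
           privileges):

               perf stat -a --per-core -e cycles sleep 5
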
       -D msecs, --delay msecs
           After starting the program, wait msecs before measuring. This is
           useful to filter out the startup phase of the program, which is
           often very different.

       -T, --transaction
           Print statistics of transactional execution if supported.

STAT RECORD
       Stores stat data into perf data file.

       -o file, --output file
           Output file name.

STAT REPORT
       Reads and reports stat data from perf data file.

       -i file, --input file
           Input file name.

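           For example, a record/report round trip could look as follows (the
           file name stat.data is an arbitrary choice; perf.data is used when
           -o is omitted):

               perf stat record -o stat.data -- make
               perf stat report -i stat.data
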
       --per-socket
           Aggregate counts per processor socket for system-wide mode
           measurements.

       --per-core
           Aggregate counts per physical processor for system-wide mode
           measurements.

       -M, --metrics
           Print metrics or metricgroups specified in a comma separated list.
           For a group all metrics from the group are added. The events from
           the metrics are automatically measured. See perf list output for
           the possible metrics and metricgroups.

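           For example (metric and metricgroup names vary by CPU model; check
           perf list for what is available on the local machine):

               perf stat -M TopDownL1 -a sleep 1
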
       -A, --no-aggr
           Do not aggregate counts across all monitored CPUs.

       --topdown
           Print top down level 1 metrics if supported by the CPU. This
           allows determining bottlenecks in the CPU pipeline for CPU bound
           workloads, by breaking down the cycles consumed into frontend
           bound, backend bound, bad speculation and retiring.

           Frontend bound means that the CPU cannot fetch and decode
           instructions fast enough. Backend bound means that computation or
           memory access is the bottleneck. Bad Speculation means that the
           CPU wasted cycles due to branch mispredictions and similar issues.
           Retiring means that the CPU computed without an apparent
           bottleneck. The bottleneck is only the real bottleneck if the
           workload is actually bound by the CPU and not by something else.

           For best results it is usually a good idea to use it with interval
           mode like -I 1000, as the bottleneck of workloads can change
           often.

           The top down metrics are collected per core instead of per CPU
           thread. Per core mode is automatically enabled and -a (global
           monitoring) is needed, requiring root rights or
           kernel.perf_event_paranoid = -1.

           Topdown uses the full Performance Monitoring Unit, and for best
           results needs the NMI watchdog disabled (as root):
           echo 0 > /proc/sys/kernel/nmi_watchdog. Otherwise the bottlenecks
           may be inconsistent on workloads with changing phases.

           This enables --metric-only, unless overridden with
           --no-metric-only.

           To interpret the results you usually need to know which CPUs the
           workload runs on. If needed, the CPUs can be forced with taskset.

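           A typical invocation might look as follows (run as root, or with
           kernel.perf_event_paranoid set to -1; sleep 10 stands in for the
           measurement window while the workload of interest runs):

               echo 0 > /proc/sys/kernel/nmi_watchdog
               perf stat --topdown -a -I 1000 sleep 10
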
       --no-merge
           Do not merge results from same PMUs.

           When multiple events are created from a single event
           specification, stat will, by default, aggregate the event counts
           and show the result in a single row. This option disables that
           behavior and shows the individual events and counts.

           Multiple events are created from a single event specification
           when:

           1. Prefix or glob matching is used for the PMU name.

           2. Aliases, which are listed immediately after the Kernel PMU
              events by perf list, are used.

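           For example (the alias and PMU name are illustrative; on a system
           with several memory controller PMU instances the specification
           below expands to one event per instance):

               perf stat -e uncore_imc/cas_count_read/ -a sleep 1
               perf stat -e uncore_imc/cas_count_read/ -a --no-merge sleep 1

           The first command prints a single merged row; the second prints
           one row per uncore_imc instance.
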
       --smi-cost
           Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

           During the measurement, /sys/devices/cpu/freeze_on_smi will be set
           to freeze core counters on SMI. The aperf counter will not be
           affected by the setting. The cost of SMI can be measured as
           (aperf - unhalted core cycles).

           In practice, the percentage of SMI cycles is very useful for
           performance oriented analysis. --metric-only will be applied by
           default. The output is SMI cycles%, which equals
           (aperf - unhalted core cycles) / aperf.

           Users who want to get the actual values can apply
           --no-metric-only.

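           For example, to report the SMI cost observed over a ten second
           window (system-wide, so the appropriate privileges are needed):

               perf stat --smi-cost -a sleep 10
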
EXAMPLES
       $ perf stat -- make

          Performance counter stats for 'make':

             83723.452481      task-clock:u (msec)      #    1.004 CPUs utilized
                        0      context-switches:u       #    0.000 K/sec
                        0      cpu-migrations:u         #    0.000 K/sec
                3,228,188      page-faults:u            #    0.039 M/sec
          229,570,665,834      cycles:u                 #    2.742 GHz
          313,163,853,778      instructions:u           #    1.36  insn per cycle
           69,704,684,856      branches:u               #  832.559 M/sec
            2,078,861,393      branch-misses:u          #    2.98% of all branches

             83.409183620 seconds time elapsed

             74.684747000 seconds user
              8.739217000 seconds sys

TIMINGS
       As displayed in the example above we can display 3 types of timings.
       We always display the time the counters were enabled/alive:

              83.409183620 seconds time elapsed

       For workload sessions we also display the time the workloads spent in
       user/system lands:

              74.684747000 seconds user
               8.739217000 seconds sys

       Those times are the very same as displayed by the time tool.

CSV FORMAT
       With -x, perf stat is able to output a not-quite-CSV format output.
       Commas in the output are not put into "". To make it easy to parse, it
       is recommended to use a different character, like -x \;

       The fields are in this order:

       ·   optional usec time stamp in fractions of second (with -I xxx)

       ·   optional CPU, core, or socket identifier

       ·   optional number of logical CPUs aggregated

       ·   counter value

       ·   unit of the counter value or empty

       ·   event name

       ·   run time of counter

       ·   percentage of measurement time the counter was running

       ·   optional variance if multiple values are collected with -r

       ·   optional metric value

       ·   optional unit of metric

       Additional metrics may be printed with all earlier fields being empty.

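       For example, the following prints the fields listed above, separated by
       semicolons, once per second for three seconds of system-wide cycle
       counting (the measured values will of course differ from run to run):

           perf stat -x \; -I 1000 -e cycles -a sleep 3
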
SEE ALSO
       perf-top(1), perf-list(1)



perf                              09/24/2019                      PERF-STAT(1)