PERF-STAT(1)                      perf Manual                     PERF-STAT(1)


NAME
       perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
       perf stat [-e <EVENT> | --event=EVENT] [-a] <command>
       perf stat [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
       perf stat [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
       perf stat report [-i file]

DESCRIPTION
       This command runs a command and gathers performance counter
       statistics from it.

OPTIONS
       <command>...
           Any command you can specify in a shell.

       record
           See STAT RECORD.

       report
           See STAT REPORT.

       -e, --event=
           Select the PMU event. Selection can be:

           · a symbolic event name (use perf list to list all events)

           · a raw PMU event (eventsel+umask) in the form of rNNN where NNN
             is a hexadecimal event descriptor.

           · a symbolically formed event like pmu/param1=0x3,param2/ where
             param1 and param2 are defined as formats for the PMU in
             /sys/bus/event_source/devices/<pmu>/format/*

             'percore' is an event qualifier that sums up the event counts
             for both hardware threads in a core. For example:
             perf stat -A -a -e cpu/event,percore=1/,otherevent ...

           · a symbolically formed event like
             pmu/config=M,config1=N,config2=K/ where M, N, K are numbers (in
             decimal, hex, octal format). Acceptable values for each of the
             config, config1 and config2 parameters are defined by
             corresponding entries in
             /sys/bus/event_source/devices/<pmu>/format/*

           Note that the last two syntaxes support prefix and glob matching
           in the PMU name to simplify creation of events across multiple
           instances of the same type of PMU in large systems (e.g. memory
           controller PMUs). Multiple PMU instances are typical for uncore
           PMUs, so the prefix 'uncore_' is also ignored when performing
           this match.
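           For example (the raw event descriptor and the cpu/... format
           fields below are illustrative and hardware dependent; check perf
           list and /sys/bus/event_source/devices/<pmu>/format/ on the
           target system):

               perf stat -e cycles,instructions -- make
               perf stat -e r003c -a sleep 1
               perf stat -e cpu/event=0x3c,umask=0x0/ -a sleep 1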

       -i, --no-inherit
           child tasks do not inherit counters

       -p, --pid=<pid>
           stat events on existing process id (comma separated list)

       -t, --tid=<tid>
           stat events on existing thread id (comma separated list)

       -a, --all-cpus
           system-wide collection from all CPUs (default if no target is
           specified)

       --no-scale
           Don’t scale/normalize counter values

       -d, --detailed
           print more detailed statistics, can be specified up to 3 times

               -d:        detailed events, L1 and LLC data cache
               -d -d:     more detailed events, dTLB and iTLB events
               -d -d -d:  very detailed events, adding prefetch events

       -r, --repeat=<n>
           repeat command and print average + stddev (max: 100). 0 means
           forever.

       -B, --big-num
           print large numbers with thousands' separators according to
           locale

       -C, --cpu=
           Count only on the list of CPUs provided. Multiple CPUs can be
           provided as a comma-separated list with no space: 0,1. Ranges of
           CPUs are specified with -: 0-2. In per-thread mode, this option
           is ignored. The -a option is still necessary to activate
           system-wide monitoring. Default is to count on all CPUs.
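           For example, to count cycles only on CPUs 0 through 2 for the
           duration of a workload:

               perf stat -C 0-2 -a -e cycles sleep 1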

       -A, --no-aggr
           Do not aggregate counts across all monitored CPUs.

       -n, --null
           null run - don’t start any counters

       -v, --verbose
           be more verbose (show counter open errors, etc)

       -x SEP, --field-separator SEP
           print counts using a CSV-style output to make it easy to import
           directly into spreadsheets. Columns are separated by the string
           specified in SEP.

       --table
           Display time for each run (-r option), in a table format, e.g.:

               $ perf stat --null -r 5 --table perf bench sched pipe

                Performance counter stats for 'perf bench sched pipe' (5 runs):

                  # Table of individual measurements:
                  5.189 (-0.293) #
                  5.189 (-0.294) #
                  5.186 (-0.296) #
                  5.663 (+0.181) ##
                  6.186 (+0.703) ####

                  # Final result:
                  5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

       -G name, --cgroup name
           monitor only in the container (cgroup) called "name". This option
           is available only in per-cpu mode. The cgroup filesystem must be
           mounted. All threads belonging to container "name" are monitored
           when they run on the monitored CPUs. Multiple cgroups can be
           provided. Each cgroup is applied to the corresponding event,
           i.e., first cgroup to first event, second cgroup to second event
           and so on. It is possible to provide an empty cgroup (monitor all
           the time) using, e.g., -G foo,,bar. Cgroups must have
           corresponding events, i.e., they always refer to events defined
           earlier on the command line. If the user wants to track multiple
           events for a specific cgroup, the user can use -e e1 -e e2 -G
           foo,foo or just use -e e1 -e e2 -G foo.

           To monitor, say, cycles both for a cgroup and system-wide, this
           command line can be used: perf stat -e cycles -G cgroup_name -a
           -e cycles.
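           For example, assuming an existing cgroup named foo (a
           hypothetical name), cycles and instructions can be counted only
           for its tasks with:

               perf stat -e cycles -e instructions -G foo -a sleep 5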

       -o file, --output file
           Print the output into the designated file.

       --append
           Append to the output file designated with the -o option. Ignored
           if -o is not specified.

       --log-fd
           Log output to fd, instead of stderr. Complementary to --output,
           and mutually exclusive with it. --append may be used here.
           Examples:

               3>results perf stat --log-fd 3 -- $cmd
               3>>results perf stat --log-fd 3 --append -- $cmd

       --pre, --post
           Pre and post measurement hooks, e.g.:

               perf stat --repeat 10 --null --sync --pre 'make -s
               O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/
               bzImage

       -I msecs, --interval-print msecs
           Print count deltas every N milliseconds (minimum: 1ms). The
           overhead percentage could be high in some cases, for instance
           with small, sub 100ms intervals. Use with caution. Example:

               perf stat -I 1000 -e cycles -a sleep 5

       --interval-count times
           Print count deltas for a fixed number of times. This option
           should be used together with the "-I" option. Example:

               perf stat -I 1000 --interval-count 2 -e cycles -a

       --interval-clear
           Clear the screen before next interval.

       --timeout msecs
           Stop the perf stat session and print count deltas after N
           milliseconds (minimum: 10 ms). This option is not supported with
           the "-I" option. Example:

               perf stat --timeout 2000 -e cycles -a

       --metric-only
           Only print computed metrics. Print them in a single line. Don’t
           show any raw values. Not supported with --per-thread.

       --per-socket
           Aggregate counts per processor socket for system-wide mode
           measurements. This is a useful mode to detect imbalance between
           sockets. To enable this mode, use --per-socket in addition to -a
           (system-wide). The output includes the socket number and the
           number of online processors on that socket. This is useful to
           gauge the amount of aggregation.

       --per-die
           Aggregate counts per processor die for system-wide mode
           measurements. This is a useful mode to detect imbalance between
           dies. To enable this mode, use --per-die in addition to -a
           (system-wide). The output includes the die number and the number
           of online processors on that die. This is useful to gauge the
           amount of aggregation.

       --per-core
           Aggregate counts per physical processor for system-wide mode
           measurements. This is a useful mode to detect imbalance between
           physical cores. To enable this mode, use --per-core in addition
           to -a (system-wide). The output includes the core number and the
           number of online logical processors on that physical processor.
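           For example, to see per-core cycle counts across the whole system
           for one second:

               perf stat --per-core -a -e cycles sleep 1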

       --per-thread
           Aggregate counts per monitored thread, when monitoring threads
           (-t option) or processes (-p option).

       --per-node
           Aggregate counts per NUMA node for system-wide mode measurements.
           This is a useful mode to detect imbalance between NUMA nodes. To
           enable this mode, use --per-node in addition to -a (system-wide).

       -D msecs, --delay msecs
           After starting the program, wait msecs before measuring. This is
           useful to filter out the startup phase of the program, which is
           often very different.
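           For example, to skip the first second of a workload before
           counting (./workload is a placeholder for your own program):

               perf stat -D 1000 -e cycles,instructions -- ./workload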

       -T, --transaction
           Print statistics of transactional execution if supported.

STAT RECORD
       Stores stat data into perf data file.

       -o file, --output file
           Output file name.

STAT REPORT
       Reads and reports stat data from perf data file.

       -i file, --input file
           Input file name.
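           For example (stat.data below is an arbitrary file name), counts
           can be saved with perf stat record and summarized later with perf
           stat report:

               perf stat record -o stat.data -e cycles -- sleep 1
               perf stat report -i stat.data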

       --per-socket
           Aggregate counts per processor socket for system-wide mode
           measurements.

       --per-die
           Aggregate counts per processor die for system-wide mode
           measurements.

       --per-core
           Aggregate counts per physical processor for system-wide mode
           measurements.

       -M, --metrics
           Print metrics or metricgroups specified in a comma separated
           list. For a group all metrics from the group are added. The
           events from the metrics are automatically measured. See perf list
           output for the possible metrics and metricgroups.
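           For example, assuming perf list shows a metric group named
           TopDownL1 on the system (metric and group names vary by CPU and
           perf version):

               perf stat -M TopDownL1 -a sleep 2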

       -A, --no-aggr
           Do not aggregate counts across all monitored CPUs.

       --topdown
           Print top down level 1 metrics if supported by the CPU. This
           allows determining bottlenecks in the CPU pipeline for CPU bound
           workloads, by breaking down the cycles consumed into frontend
           bound, backend bound, bad speculation and retiring.

       Frontend bound means that the CPU cannot fetch and decode
       instructions fast enough. Backend bound means that computation or
       memory access is the bottleneck. Bad Speculation means that the CPU
       wasted cycles due to branch mispredictions and similar issues.
       Retiring means that the CPU computed without an apparent bottleneck.
       The bottleneck is only the real bottleneck if the workload is
       actually bound by the CPU and not by something else.

       For best results it is usually a good idea to use it with interval
       mode like -I 1000, as the bottleneck of workloads can change often.

       The top down metrics are collected per core instead of per CPU
       thread. Per core mode is automatically enabled and -a (global
       monitoring) is needed, requiring root rights or
       kernel.perf_event_paranoid=-1.

       Topdown uses the full Performance Monitoring Unit, and for best
       results needs the NMI watchdog to be disabled (as root):

           echo 0 > /proc/sys/kernel/nmi_watchdog

       Otherwise the bottlenecks may be inconsistent on workloads with
       changing phases.

       This enables --metric-only, unless overridden with --no-metric-only.

       To interpret the results it is usually necessary to know on which
       CPUs the workload runs. If needed, the workload can be pinned to
       specific CPUs using taskset.
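       For example, a typical invocation under these constraints (NMI
       watchdog disabled as shown above, system-wide mode, interval output)
       might be:

           perf stat --topdown -a -I 1000 sleep 10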

       --no-merge
           Do not merge results from same PMUs.

       When multiple events are created from a single event specification,
       stat will, by default, aggregate the event counts and show the result
       in a single row. This option disables that behavior and shows the
       individual events and counts.

       Multiple events are created from a single event specification when:
       1. Prefix or glob matching is used for the PMU name. 2. Aliases,
       which are listed immediately after the Kernel PMU events by perf
       list, are used.
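       For example, on a system with several instances of an uncore memory
       controller PMU (the uncore_imc PMU and the cas_count_read alias below
       are illustrative and hardware dependent), per-instance counts can be
       shown with:

           perf stat -e uncore_imc/cas_count_read/ --no-merge -a sleep 1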

       --smi-cost
           Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

       During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
       freeze core counters on SMI. The aperf counter will not be affected
       by the setting. The cost of SMI can be measured by (aperf - unhalted
       core cycles).

       In practice, the percentage of SMI cycles is very useful for
       performance oriented analysis. --metric-only will be applied by
       default. The output is SMI cycles%, which equals (aperf - unhalted
       core cycles) / aperf.

       Users who want to get the actual values can apply --no-metric-only.

       --all-kernel
           Configure all used events to run in kernel space.

       --all-user
           Configure all used events to run in user space.
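           For example, to count only user-space cycles and instructions of
           a workload (./myprog is a placeholder for your own program):

               perf stat --all-user -e cycles,instructions -- ./myprog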

EXAMPLES
       $ perf stat -- make

        Performance counter stats for 'make':

              83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                 3,228,188      page-faults:u             #    0.039 M/sec
           229,570,665,834      cycles:u                  #    2.742 GHz
           313,163,853,778      instructions:u            #    1.36  insn per cycle
            69,704,684,856      branches:u                #  832.559 M/sec
             2,078,861,393      branch-misses:u           #    2.98% of all branches

              83.409183620 seconds time elapsed

              74.684747000 seconds user
               8.739217000 seconds sys

TIMINGS
       As displayed in the example above, we can display 3 types of timings.
       We always display the time the counters were enabled/alive:

           83.409183620 seconds time elapsed

       For workload sessions we also display the time the workload spent in
       user and system land:

           74.684747000 seconds user
           8.739217000 seconds sys

       Those times are the very same as displayed by the time tool.

CSV FORMAT
       With -x, perf stat is able to output a not-quite-CSV format output;
       commas in the output are not put into "". To make it easy to parse,
       it is recommended to use a different character like -x \;
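       For example, to emit semicolon-separated interval counts that are
       easy to post-process:

           perf stat -x \; -I 1000 -e cycles -a sleep 3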

       The fields are in this order:

           · optional usec time stamp in fractions of second (with -I xxx)

           · optional CPU, core, or socket identifier

           · optional number of logical CPUs aggregated

           · counter value

           · unit of the counter value or empty

           · event name

           · run time of counter

           · percentage of measurement time the counter was running

           · optional variance if multiple values are collected with -r

           · optional metric value

           · optional unit of metric

       Additional metrics may be printed with all earlier fields being
       empty.

SEE ALSO
       perf-top(1), perf-list(1)



perf                              04/23/2020                      PERF-STAT(1)