cpudist(8)                  System Manager's Manual                 cpudist(8)

NAME
       cpudist - On- and off-CPU task time as a histogram.

SYNOPSIS
       cpudist [-h] [-O] [-T] [-m] [-P] [-L] [-p PID] [interval] [count]
DESCRIPTION
       This measures the time a task spends on the CPU before being
       descheduled, and shows the times as a histogram. Tasks that spend a
       very short time on the CPU can indicate excessive context switching
       and poor workload distribution, and may point to a shared source of
       contention that keeps tasks switching in and out as it becomes
       available (such as a mutex).

       Similarly, the tool can also measure the time a task spends off-CPU
       before it is scheduled again. This can help identify long blocking
       and I/O operations, as well as very short descheduling times caused
       by short-lived locks or timers.

       This tool uses in-kernel eBPF maps to store timestamps and the
       histogram, for efficiency. Despite this, the overhead of this tool
       may become significant for some workloads: see the OVERHEAD section.

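       bcc histograms are power-of-2 (log2) scaled. A minimal user-space
       sketch of that bucketing idea follows, purely illustrative: the tool
       itself performs this aggregation inside an eBPF map, not in Python.

```python
def log2_histogram(samples_us):
    """Bucket microsecond samples into power-of-2 slots, in the
    style of bcc's log2 histograms (illustrative sketch only)."""
    buckets = {}
    for us in samples_us:
        # Slot s covers the range [2**(s-1), 2**s - 1] microseconds;
        # e.g. 3us -> slot 2 (range 2-3), 120us -> slot 7 (range 64-127).
        slot = max(int(us).bit_length(), 1)
        buckets[slot] = buckets.get(slot, 0) + 1
    return buckets

hist = log2_histogram([3, 5, 6, 120])
```

       Because only one counter per power-of-2 range is kept, the full
       distribution can be stored in a small fixed-size map regardless of
       how many scheduler events occur.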
       Since this uses BPF, only the root user can use this tool.

REQUIREMENTS
       CONFIG_BPF and bcc.

OPTIONS
       -h     Print usage message.

       -O     Measure off-CPU time instead of on-CPU time.

       -T     Include timestamps on output.

       -m     Output histogram in milliseconds.

       -P     Print a histogram for each PID (tgid from the kernel's
              perspective).

       -L     Print a histogram for each TID (pid from the kernel's
              perspective).

       -p PID Only show this PID (filtered in kernel for efficiency).

       interval
              Output interval, in seconds.

       count  Number of outputs.

EXAMPLES
       Summarize task on-CPU time as a histogram:
              # cpudist

       Summarize task off-CPU time as a histogram:
              # cpudist -O

       Print 1 second summaries, 10 times:
              # cpudist 1 10

       Print 1 second summaries, using milliseconds as units for the
       histogram, and include timestamps on output:
              # cpudist -mT 1

       Trace PID 185 only, with 1 second summaries:
              # cpudist -p 185 1

FIELDS
       usecs  Microsecond range

       msecs  Millisecond range

       count  How many times a task event fell into this range

       distribution
              An ASCII bar chart to visualize the distribution (count
              column)

OVERHEAD
       This traces scheduler tracepoints, which can become very frequent.
       While eBPF has very low overhead, and this tool uses in-kernel maps
       for efficiency, the frequency of scheduler events for some workloads
       may be high enough that the overhead of this tool becomes
       significant. Measure in a lab environment to quantify the overhead
       before use.

SOURCE
       This is from bcc.

              https://github.com/iovisor/bcc

       Also look in the bcc distribution for a companion _example.txt file
       containing example usage, output, and commentary for this tool.

OS
       Linux

STABILITY
       Unstable - in development.

AUTHOR
       Sasha Goldshtein

SEE ALSO
       pidstat(1), runqlat(8)

USER COMMANDS                     2016-06-28                        cpudist(8)