offcputime(8)               System Manager's Manual              offcputime(8)



NAME
       offcputime - Summarize off-CPU time by kernel stack trace. Uses Linux
       eBPF/bcc.

SYNOPSIS
       offcputime [-h] [-p PID | -t TID | -u | -k] [-U | -K] [-d] [-f]
       [--stack-storage-size STACK_STORAGE_SIZE] [-m MIN_BLOCK_TIME]
       [-M MAX_BLOCK_TIME] [--state STATE] [duration]

DESCRIPTION
       This program shows stack traces and task names that were blocked and
       "off-CPU", and the total duration they were not running: their
       "off-CPU time". It works by tracing when threads block and when they
       return to CPU, measuring both the time they were off-CPU and the
       blocked stack trace and task name. This data is summarized in the
       kernel using an eBPF map, by summing the off-CPU time by unique
       stack trace and task name.

       The output summary will help you identify reasons why threads were
       blocking, and quantify the time they were off-CPU. This spans all
       types of blocking activity: disk I/O, network I/O, locks, page
       faults, involuntary context switches, etc.

       This is complementary to CPU profiling (e.g., CPU flame graphs),
       which shows the time spent on-CPU. This shows the time spent
       off-CPU, and the output, especially the -f format, can be used to
       generate an "off-CPU time flame graph".

       See http://www.brendangregg.com/FlameGraphs/offcpuflamegraphs.html

       This tool only works on Linux 4.6+. It uses the new BPF_STACK_TRACE
       table APIs to generate the in-kernel stack traces. For kernels older
       than 4.6, see the version under tools/old.

       Note: this tool only traces off-CPU times that began and ended while
       tracing.

REQUIREMENTS
       CONFIG_BPF and bcc.

OPTIONS
       -h     Print usage message.

       -p PID Trace this process ID only (filtered in-kernel).

       -t TID Trace this thread ID only (filtered in-kernel).

       -u     Only trace user threads (no kernel threads).

       -k     Only trace kernel threads (no user threads).

       -U     Show stacks from user space only (no kernel space stacks).

       -K     Show stacks from kernel space only (no user space stacks).

       -d     Insert delimiter between kernel/user stacks.

       -f     Print output in folded stack format.

       --stack-storage-size STACK_STORAGE_SIZE
              Change the number of unique stack traces that can be stored
              and displayed.

       -m MIN_BLOCK_TIME
              The minimum time in microseconds over which we store traces
              (default 1).

       -M MAX_BLOCK_TIME
              The maximum time in microseconds under which we store traces
              (default U64_MAX).

       --state STATE
              Filter on this thread state bitmask (e.g., 2 ==
              TASK_UNINTERRUPTIBLE). See include/linux/sched.h for states.

       duration
              Duration to trace, in seconds.

EXAMPLES
       Trace all thread blocking events, and summarize (in-kernel) by
       kernel stack trace and total off-CPU time:
              # offcputime

       Trace for 5 seconds only:
              # offcputime 5

       Trace for 5 seconds, and emit output in folded stack format
       (suitable for flame graphs):
              # offcputime -f 5
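
       The folded output can be turned into an off-CPU time flame graph.
       As a sketch, assuming flamegraph.pl from the FlameGraph repository
       (https://github.com/brendangregg/FlameGraph) is available and using
       placeholder file names:
              # offcputime -f 30 > out.folded
              # flamegraph.pl --color=io --countname=us < out.folded > out.svg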

       Trace PID 185 only:
              # offcputime -p 185
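
       Combining options documented above, for example to show kernel
       stacks from kernel threads only:
              # offcputime -k -K

       Or, using --state, to only include threads that blocked in
       TASK_UNINTERRUPTIBLE (state bitmask 2):
              # offcputime --state 2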

OVERHEAD
       This summarizes unique stack traces in-kernel for efficiency,
       allowing it to trace a higher rate of events than methods that
       post-process in user space. The stack trace and time data is only
       copied to user space once, when the output is printed. While these
       techniques greatly lower overhead, scheduler events are still high
       frequency, as they can exceed 1 million events per second, and so
       caution should still be used. Test before production use.

       If the overhead is still a problem, take a look at the -m
       MIN_BLOCK_TIME option, or the MINBLOCK_US tunable in the code. If
       your aim is to chase down longer blocking events, this threshold can
       be increased to filter out shorter blocking events, further lowering
       overhead.
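
       For example, using the -m option documented above, an illustrative
       threshold of 10000 microseconds (10 ms) records only the longer
       blocking events:
              # offcputime -m 10000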

SOURCE
       This is from bcc.

              https://github.com/iovisor/bcc

       Also look in the bcc distribution for a companion _examples.txt file
       containing example usage, output, and commentary for this tool.

OS
       Linux

STABILITY
       Unstable - in development.

AUTHOR
       Brendan Gregg

SEE ALSO
       stackcount(8)


USER COMMANDS                     2016-01-14                     offcputime(8)