drsnoop(8)                  System Manager's Manual                 drsnoop(8)


NAME
       drsnoop - Trace direct reclaim events. Uses Linux eBPF/bcc.

SYNOPSIS
       drsnoop.py [-h] [-T] [-U] [-p PID] [-t TID] [-u UID] [-d DURATION]
              [-n name] [-v]

DESCRIPTION
       drsnoop traces direct reclaim events, showing which processes are
       allocating pages via direct reclaim. This can be useful for
       discovering whether or not critical processes are the cause when
       allocstall (/proc/vmstat) continues to increase.
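As a quick first check before tracing (a sketch, not part of drsnoop itself; the allocstall counter names vary by kernel version, e.g. a single allocstall on older kernels versus allocstall_normal and allocstall_movable on newer ones), the counters can be read directly:

```shell
# Print the direct reclaim stall counters from /proc/vmstat.
# Steady growth here indicates processes are entering direct reclaim.
grep allocstall /proc/vmstat
```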

       This works by tracing the direct reclaim events using kernel
       tracepoints.

       This makes use of a Linux 4.4 feature (bpf_perf_event_output()); for
       kernels older than 4.4, see the version under tools/old, which uses
       an older mechanism.

       Since this uses BPF, only the root user can use this tool.

REQUIREMENTS
       CONFIG_BPF and bcc.

OPTIONS
       -h     Print usage message.

       -T     Include a timestamp column.

       -U     Show UID.

       -p PID Trace this process ID only (filtered in-kernel).

       -t TID Trace this thread ID only (filtered in-kernel).

       -u UID Trace this UID only (filtered in-kernel).

       -d DURATION
              Total duration of trace in seconds.

       -n name
              Only print processes whose name partially matches 'name'.

       -v     Run in verbose mode. Will output system memory state.

EXAMPLES
       Trace all direct reclaim events:
              # drsnoop

       Trace all direct reclaim events, for 10 seconds only:
              # drsnoop -d 10

       Trace all direct reclaim events, and include timestamps:
              # drsnoop -T

       Show UID:
              # drsnoop -U

       Trace PID 181 only:
              # drsnoop -p 181

       Trace UID 1000 only:
              # drsnoop -u 1000

       Trace all direct reclaim events from processes whose name partially
       matches 'mond':
              # drsnoop -n mond

FIELDS
       TIME(s)
              Time of the call, in seconds.

       UID    User ID

       PID    Process ID

       TID    Thread ID

       COMM   Process name

OVERHEAD
       This traces the kernel direct reclaim tracepoints and prints output
       for each event. As the rate of these events is generally expected to
       be low (< 1000/s), the overhead is also expected to be negligible.
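One way to sanity-check that expected event rate on a given host (a rough sketch, assuming a POSIX shell and awk; the allocstall counter names vary by kernel version) is to sample /proc/vmstat twice and print the per-second delta:

```shell
#!/bin/sh
# Sum all allocstall* counters, sample twice one second apart, and
# print the per-second direct reclaim stall rate.
read_stalls() {
    awk '/^allocstall/ { s += $2 } END { print s + 0 }' /proc/vmstat
}
a=$(read_stalls)
sleep 1
b=$(read_stalls)
echo "direct reclaim stalls/s: $((b - a))"
```

A rate far above 1000/s would suggest drsnoop's per-event overhead could become noticeable.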

SOURCE
       This is from bcc.

              https://github.com/iovisor/bcc

       Also look in the bcc distribution for a companion _examples.txt file
       containing example usage, output, and commentary for this tool.

OS
       Linux

STABILITY
       Unstable - in development.

AUTHOR
       Wenbo Zhang

USER COMMANDS                     2019-02-20                        drsnoop(8)