numad(8)                        Administration                        numad(8)



NAME
       numad - A user-level daemon that provides placement advice and process
       management for efficient use of CPUs and memory on systems with NUMA
       topology.

SYNOPSIS
       numad [-dhvV]

       numad [-C 0|1]

       numad [-H THP_hugepage_scan_sleep_ms]

       numad [-i [min_interval:]max_interval]

       numad [-K 0|1]

       numad [-l log_level]

       numad [-m target_memory_locality]

       numad [-p PID]

       numad [-r PID]

       numad [-R reserved-CPU-list]

       numad [-S 0|1]

       numad [-t logical_CPU_percent]

       numad [-u target_utilization]

       numad [-w NCPUS[:MB]]

       numad [-x PID]

DESCRIPTION
       Numad is a system daemon that monitors NUMA topology and resource
       usage.  It will attempt to locate processes for efficient NUMA
       locality and affinity, dynamically adjusting to changing system
       conditions.  Numad also provides guidance to assist management
       applications with initial manual binding of CPU and memory resources
       for their processes.  Note that numad is primarily intended for
       server consolidation environments, where there might be multiple
       applications or multiple virtual guests running on the same server
       system.  Numad is most likely to have a positive effect when
       processes can be localized in a subset of the system's NUMA nodes.
       If the entire system is dedicated to a large in-memory database
       application, for example -- especially if memory accesses will
       likely remain unpredictable -- numad will probably not improve
       performance.

OPTIONS
       -C <0|1>
              This option controls whether or not numad treats inactive
              file cache as available memory.  By default, numad assumes it
              can count inactive file cache as "free" memory when
              considering resources to match with processes.  Specify -C 0
              if numad should instead consider inactive file cache as a
              consumed resource.

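              For example, on a system where inactive file cache should not
              be counted as room for managed processes, the daemon might be
              started with:

                   numad -C 0
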
       -d     Debug output in log, sets the log level to LOG_DEBUG.  Same
              effect as -l 7.

       -h     Display usage help information and then exit.

       -H <THP_scan_sleep_ms>
              Set the desired transparent hugepage scan interval in ms.
              The /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
              tunable is usually set to 10000ms by the operating system.
              Numad changes the default to 1000ms, since it is helpful for
              the hugepage daemon to be more aggressive when memory moves
              between nodes.  Specifying (-H 0) will cause numad to retain
              the system default value.  You can also make the hugepage
              daemon more or less aggressive by specifying an alternate
              value with this option.  For example, setting this value to
              100ms (-H 100) might improve the performance of workloads
              which use many transparent hugepages.

       -i <[min_interval:]max_interval>
              Sets the time interval that numad waits between system scans,
              in seconds, to <max_interval>.  The default <max_interval> is
              15 seconds; the default <min_interval> is 5 seconds.  Setting
              a <max_interval> of zero will cause the daemon to exit.
              (This is the normal mechanism to terminate the daemon.)  A
              bigger <max_interval> will decrease numad overhead but also
              decrease responsiveness to changing loads.  The default numad
              max_interval can be changed in the numad.conf file.

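              For example, to have numad wait at least 10 and at most 30
              seconds between scans, the daemon could be started with:

                   numad -i 10:30
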
       -K <0|1>
              This option controls whether numad keeps interleaved memory
              spread across NUMA nodes, or attempts to merge interleaved
              memory to local NUMA nodes.  The default is to merge
              interleaved memory.  This is the appropriate setting to
              localize processes in a subset of the system's NUMA nodes.
              If you are running a large, single-instance application that
              allocates interleaved memory because the workload will have
              continuous unpredictable memory access patterns (e.g. a large
              in-memory database), you might get better results by
              specifying -K 1 to instruct numad to keep interleaved memory
              distributed.

       -l <log_level>
              Sets the log level to <log_level>.  Reasonable choices are 5,
              6, or 7.  The default value is 5.  Note that CPU values are
              scaled by a factor of 100 internally and in the numad log
              files.  Unfortunately, you don't actually have that many
              CPUs.

       -m <target_memory_locality>
              Set the desired memory locality threshold to stop moving
              process memory.  Numad might stop retrying to coalesce
              process memory when more than this percentage of the
              process's memory is already localized in the target node(s).
              The default is 90%.  Numad will frequently localize more than
              the localization threshold percent, but it will not
              necessarily do so.  Decrease the threshold to allow numad to
              leave more process memory distributed on various nodes.
              Increase the threshold to instruct numad to try to localize
              more memory.  Acceptable values are between 50 and 100
              percent.  Note that setting the target memory locality to
              100% might cause numad to continually retry to move memory
              that the kernel will never successfully move.

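              For example, to let numad stop coalescing once roughly three
              quarters of a process's memory is local:

                   numad -m 75
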
       -p <PID>
              Add PID to the explicit inclusion list of processes to
              consider for managing, if the process also uses significant
              resources.  Multiple -p PID options can be specified at
              daemon start, but after daemon start, only one PID can be
              added to the inclusion list per subsequent numad invocation.
              Use with -S to precisely control the scope of processes numad
              can manage.  Note that the specified process will not
              necessarily be actively managed unless it also meets numad's
              significance threshold -- which is currently 300MB and half
              of a CPU.

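              For example, assuming a process of interest has PID 12345 (a
              hypothetical value), it could be added to the inclusion list
              of an already running daemon with:

                   numad -p 12345
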
       -r <PID>
              Remove PID from both the explicit inclusion and the exclusion
              lists of processes.  After daemon start, only one PID can be
              removed from the explicit process lists per subsequent numad
              invocation.  Use with -S and -p and -x to precisely control
              the scope of processes numad can manage.

       -R <CPU_LIST>
              Specify a list of CPUs that numad should assume are reserved
              for non-numad use.  No processes will be bound to the
              specified CPUs by numad.  This option is effective only when
              starting numad.  You cannot change reserved CPUs dynamically
              while numad is already running.

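              For example, to keep numad from binding any process to CPUs 0
              and 8 (an illustrative reservation), start the daemon with:

                   numad -R 0,8
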
       -S <0|1>
              This option controls whether numad scans all system processes
              or only the processes on the explicit inclusion PID list.
              The default is to scan all processes.  Use -S 0 to scan only
              the explicit inclusion PID list.  Use -S 1 to again scan all
              system processes (excepting those on the explicit exclusion
              list).  Starting numad as

                   numad -S 0 -p <PID-1> -p <PID-2> -p <PID-3>

              will limit scanning, and thus also automatic NUMA management,
              to only those three explicitly specified processes.

       -t <logical_CPU_percent>
              Specify the resource value of logical CPUs.  Hardware threads
              typically share most core resources, and so logical CPUs add
              only a fraction of CPU power for many workloads.  By default
              numad considers logical CPUs to be only 20 percent of a
              dedicated hardware core.

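              For example, if hyperthreading benefits your workloads more
              than the default assumes, you might count each logical CPU as
              half a dedicated core:

                   numad -t 50
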
       -u <target_utilization>
              Set the desired maximum consumption percentage of a node.
              Default is 85%.  Decrease the target value to maintain more
              available resource margin on each node.  Increase the target
              value to more exhaustively consume node resources.  If you
              have sized your workloads to precisely fit inside a NUMA
              node, specifying (-u 100) might improve system performance by
              telling numad to go ahead and consume all the resources in
              each node.  It is possible to specify values up to 130
              percent to oversubscribe CPUs in the nodes, but memory
              utilization is always capped at 100%.  Use oversubscription
              values very carefully.

       -v     Verbose output in log, sets the log level to LOG_INFO.  Same
              effect as -l 6.

       -V     Display version information and exit.

       -w <NCPUS[:MB]>
              Queries numad for the best NUMA nodes to bind an entity that
              needs <NCPUS>.  The amount of memory (in MBs) is optional,
              but should normally be specified as well (<:MB>) so numad can
              recommend NUMA nodes with available CPU capacity and adequate
              free memory.  This query option can be used regardless of
              whether numad is running as a daemon.  (An invocation using
              this option when numad is not running as a daemon will not
              cause the daemon to start.)  Output of this option is a
              string that contains a NUMA node list.  For example: 2-3,6.
              The recommended node list could be saved in a shell variable
              (e.g., NODES) and then used as the node list parameter in a

                   numactl -m $NODES -N $NODES ...

              command.  See numactl(8).

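              For example, a minimal sketch of pre-placing a hypothetical
              application that needs 4 CPUs and 8GB of memory:

                   NODES=$(numad -w 4:8192)
                   numactl -m $NODES -N $NODES <your_application>
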
       -x <PID>
              Add PID to the explicit exclusion list of processes to
              exclude from numad management.  Multiple -x PID options can
              be specified at daemon start, but after daemon start, only
              one PID can be added to the exclusion list per subsequent
              numad invocation.  Use with -S to precisely control the scope
              of processes numad can manage.

FILES
       /usr/bin/numad
       /etc/numad.conf
       /var/log/numad.log
       /var/run/numad.pid

ENVIRONMENT VARIABLES
       None.

EXAMPLES
       Numad can be run as a system daemon and can be managed by the
       standard init mechanisms of the host.

       If interactive (manual) control is desired, you can start the daemon
       manually by typing:

            /usr/bin/numad

       Subsequent numad invocations while the daemon is running can be used
       to dynamically change most run-time options.

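       For example, while the daemon is running, the target utilization
       could be lowered to 80% (an illustrative value) with:

            /usr/bin/numad -u 80
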
       You can terminate the numad daemon by typing:

            /usr/bin/numad -i0

AUTHOR
       Bill Gray <bgray@redhat.com>

SEE ALSO
       numactl(8)



Bill Gray                            1.0.0                            numad(8)