JEMALLOC(3)                      User Manual                      JEMALLOC(3)

NAME
       jemalloc - general purpose memory allocation functions

LIBRARY
       This manual describes jemalloc
       5.3.0-0-g54eaed1d8b56b1aa528be3bdd1877e59c56fa90c. More information
       can be found at the jemalloc website[1].

SYNOPSIS
       #include <jemalloc/jemalloc.h>

   Standard API
       void *malloc(size_t size);

       void *calloc(size_t number, size_t size);

       int posix_memalign(void **ptr, size_t alignment, size_t size);

       void *aligned_alloc(size_t alignment, size_t size);

       void *realloc(void *ptr, size_t size);

       void free(void *ptr);

   Non-standard API
       void *mallocx(size_t size, int flags);

       void *rallocx(void *ptr, size_t size, int flags);

       size_t xallocx(void *ptr, size_t size, size_t extra, int flags);

       size_t sallocx(void *ptr, int flags);

       void dallocx(void *ptr, int flags);

       void sdallocx(void *ptr, size_t size, int flags);

       size_t nallocx(size_t size, int flags);

       int mallctl(const char *name, void *oldp, size_t *oldlenp,
                   void *newp, size_t newlen);

       int mallctlnametomib(const char *name, size_t *mibp,
                            size_t *miblenp);

       int mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
                        size_t *oldlenp, void *newp, size_t newlen);

       void malloc_stats_print(void (*write_cb)(void *, const char *),
                               void *cbopaque, const char *opts);

       size_t malloc_usable_size(const void *ptr);

       void (*malloc_message)(void *cbopaque, const char *s);

       const char *malloc_conf;

DESCRIPTION
   Standard API
       The malloc() function allocates size bytes of uninitialized memory.
       The allocated space is suitably aligned (after possible pointer
       coercion) for storage of any type of object.

       The calloc() function allocates space for number objects, each size
       bytes in length. The result is identical to calling malloc() with an
       argument of number * size, with the exception that the allocated
       memory is explicitly initialized to zero bytes.

       The posix_memalign() function allocates size bytes of memory such
       that the allocation's base address is a multiple of alignment, and
       returns the allocation in the value pointed to by ptr. The requested
       alignment must be a power of 2 at least as large as sizeof(void *).

       The aligned_alloc() function allocates size bytes of memory such
       that the allocation's base address is a multiple of alignment. The
       requested alignment must be a power of 2. Behavior is undefined if
       size is not an integral multiple of alignment.
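
       For example (a minimal sketch, assuming the SYNOPSIS header and
       <stdlib.h> are included; the size and alignment values are purely
       illustrative), a 64-byte-aligned buffer can be requested as follows:

           void *buf;
           /* Request 4096 bytes aligned to a 64-byte boundary. */
           if (posix_memalign(&buf, 64, 4096) != 0) {
                   /* A non-zero return indicates allocation failure. */
                   abort();
           }
           /* ... use buf ... */
           free(buf);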

       The realloc() function changes the size of the previously allocated
       memory referenced by ptr to size bytes. The contents of the memory
       are unchanged up to the lesser of the new and old sizes. If the new
       size is larger, the contents of the newly allocated portion of the
       memory are undefined. Upon success, the memory referenced by ptr is
       freed and a pointer to the newly allocated memory is returned. Note
       that realloc() may move the memory allocation, resulting in a
       different return value than ptr. If ptr is NULL, the realloc()
       function behaves identically to malloc() for the specified size.

       The free() function causes the allocated memory referenced by ptr
       to be made available for future allocations. If ptr is NULL, no
       action occurs.

   Non-standard API
       The mallocx(), rallocx(), xallocx(), sallocx(), dallocx(),
       sdallocx(), and nallocx() functions all have a flags argument that
       can be used to specify options. The functions only check the options
       that are contextually relevant. Use bitwise or (|) operations to
       specify one or more of the following:

       MALLOCX_LG_ALIGN(la)
           Align the memory allocation to start at an address that is a
           multiple of (1 << la). This macro does not validate that la is
           within the valid range.

       MALLOCX_ALIGN(a)
           Align the memory allocation to start at an address that is a
           multiple of a, where a is a power of two. This macro does not
           validate that a is a power of 2.

       MALLOCX_ZERO
           Initialize newly allocated memory to contain zero bytes. In the
           growing reallocation case, the real size prior to reallocation
           defines the boundary between untouched bytes and those that are
           initialized to contain zero bytes. If this macro is absent,
           newly allocated memory is uninitialized.

       MALLOCX_TCACHE(tc)
           Use the thread-specific cache (tcache) specified by the
           identifier tc, which must have been acquired via the
           tcache.create mallctl. This macro does not validate that tc
           specifies a valid identifier.

       MALLOCX_TCACHE_NONE
           Do not use a thread-specific cache (tcache). Unless
           MALLOCX_TCACHE(tc) or MALLOCX_TCACHE_NONE is specified, an
           automatically managed tcache will be used under many
           circumstances. This macro cannot be used in the same flags
           argument as MALLOCX_TCACHE(tc).

       MALLOCX_ARENA(a)
           Use the arena specified by the index a. This macro has no
           effect for regions that were allocated via an arena other than
           the one specified. This macro does not validate that a
           specifies an arena index in the valid range.

       The mallocx() function allocates at least size bytes of memory, and
       returns a pointer to the base address of the allocation. Behavior
       is undefined if size is 0.

       The rallocx() function resizes the allocation at ptr to be at least
       size bytes, and returns a pointer to the base address of the
       resulting allocation, which may or may not have moved from its
       original location. Behavior is undefined if size is 0.

       The xallocx() function resizes the allocation at ptr in place to be
       at least size bytes, and returns the real size of the allocation.
       If extra is non-zero, an attempt is made to resize the allocation
       to be at least (size + extra) bytes, though inability to allocate
       the extra byte(s) will not by itself result in failure to resize.
       Behavior is undefined if size is 0, or if
       (size + extra > SIZE_T_MAX).

       The sallocx() function returns the real size of the allocation at
       ptr.

       The dallocx() function causes the memory referenced by ptr to be
       made available for future allocations.

       The sdallocx() function is an extension of dallocx() with a size
       parameter to allow the caller to pass in the allocation size as an
       optimization. The minimum valid input size is the original
       requested size of the allocation, and the maximum valid input size
       is the corresponding value returned by nallocx() or sallocx().

       The nallocx() function allocates no memory, but it performs the
       same size computation as the mallocx() function, and returns the
       real size of the allocation that would result from the equivalent
       mallocx() function call, or 0 if the inputs exceed the maximum
       supported size class and/or alignment. Behavior is undefined if
       size is 0.
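
       As a sketch of how these functions compose (assuming the SYNOPSIS
       header is included; the size and flag values are illustrative):

           /* A 4 KiB allocation, 64-byte aligned and zero-filled. */
           int flags = MALLOCX_ALIGN(64) | MALLOCX_ZERO;
           void *p = mallocx(4096, flags);
           if (p != NULL) {
                   /* sallocx() reports the real (possibly rounded-up)
                      size; nallocx(4096, flags) would have predicted the
                      same value without allocating. */
                   size_t real = sallocx(p, flags);
                   /* Size-aware deallocation; any size from the original
                      request up to `real` is a valid argument. */
                   sdallocx(p, 4096, flags);
           }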

       The mallctl() function provides a general interface for
       introspecting the memory allocator, as well as setting modifiable
       parameters and triggering actions. The period-separated name
       argument specifies a location in a tree-structured namespace; see
       the MALLCTL NAMESPACE section for documentation on the tree
       contents. To read a value, pass a pointer via oldp to adequate
       space to contain the value, and a pointer to its length via
       oldlenp; otherwise pass NULL and NULL. Similarly, to write a value,
       pass a pointer to the value via newp, and its length via newlen;
       otherwise pass NULL and 0.
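
       For example (a minimal sketch, assuming <stdbool.h> and the
       SYNOPSIS header are included; the names used are documented in the
       MALLCTL NAMESPACE section):

           unsigned narenas;
           size_t len = sizeof(narenas);
           /* Read the opt.narenas value. */
           mallctl("opt.narenas", &narenas, &len, NULL, 0);

           /* Write thread.tcache.enabled to disable the tcache. */
           bool enabled = false;
           mallctl("thread.tcache.enabled", NULL, NULL, &enabled,
               sizeof(enabled));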

       The mallctlnametomib() function provides a way to avoid repeated
       name lookups for applications that repeatedly query the same
       portion of the namespace, by translating a name to a “Management
       Information Base” (MIB) that can be passed repeatedly to
       mallctlbymib(). Upon successful return from mallctlnametomib(),
       mibp contains an array of *miblenp integers, where *miblenp is the
       lesser of the number of components in name and the input value of
       *miblenp. Thus it is possible to pass a *miblenp that is smaller
       than the number of period-separated name components, which results
       in a partial MIB that can be used as the basis for constructing a
       complete MIB. For name components that are integers (e.g. the 2 in
       arenas.bin.2.size), the corresponding MIB component will always be
       that integer. Therefore, it is legitimate to construct code like
       the following:

           unsigned nbins, i;
           size_t mib[4];
           size_t len, miblen;

           len = sizeof(nbins);
           mallctl("arenas.nbins", &nbins, &len, NULL, 0);

           miblen = 4;
           mallctlnametomib("arenas.bin.0.size", mib, &miblen);
           for (i = 0; i < nbins; i++) {
                   size_t bin_size;

                   mib[2] = i;
                   len = sizeof(bin_size);
                   mallctlbymib(mib, miblen, (void *)&bin_size, &len,
                       NULL, 0);
                   /* Do something with bin_size... */
           }

       The malloc_stats_print() function writes summary statistics via the
       write_cb callback function pointer and cbopaque data passed to
       write_cb, or malloc_message() if write_cb is NULL. The statistics
       are presented in human-readable form unless “J” is specified as a
       character within the opts string, in which case the statistics are
       presented in JSON format[2]. This function can be called
       repeatedly. General information that never changes during execution
       can be omitted by specifying “g” as a character within the opts
       string. Note that malloc_stats_print() uses the mallctl*()
       functions internally, so inconsistent statistics can be reported if
       multiple threads use these functions simultaneously. If
       --enable-stats is specified during configuration, “m”, “d”, and “a”
       can be specified to omit merged arena, destroyed merged arena, and
       per arena statistics, respectively; “b” and “l” can be specified to
       omit per size class statistics for bins and large objects,
       respectively; “x” can be specified to omit all mutex statistics;
       “e” can be used to omit extent statistics. Unrecognized characters
       are silently ignored. Note that thread caching may prevent some
       statistics from being completely up to date, since extra locking
       would be required to merge counters that track thread cache
       operations.
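
       For example, the following call prints human-readable statistics
       (a NULL write_cb falls back to malloc_message()), omitting the
       general information that never changes during execution:

           malloc_stats_print(NULL, NULL, "g");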

       The malloc_usable_size() function returns the usable size of the
       allocation pointed to by ptr. The return value may be larger than
       the size that was requested during allocation. The
       malloc_usable_size() function is not a mechanism for in-place
       realloc(); rather it is provided solely as a tool for introspection
       purposes. Any discrepancy between the requested allocation size and
       the size reported by malloc_usable_size() should not be depended
       on, since such behavior is entirely implementation-dependent.
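
       A short sketch (the reported value is implementation-dependent; 16
       is merely what a typical configuration might return for a 9-byte
       request):

           void *p = malloc(9);
           if (p != NULL) {
                   size_t usable = malloc_usable_size(p);  /* e.g. 16 */
                   /* All `usable` bytes may be used, but no particular
                      value should be relied upon. */
                   free(p);
           }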

TUNING
       Once, when the first call is made to one of the memory allocation
       routines, the allocator initializes its internals based in part on
       various options that can be specified at compile- or run-time.

       The string specified via --with-malloc-conf, the string pointed to
       by the global variable malloc_conf, the “name” of the file
       referenced by the symbolic link named /etc/malloc.conf, and the
       value of the environment variable MALLOC_CONF, will be interpreted,
       in that order, from left to right as options. Note that malloc_conf
       may be read before main() is entered, so the declaration of
       malloc_conf should specify an initializer that contains the final
       value to be read by jemalloc. --with-malloc-conf and malloc_conf
       are compile-time mechanisms, whereas /etc/malloc.conf and
       MALLOC_CONF can be safely set any time prior to program invocation.

       An options string is a comma-separated list of option:value pairs.
       There is one key corresponding to each opt.* mallctl (see the
       MALLCTL NAMESPACE section for options documentation). For example,
       abort:true,narenas:1 sets the opt.abort and opt.narenas options.
       Some options have boolean values (true/false), others have integer
       values (base 8, 10, or 16, depending on prefix), and yet others
       have raw string values.
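
       For example (a sketch; both mechanisms set the same opt.* options),
       the options can be baked into the binary via the malloc_conf
       global, or supplied at launch time through the environment (the
       program name below is a placeholder):

           /* In the application source, at file scope: */
           const char *malloc_conf = "abort:true,narenas:1";

       or, equivalently, in the shell:

           MALLOC_CONF="abort:true,narenas:1" ./a.out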

IMPLEMENTATION NOTES
       Traditionally, allocators have used sbrk(2) to obtain memory, which
       is suboptimal for several reasons, including race conditions,
       increased fragmentation, and artificial limitations on maximum
       usable memory. If sbrk(2) is supported by the operating system,
       this allocator uses both mmap(2) and sbrk(2), in that order of
       preference; otherwise only mmap(2) is used.

       This allocator uses multiple arenas in order to reduce lock
       contention for threaded programs on multi-processor systems. This
       works well with regard to threading scalability, but incurs some
       costs. There is a small fixed per-arena overhead, and additionally,
       arenas manage memory completely independently of each other, which
       means a small fixed increase in overall memory fragmentation. These
       overheads are not generally an issue, given the number of arenas
       normally used. Note that using substantially more arenas than the
       default is not likely to improve performance, mainly due to reduced
       cache performance. However, it may make sense to reduce the number
       of arenas if an application does not make much use of the
       allocation functions.

       In addition to multiple arenas, this allocator supports
       thread-specific caching, in order to make it possible to completely
       avoid synchronization for most allocation requests. Such caching
       allows very fast allocation in the common case, but it increases
       memory usage and fragmentation, since a bounded number of objects
       can remain allocated in each thread cache.

       Memory is conceptually broken into extents. Extents are always
       aligned to multiples of the page size. This alignment makes it
       possible to find metadata for user objects quickly. User objects
       are broken into two categories according to size: small and large.
       Contiguous small objects comprise a slab, which resides within a
       single extent, whereas large objects each have their own extents
       backing them.

       Small objects are managed in groups by slabs. Each slab maintains a
       bitmap to track which regions are in use. Allocation requests that
       are no more than half the quantum (8 or 16, depending on
       architecture) are rounded up to the nearest power of two that is at
       least sizeof(double). All other object size classes are multiples
       of the quantum, spaced such that there are four size classes for
       each doubling in size, which limits internal fragmentation to
       approximately 20% for all but the smallest size classes. Small size
       classes are smaller than four times the page size, and large size
       classes extend from four times the page size up to the largest size
       class that does not exceed PTRDIFF_MAX.

       Allocations are packed tightly together, which can be an issue for
       multi-threaded applications. If you need to assure that allocations
       do not suffer from cacheline sharing, round your allocation
       requests up to the nearest multiple of the cacheline size, or
       specify cacheline alignment when allocating.
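
       A minimal sketch of both approaches, assuming a 64-byte cacheline
       (the actual size is architecture-dependent):

           #define CACHELINE ((size_t)64)  /* assumed cacheline size */

           size_t sz = 100;
           /* Round the request up to a cacheline multiple... */
           size_t rounded = (sz + CACHELINE - 1) & ~(CACHELINE - 1);
           /* ...and/or request cacheline alignment explicitly. */
           void *p = mallocx(rounded, MALLOCX_ALIGN(CACHELINE));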

       The realloc(), rallocx(), and xallocx() functions may resize
       allocations without moving them under limited circumstances. Unlike
       the *allocx() API, the standard API does not officially round up
       the usable size of an allocation to the nearest size class, so
       technically it is necessary to call realloc() to grow e.g. a 9-byte
       allocation to 16 bytes, or shrink a 16-byte allocation to 9 bytes.
       Growth and shrinkage trivially succeed in place as long as the
       pre-size and post-size both round up to the same size class. No
       other API guarantees are made regarding in-place resizing, but the
       current implementation also tries to resize large allocations in
       place, as long as the pre-size and post-size are both large. For
       shrinkage to succeed, the extent allocator must support splitting
       (see arena.<i>.extent_hooks). Growth only succeeds if the trailing
       memory is currently available, and the extent allocator supports
       merging.

       Assuming 4 KiB pages and a 16-byte quantum on a 64-bit system, the
       size classes in each category are as shown in Table 1.

       Table 1. Size classes
       ┌─────────┬─────────┬─────────────────────┐
       │Category │ Spacing │ Size                │
       ├─────────┼─────────┼─────────────────────┤
       │         │ lg      │ [8]                 │
       │         ├─────────┼─────────────────────┤
       │         │ 16      │ [16, 32, 48, 64,    │
       │         │         │ 80, 96, 112, 128]   │
       │         ├─────────┼─────────────────────┤
       │         │ 32      │ [160, 192, 224,     │
       │         │         │ 256]                │
       │         ├─────────┼─────────────────────┤
       │         │ 64      │ [320, 384, 448,     │
       │         │         │ 512]                │
       │         ├─────────┼─────────────────────┤
       │         │ 128     │ [640, 768, 896,     │
       │Small    │         │ 1024]               │
       │         ├─────────┼─────────────────────┤
       │         │ 256     │ [1280, 1536, 1792,  │
       │         │         │ 2048]               │
       │         ├─────────┼─────────────────────┤
       │         │ 512     │ [2560, 3072, 3584,  │
       │         │         │ 4096]               │
       │         ├─────────┼─────────────────────┤
       │         │ 1 KiB   │ [5 KiB, 6 KiB, 7    │
       │         │         │ KiB, 8 KiB]         │
       │         ├─────────┼─────────────────────┤
       │         │ 2 KiB   │ [10 KiB, 12 KiB, 14 │
       │         │         │ KiB]                │
       ├─────────┼─────────┼─────────────────────┤
       │         │ 2 KiB   │ [16 KiB]            │
       │         ├─────────┼─────────────────────┤
       │         │ 4 KiB   │ [20 KiB, 24 KiB, 28 │
       │         │         │ KiB, 32 KiB]        │
       │         ├─────────┼─────────────────────┤
       │         │ 8 KiB   │ [40 KiB, 48 KiB, 56 │
       │         │         │ KiB, 64 KiB]        │
       │         ├─────────┼─────────────────────┤
       │         │ 16 KiB  │ [80 KiB, 96 KiB,    │
       │         │         │ 112 KiB, 128 KiB]   │
       │         ├─────────┼─────────────────────┤
       │         │ 32 KiB  │ [160 KiB, 192 KiB,  │
       │         │         │ 224 KiB, 256 KiB]   │
       │         ├─────────┼─────────────────────┤
       │         │ 64 KiB  │ [320 KiB, 384 KiB,  │
       │         │         │ 448 KiB, 512 KiB]   │
       │         ├─────────┼─────────────────────┤
       │         │ 128 KiB │ [640 KiB, 768 KiB,  │
       │         │         │ 896 KiB, 1 MiB]     │
       │         ├─────────┼─────────────────────┤
       │         │ 256 KiB │ [1280 KiB, 1536     │
       │         │         │ KiB, 1792 KiB, 2    │
       │Large    │         │ MiB]                │
       │         ├─────────┼─────────────────────┤
       │         │ 512 KiB │ [2560 KiB, 3 MiB,   │
       │         │         │ 3584 KiB, 4 MiB]    │
       │         ├─────────┼─────────────────────┤
       │         │ 1 MiB   │ [5 MiB, 6 MiB, 7    │
       │         │         │ MiB, 8 MiB]         │
       │         ├─────────┼─────────────────────┤
       │         │ 2 MiB   │ [10 MiB, 12 MiB, 14 │
       │         │         │ MiB, 16 MiB]        │
       │         ├─────────┼─────────────────────┤
       │         │ 4 MiB   │ [20 MiB, 24 MiB, 28 │
       │         │         │ MiB, 32 MiB]        │
       │         ├─────────┼─────────────────────┤
       │         │ 8 MiB   │ [40 MiB, 48 MiB, 56 │
       │         │         │ MiB, 64 MiB]        │
       │         ├─────────┼─────────────────────┤
       │         │ ...     │ ...                 │
       │         ├─────────┼─────────────────────┤
       │         │ 512 PiB │ [2560 PiB, 3 EiB,   │
       │         │         │ 3584 PiB, 4 EiB]    │
       │         ├─────────┼─────────────────────┤
       │         │ 1 EiB   │ [5 EiB, 6 EiB, 7    │
       │         │         │ EiB]                │
       └─────────┴─────────┴─────────────────────┘

MALLCTL NAMESPACE
       The following names are defined in the namespace accessible via the
       mallctl*() functions. Value types are specified in parentheses,
       their readable/writable statuses are encoded as rw, r-, -w, or --,
       and required build configuration flags follow, if any. A name
       element encoded as <i> or <j> indicates an integer component, where
       the integer varies from 0 to some upper value that must be
       determined via introspection. In the case of stats.arenas.<i>.* and
       arena.<i>.{initialized,purge,decay,dss}, <i> equal to
       MALLCTL_ARENAS_ALL can be used to operate on all arenas or access
       the summation of statistics from all arenas; similarly <i> equal to
       MALLCTL_ARENAS_DESTROYED can be used to access the summation of
       statistics from all destroyed arenas. These constants can be
       utilized either via mallctlnametomib() followed by mallctlbymib(),
       or via code such as the following:

           #define STRINGIFY_HELPER(x) #x
           #define STRINGIFY(x) STRINGIFY_HELPER(x)

           mallctl("arena." STRINGIFY(MALLCTL_ARENAS_ALL) ".decay",
               NULL, NULL, NULL, 0);

       Take special note of the epoch mallctl, which controls refreshing
       of cached dynamic statistics.

       version (const char *) r-
           Return the jemalloc version string.

       epoch (uint64_t) rw
           If a value is passed in, refresh the data from which the
           mallctl*() functions report values, and increment the epoch.
           Return the current epoch. This is useful for detecting whether
           another thread caused a refresh.
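
           The refresh idiom reads and advances the epoch in a single call
           (a sketch, assuming <stdint.h> is included):

               uint64_t epoch = 1;
               size_t len = sizeof(epoch);
               /* Writing any value refreshes cached statistics; the
                  current epoch is returned through oldp. */
               mallctl("epoch", &epoch, &len, &epoch, len);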

       background_thread (bool) rw
           Enable/disable internal background worker threads. When set to
           true, background threads are created on demand (the number of
           background threads will be no more than the number of CPUs or
           active arenas). Threads run periodically, and handle purging
           asynchronously. When the option is switched off, background
           threads are terminated synchronously. Note that after a
           fork(2), background threads in the child process will be
           disabled regardless of the state in the parent process. See
           stats.background_thread for related stats.
           opt.background_thread can be used to set the default option.
           This option is only available on selected pthread-based
           platforms.

       max_background_threads (size_t) rw
           Maximum number of background worker threads that will be
           created. This value is capped at opt.max_background_threads at
           startup.

       config.cache_oblivious (bool) r-
           --enable-cache-oblivious was specified during build
           configuration.

       config.debug (bool) r-
           --enable-debug was specified during build configuration.

       config.fill (bool) r-
           --enable-fill was specified during build configuration.

       config.lazy_lock (bool) r-
           --enable-lazy-lock was specified during build configuration.

       config.malloc_conf (const char *) r-
           Embedded configure-time-specified run-time options string,
           empty unless --with-malloc-conf was specified during build
           configuration.

       config.prof (bool) r-
           --enable-prof was specified during build configuration.

       config.prof_libgcc (bool) r-
           --disable-prof-libgcc was not specified during build
           configuration.

       config.prof_libunwind (bool) r-
           --enable-prof-libunwind was specified during build
           configuration.

       config.stats (bool) r-
           --enable-stats was specified during build configuration.

       config.utrace (bool) r-
           --enable-utrace was specified during build configuration.

       config.xmalloc (bool) r-
           --enable-xmalloc was specified during build configuration.

       opt.abort (bool) r-
           Abort-on-warning enabled/disabled. If true, most warnings are
           fatal. Note that runtime option warnings are not included (see
           opt.abort_conf for that). The process will call abort(3) in
           these cases. This option is disabled by default unless
           --enable-debug is specified during configuration, in which case
           it is enabled by default.

       opt.confirm_conf (bool) r-
           Confirm-runtime-options-when-program-starts enabled/disabled.
           If true, the string specified via --with-malloc-conf, the
           string pointed to by the global variable malloc_conf, the
           “name” of the file referenced by the symbolic link named
           /etc/malloc.conf, and the value of the environment variable
           MALLOC_CONF, will be printed in order. Then, each option being
           set will be individually printed. This option is disabled by
           default.

       opt.abort_conf (bool) r-
           Abort-on-invalid-configuration enabled/disabled. If true,
           invalid runtime options are fatal. The process will call
           abort(3) in these cases. This option is disabled by default
           unless --enable-debug is specified during configuration, in
           which case it is enabled by default.

       opt.cache_oblivious (bool) r-
           Enable/disable cache-oblivious large allocation alignment, for
           large requests with no alignment constraints. If this feature
           is disabled, all large allocations are page-aligned as an
           implementation artifact, which can severely harm CPU cache
           utilization. However, the cache-oblivious layout comes at the
           cost of one extra page per large allocation, which in the most
           extreme case increases physical memory usage for the 16 KiB
           size class to 20 KiB. This option is enabled by default.

       opt.metadata_thp (const char *) r-
           Controls whether to allow jemalloc to use transparent huge
           pages (THP) for internal metadata (see stats.metadata).
           “always” allows such usage. “auto” uses no THP initially, but
           may begin to do so when metadata usage reaches a certain level.
           The default is “disabled”.

       opt.trust_madvise (bool) r-
           If true, do not perform the runtime check that verifies that
           MADV_DONTNEED actually zeros pages. The default is disabled on
           Linux and enabled elsewhere.

       opt.retain (bool) r-
           If true, retain unused virtual memory for later reuse rather
           than discarding it by calling munmap(2) or equivalent (see
           stats.retained for related details). It also makes jemalloc use
           mmap(2) or equivalent in a more greedy way, mapping larger
           chunks in one go. This option is disabled by default unless
           discarding virtual memory is known to trigger platform-specific
           performance problems, namely 1) for [64-bit] Linux, which has a
           quirk in its virtual memory allocation algorithm that causes
           semi-permanent VM map holes under normal jemalloc operation;
           and 2) for [64-bit] Windows, which disallows split / merged
           regions with MEM_RELEASE. Although the same issues may be
           present on 32-bit platforms as well, retaining virtual memory
           for 32-bit Linux and Windows is disabled by default due to the
           practical possibility of address space exhaustion.

       opt.dss (const char *) r-
           dss (sbrk(2)) allocation precedence as related to mmap(2)
           allocation. The following settings are supported if sbrk(2) is
           supported by the operating system: “disabled”, “primary”, and
           “secondary”; otherwise only “disabled” is supported. The
           default is “secondary” if sbrk(2) is supported by the operating
           system; “disabled” otherwise.

       opt.narenas (unsigned) r-
           Maximum number of arenas to use for automatic multiplexing of
           threads and arenas. The default is four times the number of
           CPUs, or one if there is a single CPU.

       opt.oversize_threshold (size_t) r-
           The threshold in bytes above which requests are considered
           oversize. Allocation requests with greater sizes are fulfilled
           from a dedicated arena (automatically managed, however not
           within narenas), in order to reduce fragmentation by not mixing
           huge allocations with small ones. In addition, the decay API
           guarantees for extents greater than the specified threshold may
           be overridden. Note that requests with an arena index specified
           via MALLOCX_ARENA, or threads associated with explicit arenas,
           will not be considered. The default threshold is 8 MiB. Values
           not within large size classes disable this feature.

       opt.percpu_arena (const char *) r-
           Per-CPU arena mode. Use the “percpu” setting to enable this
           feature, which uses the number of CPUs to determine the number
           of arenas, and binds threads to arenas dynamically based on the
           CPU the thread is currently running on. The “phycpu” setting
           uses one arena per physical CPU, which means the two hyper
           threads on the same physical CPU share one arena. Note that no
           runtime checking regarding the availability of hyper threading
           is currently done. When set to “disabled”, narenas and the
           thread-to-arena association will not be impacted by this
           option. The default is “disabled”.

       opt.background_thread (bool) r-
           Internal background worker threads enabled/disabled. Because of
           potential circular dependencies, enabling background threads
           via this option may cause a crash or deadlock during
           initialization. For a reliable way to use this feature, see
           background_thread for dynamic control options and details. This
           option is disabled by default.

       opt.max_background_threads (size_t) r-
           Maximum number of background threads that will be created if
           background_thread is set. Defaults to the number of CPUs.

       opt.dirty_decay_ms (ssize_t) r-
           Approximate time in milliseconds from the creation of a set of
           unused dirty pages until an equivalent set of unused dirty
           pages is purged (i.e. converted to muzzy via e.g.
           madvise(...MADV_FREE) if supported by the operating system, or
           converted to clean otherwise) and/or reused. Dirty pages are
           defined as previously having been potentially written to by the
           application, and therefore consuming physical memory, yet
           having no current use. The pages are incrementally purged
           according to a sigmoidal decay curve that starts and ends with
           zero purge rate. A decay time of 0 causes all unused dirty
           pages to be purged immediately upon creation. A decay time of
           -1 disables purging. The default decay time is 10 seconds. See
           arenas.dirty_decay_ms and arena.<i>.dirty_decay_ms for related
           dynamic control options. See opt.muzzy_decay_ms for a
           description of muzzy pages. Note that when the
           oversize_threshold feature is enabled, the arenas reserved for
           oversize requests may have their own default decay settings.

       opt.muzzy_decay_ms (ssize_t) r-
           Approximate time in milliseconds from the creation of a set of
           unused muzzy pages until an equivalent set of unused muzzy
           pages is purged (i.e. converted to clean) and/or reused. Muzzy
           pages are defined as previously having been unused dirty pages
           that were subsequently purged in a manner that left them
           subject to the reclamation whims of the operating system (e.g.
           madvise(...MADV_FREE)), and therefore in an indeterminate
           state. The pages are incrementally purged according to a
           sigmoidal decay curve that starts and ends with zero purge
           rate. A decay time of 0 causes all unused muzzy pages to be
           purged immediately upon creation. A decay time of -1 disables
           purging. The default decay time is 10 seconds. See
           arenas.muzzy_decay_ms and arena.<i>.muzzy_decay_ms for related
           dynamic control options.

       opt.lg_extent_max_active_fit (size_t) r-
           When reusing dirty extents, this determines the (log base 2 of
           the) maximum ratio between the size of the active extent
           selected (to split off from) and the size of the requested
           allocation. This prevents the splitting of large active extents
           for smaller allocations, which can reduce fragmentation over
           the long run (especially for non-active extents). A lower value
           may reduce fragmentation, at the cost of extra active extents.
           The default value is 6, which gives a maximum ratio of 64
           (2^6).

       opt.stats_print (bool) r-
           Enable/disable statistics printing at exit. If enabled, the
           malloc_stats_print() function is called at program exit via an
           atexit(3) function. opt.stats_print_opts can be combined to
           specify output options. If --enable-stats is specified during
           configuration, this has the potential to cause deadlock for a
           multi-threaded process that exits while one or more threads are
           executing in the memory allocation functions. Furthermore,
           atexit() may allocate memory during application initialization
           and then deadlock internally when jemalloc in turn calls
           atexit(), so this option is not universally usable (though the
           application can register its own atexit() function with
           equivalent functionality). Therefore, this option should only
           be used with care; it is primarily intended as a performance
           tuning aid during application development. This option is
           disabled by default.

       opt.stats_print_opts (const char *) r-
           Options (the opts string) to pass to malloc_stats_print() at
           exit (enabled through opt.stats_print). See available options
           in malloc_stats_print(). Has no effect unless opt.stats_print
           is enabled. The default is “”.

       opt.stats_interval (int64_t) r-
           Average interval between statistics outputs, as measured in
           bytes of allocation activity. The actual interval may be
           sporadic because decentralized event counters are used to avoid
           synchronization bottlenecks. The output may be triggered on any
           thread, which then calls malloc_stats_print().
           opt.stats_interval_opts can be combined to specify output
           options. By default, interval-triggered stats output is
           disabled (encoded as -1).

       opt.stats_interval_opts (const char *) r-
           Options (the opts string) to pass to malloc_stats_print() for
           interval-based statistics printing (enabled through
           opt.stats_interval). See available options in
           malloc_stats_print(). Has no effect unless opt.stats_interval
           is enabled. The default is “”.

       opt.junk (const char *) r- [--enable-fill]
           Junk filling. If set to “alloc”, each byte of uninitialized
           allocated memory will be initialized to 0xa5. If set to “free”,
           all deallocated memory will be initialized to 0x5a. If set to
           “true”, both allocated and deallocated memory will be
           initialized, and if set to “false”, junk filling will be
           disabled entirely. This is intended for debugging and will
           impact performance negatively. This option is “false” by
           default unless --enable-debug is specified during
           configuration, in which case it is “true” by default.

       opt.zero (bool) r- [--enable-fill]
           Zero filling enabled/disabled. If enabled, each byte of
           uninitialized allocated memory will be initialized to 0. Note
           that this initialization only happens once for each byte, so
           realloc() and rallocx() calls do not zero memory that was
           previously allocated. This is intended for debugging and will
           impact performance negatively. This option is disabled by
           default.

       opt.utrace (bool) r- [--enable-utrace]
           Allocation tracing based on utrace(2) enabled/disabled. This
           option is disabled by default.

       opt.xmalloc (bool) r- [--enable-xmalloc]
           Abort-on-out-of-memory enabled/disabled. If enabled, rather
           than returning failure for any allocation function, display a
           diagnostic message on STDERR_FILENO and cause the program to
           drop core (using abort(3)). If an application is designed to
           depend on this behavior, set the option at compile time by
           including the following in the source code:

               malloc_conf = "xmalloc:true";

           This option is disabled by default.

       opt.tcache (bool) r-
           Thread-specific caching (tcache) enabled/disabled. When there
           are multiple threads, each thread uses a tcache for objects up
           to a certain size. Thread-specific caching allows many
           allocations to be satisfied without performing any thread
           synchronization, at the cost of increased memory use. See the
           opt.tcache_max option for related tuning information. This
           option is enabled by default.

       opt.tcache_max (size_t) r-
           Maximum size class to cache in the thread-specific cache
           (tcache). At a minimum, the first size class is cached; and at
           a maximum, size classes up to 8 MiB can be cached. The default
           maximum is 32 KiB (2^15). As a convenience, this may also be
           set by specifying lg_tcache_max, which will be taken to be the
           base-2 logarithm of the setting of tcache_max.

       opt.thp (const char *) r-
           Transparent hugepage (THP) mode. Settings "always", "never" and
           "default" are available if THP is supported by the operating
           system. The "always" setting enables transparent hugepage for
           all user memory mappings with MADV_HUGEPAGE; "never" ensures no
           transparent hugepage with MADV_NOHUGEPAGE; the default setting
           "default" makes no changes. Note that this option does not
           affect THP for jemalloc internal metadata (see
           opt.metadata_thp); in addition, for arenas with customized
           extent_hooks, this option is bypassed as it is implemented as
           part of the default extent hooks.

       opt.prof (bool) r- [--enable-prof]
           Memory profiling enabled/disabled. If enabled, profile memory
           allocation activity. See the opt.prof_active option for
           on-the-fly activation/deactivation. See the opt.lg_prof_sample
           option for probabilistic sampling control. See the
           opt.prof_accum option for control of cumulative sample
           reporting. See the opt.lg_prof_interval option for information
           on interval-triggered profile dumping, the opt.prof_gdump
           option for information on high-water-triggered profile dumping,
           and the opt.prof_final option for final profile dumping.
           Profile output is compatible with the jeprof command, which is
           based on the pprof that is developed as part of the gperftools
           package[3]. See HEAP PROFILE FORMAT for heap profile format
           documentation.

       opt.prof_prefix (const char *) r- [--enable-prof]
           Filename prefix for profile dumps. If the prefix is set to the
           empty string, no automatic dumps will occur; this is primarily
           useful for disabling the automatic final heap dump (which also
           disables leak reporting, if enabled). The default prefix is
           jeprof. This prefix value can be overridden by prof.prefix.

       opt.prof_active (bool) r- [--enable-prof]
           Profiling activated/deactivated. This is a secondary control
           mechanism that makes it possible to start the application with
           profiling enabled (see the opt.prof option) but inactive, then
           toggle profiling at any time during program execution with the
           prof.active mallctl. This option is enabled by default.

       opt.prof_thread_active_init (bool) r- [--enable-prof]
           Initial setting for thread.prof.active in newly created
           threads. The initial setting for newly created threads can also
           be changed during execution via the prof.thread_active_init
           mallctl. This option is enabled by default.

       opt.lg_prof_sample (size_t) r- [--enable-prof]
           Average interval (log base 2) between allocation samples, as
           measured in bytes of allocation activity. Increasing the
           sampling interval decreases profile fidelity, but also
           decreases the computational overhead. The default sample
           interval is 512 KiB (2^19 B).

       opt.prof_accum (bool) r- [--enable-prof]
           Reporting of cumulative object/byte counts in profile dumps
           enabled/disabled. If this option is enabled, every unique
           backtrace must be stored for the duration of execution.
           Depending on the application, this can impose a large memory
           overhead, and the cumulative counts are not always of interest.
           This option is disabled by default.

       opt.lg_prof_interval (ssize_t) r- [--enable-prof]
           Average interval (log base 2) between memory profile dumps, as
           measured in bytes of allocation activity. The actual interval
           between dumps may be sporadic because decentralized allocation
           counters are used to avoid synchronization bottlenecks.
           Profiles are dumped to files named according to the pattern
           <prefix>.<pid>.<seq>.i<iseq>.heap, where <prefix> is controlled
           by the opt.prof_prefix and prof.prefix options. By default,
           interval-triggered profile dumping is disabled (encoded as -1).

       opt.prof_gdump (bool) r- [--enable-prof]
           Set the initial state of prof.gdump, which when enabled
           triggers a memory profile dump every time the total virtual
           memory exceeds the previous maximum. This option is disabled by
           default.

       opt.prof_final (bool) r- [--enable-prof]
           Use an atexit(3) function to dump final memory usage to a file
           named according to the pattern <prefix>.<pid>.<seq>.f.heap,
           where <prefix> is controlled by the opt.prof_prefix and
           prof.prefix options. Note that atexit() may allocate memory
           during application initialization and then deadlock internally
           when jemalloc in turn calls atexit(), so this option is not
           universally usable (though the application can register its own
           atexit() function with equivalent functionality). This option
           is disabled by default.

       opt.prof_leak (bool) r- [--enable-prof]
           Leak reporting enabled/disabled. If enabled, use an atexit(3)
           function to report memory leaks detected by allocation
           sampling. See the opt.prof option for information on analyzing
           heap profile output. Works only when combined with
           opt.prof_final, otherwise does nothing. This option is disabled
           by default.

       opt.prof_leak_error (bool) r- [--enable-prof]
           Similar to opt.prof_leak, but makes the process exit with error
           code 1 if a memory leak is detected. This option supersedes
           opt.prof_leak, meaning that if both are specified, this option
           takes precedence. When enabled, also enables opt.prof_leak.
           Works only when combined with opt.prof_final, otherwise does
           nothing. This option is disabled by default.

       opt.zero_realloc (const char *) r-
           Determines the behavior of realloc() when passed a value of
           zero for the new size. “alloc” treats this as an allocation of
           size zero (and returns a non-null result except in case of
           resource exhaustion). “free” treats this as a deallocation of
           the pointer, and returns NULL without setting errno. “abort”
           aborts the process if zero is passed. The default is “free” on
           Linux and Windows, and “alloc” elsewhere.

           There is considerable divergence of behaviors across
           implementations in handling this case. Many have the behavior
           of “free”. This can introduce security vulnerabilities, since
           per POSIX and C11 a NULL return value indicates failure and the
           continued validity of the passed-in pointer. “alloc” is safe,
           but can cause leaks in programs that expect the common
           behavior. Programs intended to be portable and leak-free cannot
           assume either behavior, and must therefore never call realloc
           with a size of 0. The “abort” option enables testing for this
           behavior.

       thread.arena (unsigned) rw
           Get or set the arena associated with the calling thread. If the
           specified arena was not initialized beforehand (see the
           arena.<i>.initialized mallctl), it will be automatically
           initialized as a side effect of calling this interface.

       thread.allocated (uint64_t) r- [--enable-stats]
           Get the total number of bytes ever allocated by the calling
           thread. This counter has the potential to wrap around; it is up
           to the application to appropriately interpret the counter in
           such cases.

       thread.allocatedp (uint64_t *) r- [--enable-stats]
           Get a pointer to the value that is returned by the
           thread.allocated mallctl. This is useful for avoiding the
           overhead of repeated mallctl*() calls. Note that the underlying
           counter should not be modified by the application.

       thread.deallocated (uint64_t) r- [--enable-stats]
           Get the total number of bytes ever deallocated by the calling
           thread. This counter has the potential to wrap around; it is up
           to the application to appropriately interpret the counter in
           such cases.

       thread.deallocatedp (uint64_t *) r- [--enable-stats]
           Get a pointer to the value that is returned by the
           thread.deallocated mallctl. This is useful for avoiding the
           overhead of repeated mallctl*() calls. Note that the underlying
           counter should not be modified by the application.
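
           A sketch of the intended use (assuming <stdint.h> is included):
           fetch the pointer once, then read it directly on hot paths:

               uint64_t *allocatedp;
               size_t len = sizeof(allocatedp);
               mallctl("thread.allocatedp", (void *)&allocatedp, &len,
                   NULL, 0);
               /* Later reads avoid further mallctl*() calls; the counter
                  must only be read, never written. */
               uint64_t total = *allocatedp;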

       thread.peak.read (uint64_t) r- [--enable-stats]
           Get an approximation of the maximum value of the difference
           between the number of bytes allocated and the number of bytes
           deallocated by the calling thread since the last call to
           thread.peak.reset, or since the thread's creation if it has not
           called thread.peak.reset. No guarantees are made about the
           quality of the approximation, but jemalloc currently endeavors
           to maintain accuracy to within one hundred kilobytes.

       thread.peak.reset (void) -- [--enable-stats]
           Resets the counter for net bytes allocated in the calling
           thread to zero. This affects subsequent calls to
           thread.peak.read, but not the values returned by
           thread.allocated or thread.deallocated.

       thread.tcache.enabled (bool) rw
           Enable/disable calling thread's tcache. The tcache is
           implicitly flushed as a side effect of becoming disabled (see
           thread.tcache.flush).

       thread.tcache.flush (void) --
           Flush calling thread's thread-specific cache (tcache). This
           interface releases all cached objects and internal data
           structures associated with the calling thread's tcache.
           Ordinarily, this interface need not be called, since automatic
           periodic incremental garbage collection occurs, and the thread
           cache is automatically discarded when a thread exits. However,
           garbage collection is triggered by allocation activity, so it
           is possible for a thread that stops allocating/deallocating to
           retain its cache indefinitely, in which case the developer may
           find manual flushing useful.

       thread.prof.name (const char *) r- or -w [--enable-prof]
           Get/set the descriptive name associated with the calling thread
           in memory profile dumps. An internal copy of the name string is
           created, so the input string need not be maintained after this
           interface completes execution. The output string of this
           interface should be copied for non-ephemeral uses, because
           multiple implementation details can cause asynchronous string
           deallocation. Furthermore, each invocation of this interface
           can only read or write; simultaneous read/write is not
           supported due to string lifetime limitations. The name string
           must be nil-terminated and comprised only of characters in the
           sets recognized by isgraph(3) and isblank(3).

       thread.prof.active (bool) rw [--enable-prof]
           Control whether sampling is currently active for the calling
           thread. This is an activation mechanism in addition to
           prof.active; both must be active for the calling thread to
           sample. This flag is enabled by default.

       thread.idle (void) --
           Hints to jemalloc that the calling thread will be idle for some
           nontrivial period of time (say, on the order of seconds), and
           that doing some cleanup operations may be beneficial. There are
           no guarantees as to what specific operations will be performed;
           currently this flushes the caller's tcache and may (according
           to some heuristic) purge its associated arena.

           This is not intended to be a general-purpose background
           activity mechanism, and threads should not wake up multiple
           times solely to call it. Rather, a thread waiting for a task
           should do a timed wait first, call thread.idle if no task
           appears in the timeout interval, and then do an untimed wait.
           For such a background activity mechanism, see
           background_thread.

       tcache.create (unsigned) r-
           Create an explicit thread-specific cache (tcache) and return an
           identifier that can be passed to the MALLOCX_TCACHE(tc) macro
           to explicitly use the specified cache rather than the
           automatically managed one that is used by default. Each
           explicit cache can be used by only one thread at a time; the
           application must assure that this constraint holds.

           If the amount of space supplied for storing the thread-specific
           cache identifier does not equal sizeof(unsigned), no
           thread-specific cache will be created, no data will be written
           to the space pointed to by oldp, and *oldlenp will be set to 0.
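
           For example (a sketch; error handling for the allocation itself
           is elided):

               unsigned tc;
               size_t len = sizeof(tc);
               if (mallctl("tcache.create", &tc, &len, NULL, 0) == 0) {
                       /* Allocate and free through the explicit tcache. */
                       void *p = mallocx(4096, MALLOCX_TCACHE(tc));
                       dallocx(p, MALLOCX_TCACHE(tc));
                       /* Release the cache and recycle its identifier. */
                       mallctl("tcache.destroy", NULL, NULL, &tc,
                           sizeof(tc));
               }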

       tcache.flush (unsigned) -w
           Flush the specified thread-specific cache (tcache). The same
           considerations apply to this interface as to
           thread.tcache.flush, except that the tcache will never be
           automatically discarded.

       tcache.destroy (unsigned) -w
           Flush the specified thread-specific cache (tcache) and make the
           identifier available for use during a future tcache creation.

       arena.<i>.initialized (bool) r-
           Get whether the specified arena's statistics are initialized
           (i.e. the arena was initialized prior to the current epoch).
           This interface can also be nominally used to query whether the
           merged statistics corresponding to MALLCTL_ARENAS_ALL are
           initialized (always true).

       arena.<i>.decay (void) --
           Trigger decay-based purging of unused dirty/muzzy pages for
           arena <i>, or for all arenas if <i> equals MALLCTL_ARENAS_ALL.
           The proportion of unused dirty/muzzy pages to be purged depends
           on the current time; see opt.dirty_decay_ms and
           opt.muzzy_decay_ms for details.

       arena.<i>.purge (void) --
           Purge all unused dirty pages for arena <i>, or for all arenas
           if <i> equals MALLCTL_ARENAS_ALL.

       arena.<i>.reset (void) --
           Discard all of the arena's extant allocations. This interface
           can only be used with arenas explicitly created via
           arenas.create. None of the arena's discarded/cached allocations
           may be accessed afterward. As part of this requirement, all
           thread caches which were used to allocate/deallocate in
           conjunction with the arena must be flushed beforehand.

       arena.<i>.destroy (void) --
           Destroy the arena. Discard all of the arena's extant
           allocations using the same mechanism as for arena.<i>.reset
           (with all the same constraints and side effects), merge the
           arena stats into those accessible at arena index
           MALLCTL_ARENAS_DESTROYED, and then completely discard all
           metadata associated with the arena. Future calls to
           arenas.create may recycle the arena index. Destruction will
           fail if any threads are currently associated with the arena as
           a result of calls to thread.arena.

       arena.<i>.dss (const char *) rw
           Set the precedence of dss allocation as related to mmap
           allocation for arena <i>, or for all arenas if <i> equals
           MALLCTL_ARENAS_ALL. See opt.dss for supported settings.

       arena.<i>.dirty_decay_ms (ssize_t) rw
           Current per-arena approximate time in milliseconds from the
           creation of a set of unused dirty pages until an equivalent set
           of unused dirty pages is purged and/or reused. Each time this
           interface is set, all currently unused dirty pages are
           considered to have fully decayed, which causes immediate
           purging of all unused dirty pages unless the decay time is set
           to -1 (i.e. purging disabled). See opt.dirty_decay_ms for
           additional information.

       arena.<i>.muzzy_decay_ms (ssize_t) rw
           Current per-arena approximate time in milliseconds from the
           creation of a set of unused muzzy pages until an equivalent set
           of unused muzzy pages is purged and/or reused. Each time this
           interface is set, all currently unused muzzy pages are
           considered to have fully decayed, which causes immediate
           purging of all unused muzzy pages unless the decay time is set
           to -1 (i.e. purging disabled). See opt.muzzy_decay_ms for
           additional information.

       arena.<i>.retain_grow_limit (size_t) rw
           Maximum size to grow the retained region (only relevant when
           opt.retain is enabled). This controls the maximum increment to
           expand virtual memory, or allocation through
           arena.<i>.extent_hooks. In particular, if customized extent
           hooks reserve physical memory (e.g. 1G huge pages), this is
           useful to control the allocation hook's input size. The default
           is no limit.

       arena.<i>.extent_hooks (extent_hooks_t *) rw
           Get or set the extent management hook functions for arena <i>.
           The functions must be capable of operating on all extant
           extents associated with arena <i>, usually by passing unknown
           extents to the replaced functions. In practice, it is feasible
           to control allocation for arenas explicitly created via
           arenas.create such that all extents originate from an
           application-supplied extent allocator (by specifying the custom
           extent hook functions during arena creation). However, the API
           guarantees for the automatically created arenas may be relaxed
           -- hooks set there may be called in a "best effort" fashion; in
           addition there may be extents created prior to the application
           having an opportunity to take over extent allocation.

               typedef struct extent_hooks_s extent_hooks_t;
               struct extent_hooks_s {
                       extent_alloc_t    *alloc;
                       extent_dalloc_t   *dalloc;
                       extent_destroy_t  *destroy;
                       extent_commit_t   *commit;
                       extent_decommit_t *decommit;
                       extent_purge_t    *purge_lazy;
                       extent_purge_t    *purge_forced;
                       extent_split_t    *split;
                       extent_merge_t    *merge;
               };

           The extent_hooks_t structure comprises function pointers which
           are described individually below. jemalloc uses these functions
           to manage extent lifetime, which starts off with allocation of
           mapped committed memory, in the simplest case followed by
           deallocation. However, there are performance and platform
           reasons to retain extents for later reuse. Cleanup attempts
           cascade from deallocation to decommit to forced purging to lazy
           purging, which gives the extent management functions
           opportunities to reject the most permanent cleanup operations
           in favor of less permanent (and often less costly) operations.
           All operations except allocation can be universally opted out
           of by setting the hook pointers to NULL, or selectively opted
           out of by returning failure. Note that once the extent hook is
           set, the structure is accessed directly by the associated
           arenas, so it must remain valid for the entire lifetime of the
           arenas.

               typedef void *(extent_alloc_t)(extent_hooks_t *extent_hooks,
                                              void *new_addr, size_t size,
                                              size_t alignment, bool *zero,
                                              bool *commit,
                                              unsigned arena_ind);

           An extent allocation function conforms to the extent_alloc_t
           type and upon success returns a pointer to size bytes of mapped
           memory on behalf of arena arena_ind such that the extent's base
           address is a multiple of alignment, as well as setting *zero to
           indicate whether the extent is zeroed and *commit to indicate
           whether the extent is committed. Upon error the function
           returns NULL and leaves *zero and *commit unmodified. The size
           parameter is always a multiple of the page size. The alignment
           parameter is always a power of two at least as large as the
           page size. Zeroing is mandatory if *zero is true upon function
           entry. Committing is mandatory if *commit is true upon function
           entry. If new_addr is not NULL, the returned pointer must be
           new_addr on success or NULL on error. Committed memory may be
           committed in absolute terms as on a system that does not
           overcommit, or in implicit terms as on a system that
           overcommits and satisfies physical memory needs on demand via
           soft page faults. Note that replacing the default extent
           allocation function makes the arena's arena.<i>.dss setting
           irrelevant.

               typedef bool (extent_dalloc_t)(extent_hooks_t *extent_hooks,
                                              void *addr, size_t size,
                                              bool committed,
                                              unsigned arena_ind);

           An extent deallocation function conforms to the extent_dalloc_t
           type and deallocates an extent at given addr and size with
           committed/decommitted memory as indicated, on behalf of arena
           arena_ind, returning false upon success. If the function
           returns true, this indicates opt-out from deallocation; the
           virtual memory mapping associated with the extent remains
           mapped, in the same commit state, and available for future use,
           in which case it will be automatically retained for later
           reuse.

               typedef void (extent_destroy_t)(extent_hooks_t *extent_hooks,
                                               void *addr, size_t size,
                                               bool committed,
                                               unsigned arena_ind);

           An extent destruction function conforms to the extent_destroy_t
           type and unconditionally destroys an extent at given addr and
           size with committed/decommitted memory as indicated, on behalf
           of arena arena_ind. This function may be called to destroy
           retained extents during arena destruction (see
           arena.<i>.destroy).

               typedef bool (extent_commit_t)(extent_hooks_t *extent_hooks,
                                              void *addr, size_t size,
                                              size_t offset, size_t length,
                                              unsigned arena_ind);

           An extent commit function conforms to the extent_commit_t type
           and commits zeroed physical memory to back pages within an
           extent at given addr and size at offset bytes, extending for
           length on behalf of arena arena_ind, returning false upon
           success. Committed memory may be committed in absolute terms as
           on a system that does not overcommit, or in implicit terms as
           on a system that overcommits and satisfies physical memory
           needs on demand via soft page faults. If the function returns
           true, this indicates insufficient physical memory to satisfy
           the request.

               typedef bool (extent_decommit_t)(extent_hooks_t *extent_hooks,
                                                void *addr, size_t size,
                                                size_t offset, size_t length,
                                                unsigned arena_ind);

           An extent decommit function conforms to the extent_decommit_t
           type and decommits any physical memory that is backing pages
           within an extent at given addr and size at offset bytes,
           extending for length on behalf of arena arena_ind, returning
           false upon success, in which case the pages will be committed
           via the extent commit function before being reused. If the
           function returns true, this indicates opt-out from decommit;
           the memory remains committed and available for future use, in
           which case it will be automatically retained for later reuse.
1165
1166 typedef bool (extent_purge_t)(extent_hooks_t *extent_hooks,
1167 void *addr, size_t size,
1168 size_t offset, size_t length,
1169 unsigned arena_ind);
1170
1171
1172 An extent purge function conforms to the extent_purge_t type and
1173 discards physical pages within the virtual memory mapping
1174 associated with an extent at given addr and size at offset bytes,
1175 extending for length on behalf of arena arena_ind. A lazy extent
1176 purge function (e.g. implemented via madvise(...MADV_FREE)) can
1177 delay purging indefinitely and leave the pages within the purged
1178 virtual memory range in an indeterminate state, whereas a forced
1179 extent purge function immediately purges, and the pages within the
1180 virtual memory range will be zero-filled the next time they are
1181 accessed. If the function returns true, this indicates failure to
1182 purge.
1183
1184 typedef bool (extent_split_t)(extent_hooks_t *extent_hooks,
1185 void *addr, size_t size,
1186 size_t size_a, size_t size_b,
1187 bool committed, unsigned arena_ind);
1188
1189
1190 An extent split function conforms to the extent_split_t type and
1191 optionally splits an extent at given addr and size into two
1192 adjacent extents, the first of size_a bytes, and the second of
1193 size_b bytes, operating on committed/decommitted memory as
1194 indicated, on behalf of arena arena_ind, returning false upon
1195 success. If the function returns true, this indicates that the
1196 extent remains unsplit and therefore should continue to be operated
1197 on as a whole.
1198
1199 typedef bool (extent_merge_t)(extent_hooks_t *extent_hooks,
1200 void *addr_a, size_t size_a,
1201 void *addr_b, size_t size_b,
1202 bool committed, unsigned arena_ind);
1203
1204
1205 An extent merge function conforms to the extent_merge_t type and
1206 optionally merges adjacent extents, at given addr_a and size_a with
1207 given addr_b and size_b into one contiguous extent, operating on
1208 committed/decommitted memory as indicated, on behalf of arena
1209 arena_ind, returning false upon success. If the function returns
1210 true, this indicates that the extents remain distinct mappings and
1211 therefore should continue to be operated on independently.
1212
1213 arenas.narenas (unsigned) r-
1214 Current limit on number of arenas.
1215
1216 arenas.dirty_decay_ms (ssize_t) rw
1217 Current default per-arena approximate time in milliseconds from the
1218 creation of a set of unused dirty pages until an equivalent set of
1219 unused dirty pages is purged and/or reused, used to initialize
1220 arena.<i>.dirty_decay_ms during arena creation. See
1221 opt.dirty_decay_ms for additional information.
1222
1223 arenas.muzzy_decay_ms (ssize_t) rw
1224 Current default per-arena approximate time in milliseconds from the
1225 creation of a set of unused muzzy pages until an equivalent set of
1226 unused muzzy pages is purged and/or reused, used to initialize
1227 arena.<i>.muzzy_decay_ms during arena creation. See
1228 opt.muzzy_decay_ms for additional information.
1229
1230 arenas.quantum (size_t) r-
1231 Quantum size.
1232
1233 arenas.page (size_t) r-
1234 Page size.
1235
1236 arenas.tcache_max (size_t) r-
1237 Maximum thread-cached size class.
1238
1239 arenas.nbins (unsigned) r-
1240 Number of bin size classes.
1241
1242 arenas.nhbins (unsigned) r-
1243 Total number of thread cache bin size classes.
1244
1245 arenas.bin.<i>.size (size_t) r-
1246 Maximum size supported by size class.
1247
1248 arenas.bin.<i>.nregs (uint32_t) r-
1249 Number of regions per slab.
1250
1251 arenas.bin.<i>.slab_size (size_t) r-
1252 Number of bytes per slab.
1253
1254 arenas.nlextents (unsigned) r-
1255 Total number of large size classes.
1256
1257 arenas.lextent.<i>.size (size_t) r-
1258 Maximum size supported by this large size class.
1259
1260 arenas.create (unsigned, extent_hooks_t *) rw
1261 Explicitly create a new arena outside the range of automatically
1262 managed arenas, with optionally specified extent hooks, and return
1263 the new arena index.
1264
1265 If the amount of space supplied for storing the arena index does
1266 not equal sizeof(unsigned), no arena will be created, no data will
1267 be written to the space pointed by oldp, and *oldlenp will be set
1268 to 0.
1269
1270 arenas.lookup (unsigned, void*) rw
1271 Index of the arena to which an allocation belongs; the allocation's pointer is passed via newp.
1272
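     The following sketch (function name hypothetical) creates an arena
     with the default extent hooks, allocates from it explicitly, and maps
     the pointer back to its arena index via arenas.lookup:

         #include <stdio.h>
         #include <jemalloc/jemalloc.h>

         void
         arena_demo(void) {
                 unsigned arena_ind;
                 size_t sz = sizeof(arena_ind);
                 /* NULL newp keeps the default extent hooks. */
                 mallctl("arenas.create", &arena_ind, &sz, NULL, 0);

                 /* Allocate explicitly from the new arena. */
                 void *p = mallocx(4096,
                     MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);

                 /* Map the pointer back to its owning arena via newp. */
                 unsigned owner;
                 sz = sizeof(owner);
                 mallctl("arenas.lookup", &owner, &sz, &p, sizeof(p));
                 printf("arena %u, lookup reports %u\n", arena_ind, owner);

                 dallocx(p, MALLOCX_TCACHE_NONE);
         }
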
1273 prof.thread_active_init (bool) rw [--enable-prof]
1274 Control the initial setting for thread.prof.active in newly created
1275 threads. See the opt.prof_thread_active_init option for additional
1276 information.
1277
1278 prof.active (bool) rw [--enable-prof]
1279 Control whether sampling is currently active. See the
1280 opt.prof_active option for additional information, as well as the
1281 interrelated thread.prof.active mallctl.
1282
1283 prof.dump (const char *) -w [--enable-prof]
1284 Dump a memory profile to the specified file, or if NULL is
1285 specified, to a file according to the pattern
1286 <prefix>.<pid>.<seq>.m<mseq>.heap, where <prefix> is controlled by
1287 the opt.prof_prefix and prof.prefix options.
1288
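     For example, a program built with --enable-prof might trigger dumps
     at points of interest as sketched below:

         #include <jemalloc/jemalloc.h>

         void
         dump_profile(void) {
                 /* Dump to an explicitly named file... */
                 const char *filename = "app.heap";
                 mallctl("prof.dump", NULL, NULL, &filename,
                     sizeof(filename));

                 /* ...or let jemalloc pick <prefix>.<pid>.<seq>.m<mseq>.heap. */
                 mallctl("prof.dump", NULL, NULL, NULL, 0);
         }
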
1289 prof.prefix (const char *) -w [--enable-prof]
1290 Set the filename prefix for profile dumps. See opt.prof_prefix for
1291 the default setting. This can be useful to differentiate profile
1292 dumps such as from forked processes.
1293
1294 prof.gdump (bool) rw [--enable-prof]
1295 When enabled, trigger a memory profile dump every time the total
1296 virtual memory exceeds the previous maximum. Profiles are dumped to
1297 files named according to the pattern
1298 <prefix>.<pid>.<seq>.u<useq>.heap, where <prefix> is controlled by
1299 the opt.prof_prefix and prof.prefix options.
1300
1301 prof.reset (size_t) -w [--enable-prof]
1302 Reset all memory profile statistics, and optionally update the
1303 sample rate (see opt.lg_prof_sample and prof.lg_sample).
1304
1305 prof.lg_sample (size_t) r- [--enable-prof]
1306 Get the current sample rate (see opt.lg_prof_sample).
1307
1308 prof.interval (uint64_t) r- [--enable-prof]
1309 Average number of bytes allocated between interval-based profile
1310 dumps. See the opt.lg_prof_interval option for additional
1311 information.
1312
1313 stats.allocated (size_t) r- [--enable-stats]
1314 Total number of bytes allocated by the application.
1315
1316 stats.active (size_t) r- [--enable-stats]
1317 Total number of bytes in active pages allocated by the application.
1318 This is a multiple of the page size, and greater than or equal to
1319 stats.allocated. This does not include stats.arenas.<i>.pdirty,
1320 stats.arenas.<i>.pmuzzy, nor pages entirely devoted to allocator
1321 metadata.
1322
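     Statistics are cached and refreshed only when the epoch mallctl is
     advanced, so a typical reader looks like this sketch (assuming a
     build with --enable-stats):

         #include <stdint.h>
         #include <stdio.h>
         #include <jemalloc/jemalloc.h>

         void
         print_usage(void) {
                 /* Advance the epoch so cached statistics are refreshed. */
                 uint64_t epoch = 1;
                 size_t sz = sizeof(epoch);
                 mallctl("epoch", &epoch, &sz, &epoch, sz);

                 size_t allocated, active;
                 sz = sizeof(size_t);
                 if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0 &&
                     mallctl("stats.active", &active, &sz, NULL, 0) == 0) {
                         printf("allocated: %zu, active: %zu\n",
                             allocated, active);
                 }
         }
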
1323 stats.metadata (size_t) r- [--enable-stats]
1324 Total number of bytes dedicated to metadata, which comprise base
1325 allocations used for bootstrap-sensitive allocator metadata
1326 structures (see stats.arenas.<i>.base) and internal allocations
1327 (see stats.arenas.<i>.internal). Transparent huge page (enabled
1328 with opt.metadata_thp) usage is not considered.
1329
1330 stats.metadata_thp (size_t) r- [--enable-stats]
1331 Number of transparent huge pages (THP) used for metadata. See
1332 stats.metadata and opt.metadata_thp for details.
1333
1334 stats.resident (size_t) r- [--enable-stats]
1335 Maximum number of bytes in physically resident data pages mapped by
1336 the allocator, comprising all pages dedicated to allocator
1337 metadata, pages backing active allocations, and unused dirty pages.
1338 This is a maximum rather than precise because pages may not
1339 actually be physically resident if they correspond to demand-zeroed
1340 virtual memory that has not yet been touched. This is a multiple of
1341 the page size, and is larger than stats.active.
1342
1343 stats.mapped (size_t) r- [--enable-stats]
1344 Total number of bytes in active extents mapped by the allocator.
1345 This is larger than stats.active. This does not include inactive
1346 extents, even those that contain unused dirty pages, which means
1347 that there is no strict ordering between this and stats.resident.
1348
1349 stats.retained (size_t) r- [--enable-stats]
1350 Total number of bytes in virtual memory mappings that were retained
1351 rather than being returned to the operating system via e.g.
1352 munmap(2) or similar. Retained virtual memory is typically
1353 untouched, decommitted, or purged, so it has no strongly associated
1354 physical memory (see extent hooks for details). Retained memory is
1355 excluded from mapped memory statistics, e.g. stats.mapped.
1356
1357 stats.zero_reallocs (size_t) r- [--enable-stats]
1358 Number of times realloc() was called with a non-NULL
1359 pointer argument and a 0 size argument. This is a fundamentally
1360 unsafe pattern in portable programs; see opt.zero_realloc for
1361 details.
1362
1363 stats.background_thread.num_threads (size_t) r- [--enable-stats]
1364 Number of background threads currently running.
1365
1366 stats.background_thread.num_runs (uint64_t) r- [--enable-stats]
1367 Total number of runs from all background threads.
1368
1369 stats.background_thread.run_interval (uint64_t) r- [--enable-stats]
1370 Average run interval in nanoseconds of background threads.
1371
1372 stats.mutexes.ctl.{counter} (counter specific type) r-
1373 [--enable-stats]
1374 Statistics on ctl mutex (global scope; mallctl related). {counter}
1375 is one of the counters below:
1376
1377 num_ops (uint64_t): Total number of lock acquisition operations
1378 on this mutex.
1379
1380 num_spin_acq (uint64_t): Number of times the mutex was
1381 spin-acquired. When the mutex is locked and cannot be
1382 acquired immediately, jemalloc performs a short period of
1383 spin-retry. Acquisition through spinning generally means the
1384 contention was lightweight and did not cause context
1385 switches.
1386
1387 num_wait (uint64_t): Number of times the mutex was
1388 wait-acquired, which means the contention was not resolved
1389 by spin-retry and a blocking operation was likely required
1390 in order to acquire the mutex. This event generally implies
1391 higher cost and longer delay, and should be investigated if it
1392 happens often.
1393
1394 max_wait_time (uint64_t): Maximum length of time in nanoseconds
1395 spent on a single wait-acquired lock operation. Note that to
1396 avoid profiling overhead on the common path, this does not
1397 consider spin-acquired cases.
1398
1399 total_wait_time (uint64_t): Cumulative time in nanoseconds
1400 spent on wait-acquired lock operations. Similarly,
1401 spin-acquired cases are not considered.
1402
1403 max_num_thds (uint32_t): Maximum number of threads waiting on
1404 this mutex simultaneously. Similarly, spin-acquired cases are
1405 not considered.
1406
1407 num_owner_switch (uint64_t): Number of times the current mutex
1408 owner is different from the previous one. This event does not
1409 generally imply an issue; rather it is an indicator of how
1410 often the protected data are accessed by different threads.
1411
1412 stats.mutexes.background_thread.{counter} (counter specific type) r-
1413 [--enable-stats]
1414 Statistics on background_thread mutex (global scope;
1415 background_thread related). {counter} is one of the counters in
1416 mutex profiling counters.
1417
1418 stats.mutexes.prof.{counter} (counter specific type) r-
1419 [--enable-stats]
1420 Statistics on prof mutex (global scope; profiling related).
1421 {counter} is one of the counters in mutex profiling counters.
1422
1423 stats.mutexes.prof_thds_data.{counter} (counter specific type) r-
1424 [--enable-stats]
1425 Statistics on prof threads data mutex (global scope; profiling
1426 related). {counter} is one of the counters in mutex profiling
1427 counters.
1428
1429 stats.mutexes.prof_dump.{counter} (counter specific type) r-
1430 [--enable-stats]
1431 Statistics on prof dumping mutex (global scope; profiling related).
1432 {counter} is one of the counters in mutex profiling counters.
1433
1434 stats.mutexes.reset (void) -- [--enable-stats]
1435 Reset all mutex profile statistics, including global mutexes, arena
1436 mutexes and bin mutexes.
1437
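     As a sketch, the global ctl mutex counters can be read by name like
     any other statistic (advance the epoch mallctl first for fresh
     values) and then cleared via stats.mutexes.reset:

         #include <inttypes.h>
         #include <stdio.h>
         #include <jemalloc/jemalloc.h>

         void
         report_ctl_mutex(void) {
                 uint64_t num_ops, wait_ns;
                 size_t sz = sizeof(uint64_t);
                 mallctl("stats.mutexes.ctl.num_ops", &num_ops, &sz,
                     NULL, 0);
                 mallctl("stats.mutexes.ctl.total_wait_time", &wait_ns,
                     &sz, NULL, 0);
                 printf("ctl mutex: %" PRIu64 " ops, %" PRIu64 " ns waited\n",
                     num_ops, wait_ns);

                 /* Zero all mutex profiling counters. */
                 mallctl("stats.mutexes.reset", NULL, NULL, NULL, 0);
         }
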
1438 stats.arenas.<i>.dss (const char *) r-
1439 dss (sbrk(2)) allocation precedence as related to mmap(2)
1440 allocation. See opt.dss for details.
1441
1442 stats.arenas.<i>.dirty_decay_ms (ssize_t) r-
1443 Approximate time in milliseconds from the creation of a set of
1444 unused dirty pages until an equivalent set of unused dirty pages is
1445 purged and/or reused. See opt.dirty_decay_ms for details.
1446
1447 stats.arenas.<i>.muzzy_decay_ms (ssize_t) r-
1448 Approximate time in milliseconds from the creation of a set of
1449 unused muzzy pages until an equivalent set of unused muzzy pages is
1450 purged and/or reused. See opt.muzzy_decay_ms for details.
1451
1452 stats.arenas.<i>.nthreads (unsigned) r-
1453 Number of threads currently assigned to the arena.
1454
1455 stats.arenas.<i>.uptime (uint64_t) r-
1456 Time elapsed (in nanoseconds) since the arena was created. If <i>
1457 equals 0 or MALLCTL_ARENAS_ALL, this is the uptime since malloc
1458 initialization.
1459
1460 stats.arenas.<i>.pactive (size_t) r-
1461 Number of pages in active extents.
1462
1463 stats.arenas.<i>.pdirty (size_t) r-
1464 Number of pages within unused extents that are potentially dirty,
1465 and for which madvise() or similar has not been called. See
1466 opt.dirty_decay_ms for a description of dirty pages.
1467
1468 stats.arenas.<i>.pmuzzy (size_t) r-
1469 Number of pages within unused extents that are muzzy. See
1470 opt.muzzy_decay_ms for a description of muzzy pages.
1471
1472 stats.arenas.<i>.mapped (size_t) r- [--enable-stats]
1473 Number of mapped bytes.
1474
1475 stats.arenas.<i>.retained (size_t) r- [--enable-stats]
1476 Number of retained bytes. See stats.retained for details.
1477
1478 stats.arenas.<i>.extent_avail (size_t) r- [--enable-stats]
1479 Number of allocated (but unused) extent structs in this arena.
1480
1481 stats.arenas.<i>.base (size_t) r- [--enable-stats]
1482 Number of bytes dedicated to bootstrap-sensitive allocator metadata
1483 structures.
1484
1485 stats.arenas.<i>.internal (size_t) r- [--enable-stats]
1486 Number of bytes dedicated to internal allocations. Internal
1487 allocations differ from application-originated allocations in
1488 that they are for the allocator's own use and are omitted from
1489 heap profiles.
1490
1491 stats.arenas.<i>.metadata_thp (size_t) r- [--enable-stats]
1492 Number of transparent huge pages (THP) used for metadata. See
1493 opt.metadata_thp for details.
1494
1495 stats.arenas.<i>.resident (size_t) r- [--enable-stats]
1496 Maximum number of bytes in physically resident data pages mapped by
1497 the arena, comprising all pages dedicated to allocator metadata,
1498 pages backing active allocations, and unused dirty pages. This is a
1499 maximum rather than precise because pages may not actually be
1500 physically resident if they correspond to demand-zeroed virtual
1501 memory that has not yet been touched. This is a multiple of the
1502 page size.
1503
1504 stats.arenas.<i>.dirty_npurge (uint64_t) r- [--enable-stats]
1505 Number of dirty page purge sweeps performed.
1506
1507 stats.arenas.<i>.dirty_nmadvise (uint64_t) r- [--enable-stats]
1508 Number of madvise() or similar calls made to purge dirty pages.
1509
1510 stats.arenas.<i>.dirty_purged (uint64_t) r- [--enable-stats]
1511 Number of dirty pages purged.
1512
1513 stats.arenas.<i>.muzzy_npurge (uint64_t) r- [--enable-stats]
1514 Number of muzzy page purge sweeps performed.
1515
1516 stats.arenas.<i>.muzzy_nmadvise (uint64_t) r- [--enable-stats]
1517 Number of madvise() or similar calls made to purge muzzy pages.
1518
1519 stats.arenas.<i>.muzzy_purged (uint64_t) r- [--enable-stats]
1520 Number of muzzy pages purged.
1521
1522 stats.arenas.<i>.small.allocated (size_t) r- [--enable-stats]
1523 Number of bytes currently allocated by small objects.
1524
1525 stats.arenas.<i>.small.nmalloc (uint64_t) r- [--enable-stats]
1526 Cumulative number of times a small allocation was requested from
1527 the arena's bins, whether to fill the relevant tcache if opt.tcache
1528 is enabled, or to directly satisfy an allocation request otherwise.
1529
1530 stats.arenas.<i>.small.ndalloc (uint64_t) r- [--enable-stats]
1531 Cumulative number of times a small allocation was returned to the
1532 arena's bins, whether to flush the relevant tcache if opt.tcache is
1533 enabled, or to directly deallocate an allocation otherwise.
1534
1535 stats.arenas.<i>.small.nrequests (uint64_t) r- [--enable-stats]
1536 Cumulative number of allocation requests satisfied by all bin size
1537 classes.
1538
1539 stats.arenas.<i>.small.nfills (uint64_t) r- [--enable-stats]
1540 Cumulative number of tcache fills by all small size classes.
1541
1542 stats.arenas.<i>.small.nflushes (uint64_t) r- [--enable-stats]
1543 Cumulative number of tcache flushes by all small size classes.
1544
1545 stats.arenas.<i>.large.allocated (size_t) r- [--enable-stats]
1546 Number of bytes currently allocated by large objects.
1547
1548 stats.arenas.<i>.large.nmalloc (uint64_t) r- [--enable-stats]
1549 Cumulative number of times a large extent was allocated from the
1550 arena, whether to fill the relevant tcache if opt.tcache is enabled
1551 and the size class is within the range being cached, or to directly
1552 satisfy an allocation request otherwise.
1553
1554 stats.arenas.<i>.large.ndalloc (uint64_t) r- [--enable-stats]
1555 Cumulative number of times a large extent was returned to the
1556 arena, whether to flush the relevant tcache if opt.tcache is
1557 enabled and the size class is within the range being cached, or to
1558 directly deallocate an allocation otherwise.
1559
1560 stats.arenas.<i>.large.nrequests (uint64_t) r- [--enable-stats]
1561 Cumulative number of allocation requests satisfied by all large
1562 size classes.
1563
1564 stats.arenas.<i>.large.nfills (uint64_t) r- [--enable-stats]
1565 Cumulative number of tcache fills by all large size classes.
1566
1567 stats.arenas.<i>.large.nflushes (uint64_t) r- [--enable-stats]
1568 Cumulative number of tcache flushes by all large size classes.
1569
1570 stats.arenas.<i>.bins.<j>.nmalloc (uint64_t) r- [--enable-stats]
1571 Cumulative number of times a bin region of the corresponding size
1572 class was allocated from the arena, whether to fill the relevant
1573 tcache if opt.tcache is enabled, or to directly satisfy an
1574 allocation request otherwise.
1575
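     Because these names embed indices, repeated queries are usually
     performed through the MIB interface; a sketch that reads nmalloc for
     bin <j> of arena <i>:

         #include <stdint.h>
         #include <jemalloc/jemalloc.h>

         uint64_t
         bin_nmalloc(unsigned arena_ind, unsigned bin_ind) {
                 size_t mib[6];
                 size_t miblen = sizeof(mib) / sizeof(mib[0]);
                 mallctlnametomib("stats.arenas.0.bins.0.nmalloc", mib,
                     &miblen);
                 mib[2] = arena_ind; /* Patch in the arena index... */
                 mib[4] = bin_ind;   /* ...and the bin index. */

                 uint64_t nmalloc;
                 size_t sz = sizeof(nmalloc);
                 mallctlbymib(mib, miblen, &nmalloc, &sz, NULL, 0);
                 return nmalloc;
         }
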
1576 stats.arenas.<i>.bins.<j>.ndalloc (uint64_t) r- [--enable-stats]
1577 Cumulative number of times a bin region of the corresponding size
1578 class was returned to the arena, whether to flush the relevant
1579 tcache if opt.tcache is enabled, or to directly deallocate an
1580 allocation otherwise.
1581
1582 stats.arenas.<i>.bins.<j>.nrequests (uint64_t) r- [--enable-stats]
1583 Cumulative number of allocation requests satisfied by bin regions
1584 of the corresponding size class.
1585
1586 stats.arenas.<i>.bins.<j>.curregs (size_t) r- [--enable-stats]
1587 Current number of regions for this size class.
1588
1589 stats.arenas.<i>.bins.<j>.nfills (uint64_t) r-
1590 Cumulative number of tcache fills.
1591
1592 stats.arenas.<i>.bins.<j>.nflushes (uint64_t) r-
1593 Cumulative number of tcache flushes.
1594
1595 stats.arenas.<i>.bins.<j>.nslabs (uint64_t) r- [--enable-stats]
1596 Cumulative number of slabs created.
1597
1598 stats.arenas.<i>.bins.<j>.nreslabs (uint64_t) r- [--enable-stats]
1599 Cumulative number of times the current slab from which to allocate
1600 changed.
1601
1602 stats.arenas.<i>.bins.<j>.curslabs (size_t) r- [--enable-stats]
1603 Current number of slabs.
1604
1605 stats.arenas.<i>.bins.<j>.nonfull_slabs (size_t) r- [--enable-stats]
1606 Current number of nonfull slabs.
1607
1608 stats.arenas.<i>.bins.<j>.mutex.{counter} (counter specific type) r-
1609 [--enable-stats]
1610 Statistics on arena.<i>.bins.<j> mutex (arena bin scope; bin
1611 operation related). {counter} is one of the counters in mutex
1612 profiling counters.
1613
1614 stats.arenas.<i>.extents.<j>.n{extent_type} (size_t) r-
1615 [--enable-stats]
1616 Number of extents of the given type in this arena in the bucket
1617 corresponding to page size index <j>. The extent type is one of
1618 dirty, muzzy, or retained.
1619
1620 stats.arenas.<i>.extents.<j>.{extent_type}_bytes (size_t) r-
1621 [--enable-stats]
1622 Sum of the bytes managed by extents of the given type in this arena
1623 in the bucket corresponding to page size index <j>. The extent type
1624 is one of dirty, muzzy, or retained.
1625
1626 stats.arenas.<i>.lextents.<j>.nmalloc (uint64_t) r- [--enable-stats]
1627 Cumulative number of times a large extent of the corresponding size
1628 class was allocated from the arena, whether to fill the relevant
1629 tcache if opt.tcache is enabled and the size class is within the
1630 range being cached, or to directly satisfy an allocation request
1631 otherwise.
1632
1633 stats.arenas.<i>.lextents.<j>.ndalloc (uint64_t) r- [--enable-stats]
1634 Cumulative number of times a large extent of the corresponding size
1635 class was returned to the arena, whether to flush the relevant
1636 tcache if opt.tcache is enabled and the size class is within the
1637 range being cached, or to directly deallocate an allocation
1638 otherwise.
1639
1640 stats.arenas.<i>.lextents.<j>.nrequests (uint64_t) r- [--enable-stats]
1641 Cumulative number of allocation requests satisfied by large extents
1642 of the corresponding size class.
1643
1644 stats.arenas.<i>.lextents.<j>.curlextents (size_t) r- [--enable-stats]
1645 Current number of large allocations for this size class.
1646
1647 stats.arenas.<i>.mutexes.large.{counter} (counter specific type) r-
1648 [--enable-stats]
1649 Statistics on arena.<i>.large mutex (arena scope; large allocation
1650 related). {counter} is one of the counters in mutex profiling
1651 counters.
1652
1653 stats.arenas.<i>.mutexes.extent_avail.{counter} (counter specific type)
1654 r- [--enable-stats]
1655 Statistics on arena.<i>.extent_avail mutex (arena scope; extent
1656 avail related). {counter} is one of the counters in mutex
1657 profiling counters.
1658
1659 stats.arenas.<i>.mutexes.extents_dirty.{counter} (counter specific
1660 type) r- [--enable-stats]
1661 Statistics on arena.<i>.extents_dirty mutex (arena scope; dirty
1662 extents related). {counter} is one of the counters in mutex
1663 profiling counters.
1664
1665 stats.arenas.<i>.mutexes.extents_muzzy.{counter} (counter specific
1666 type) r- [--enable-stats]
1667 Statistics on arena.<i>.extents_muzzy mutex (arena scope; muzzy
1668 extents related). {counter} is one of the counters in mutex
1669 profiling counters.
1670
1671 stats.arenas.<i>.mutexes.extents_retained.{counter} (counter specific
1672 type) r- [--enable-stats]
1673 Statistics on arena.<i>.extents_retained mutex (arena scope;
1674 retained extents related). {counter} is one of the counters in
1675 mutex profiling counters.
1676
1677 stats.arenas.<i>.mutexes.decay_dirty.{counter} (counter specific type)
1678 r- [--enable-stats]
1679 Statistics on arena.<i>.decay_dirty mutex (arena scope; decay for
1680 dirty pages related). {counter} is one of the counters in mutex
1681 profiling counters.
1682
1683 stats.arenas.<i>.mutexes.decay_muzzy.{counter} (counter specific type)
1684 r- [--enable-stats]
1685 Statistics on arena.<i>.decay_muzzy mutex (arena scope; decay for
1686 muzzy pages related). {counter} is one of the counters in mutex
1687 profiling counters.
1688
1689 stats.arenas.<i>.mutexes.base.{counter} (counter specific type) r-
1690 [--enable-stats]
1691 Statistics on arena.<i>.base mutex (arena scope; base allocator
1692 related). {counter} is one of the counters in mutex profiling
1693 counters.
1694
1695 stats.arenas.<i>.mutexes.tcache_list.{counter} (counter specific type)
1696 r- [--enable-stats]
1697 Statistics on arena.<i>.tcache_list mutex (arena scope; tcache to
1698 arena association related). This mutex is expected to be
1699 accessed relatively infrequently. {counter} is one of the
1700 counters in mutex profiling counters.
1701
1702HEAP PROFILE FORMAT
1703 Although the heap profiling functionality was originally designed to be
1704 compatible with the pprof command that is developed as part of the
1705 gperftools package[3], the addition of per thread heap profiling
1706 functionality required a different heap profile format. The jeprof
1707 command is derived from pprof, with enhancements to support the heap
1708 profile format described here.
1709
1710 In the following hypothetical heap profile, [...] indicates elision
1711 for the sake of compactness.
1712
1713 heap_v2/524288
1714 t*: 28106: 56637512 [0: 0]
1715 [...]
1716 t3: 352: 16777344 [0: 0]
1717 [...]
1718 t99: 17754: 29341640 [0: 0]
1719 [...]
1720 @ 0x5f86da8 0x5f5a1dc [...] 0x29e4d4e 0xa200316 0xabb2988 [...]
1721 t*: 13: 6688 [0: 0]
1722 t3: 12: 6496 [0: 0]
1723 t99: 1: 192 [0: 0]
1724 [...]
1725
1726 MAPPED_LIBRARIES:
1727 [...]
1728
1729 The following matches the above heap profile, but most tokens are
1730 replaced with <description> to indicate descriptions of the
1731 corresponding fields.
1732
1733 <heap_profile_format_version>/<mean_sample_interval>
1734 <aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1735 [...]
1736 <thread_3_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1737 [...]
1738 <thread_99_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1739 [...]
1740 @ <top_frame> <frame> [...] <frame> <frame> <frame> [...]
1741 <backtrace_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1742 <backtrace_thread_3>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1743 <backtrace_thread_99>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1744 [...]
1745
1746 MAPPED_LIBRARIES:
1747 </proc/<pid>/maps>
1748
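     Such profiles are typically inspected with the jeprof command, as
     sketched below for a hypothetical executable ./app and a manually
     dumped profile named with the default jeprof prefix (<pid> stands in
     for the actual process id):

         jeprof --text ./app jeprof.<pid>.0.m0.heap
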
1749DEBUGGING MALLOC PROBLEMS
1750 When debugging, it is a good idea to configure/build jemalloc with the
1751 --enable-debug and --enable-fill options, and recompile the program
1752 with suitable options and symbols for debugger support. When so
1753 configured, jemalloc incorporates a wide variety of run-time assertions
1754 that catch application errors such as double-free, write-after-free,
1755 etc.
1756
1757 Programs often accidentally depend on “uninitialized” memory actually
1758 being filled with zero bytes. Junk filling (see the opt.junk option)
1759 tends to expose such bugs in the form of obviously incorrect results
1760 and/or coredumps. Conversely, zero filling (see the opt.zero option)
1761 eliminates the symptoms of such bugs. Between these two options, it is
1762 usually possible to quickly detect, diagnose, and eliminate such bugs.
1763
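     For instance, a build configured with --enable-fill might be
     exercised for a single run with junk filling and abort-on-warning
     enabled:

         MALLOC_CONF="abort:true,junk:true" ./a.out
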
1764 This implementation does not provide much detail about the problems it
1765 detects, because the performance impact for storing such information
1766 would be prohibitive.
1767
1768DIAGNOSTIC MESSAGES
1769 If any of the memory allocation/deallocation functions detect an error
1770 or warning condition, a message will be printed to file descriptor
1771 STDERR_FILENO. Errors will result in the process dumping core. If the
1772 opt.abort option is set, most warnings are treated as errors.
1773
1774 The malloc_message variable allows the programmer to override the
1775 function which emits the text strings forming the errors and warnings
1776 if for some reason the STDERR_FILENO file descriptor is not suitable
1777 for this. malloc_message() takes the cbopaque pointer argument that is
1778 NULL unless overridden by the arguments in a call to
1779 malloc_stats_print(), followed by a string pointer. Please note that
1780 doing anything which tries to allocate memory in this function is
1781 likely to result in a crash or deadlock.
1782
1783 All messages are prefixed by “<jemalloc>: ”.
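
     A sketch of such an override, writing to a file descriptor without
     allocating (the my_write_cb name is hypothetical):

         #include <string.h>
         #include <unistd.h>
         #include <jemalloc/jemalloc.h>

         /* Must not allocate; write(2) and strlen() are safe here. */
         static void
         my_write_cb(void *cbopaque, const char *s) {
                 (void)cbopaque; /* NULL except via malloc_stats_print(). */
                 (void)write(STDOUT_FILENO, s, strlen(s));
         }

         /* Installed once, early in main(): malloc_message = my_write_cb; */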
1784
1785RETURN VALUES
1786 Standard API
1787 The malloc() and calloc() functions return a pointer to the allocated
1788 memory if successful; otherwise a NULL pointer is returned and errno is
1789 set to ENOMEM.
1790
1791 The posix_memalign() function returns the value 0 if successful;
1792 otherwise it returns an error value. The posix_memalign() function will
1793 fail if:
1794
1795 EINVAL
1796 The alignment parameter is not a power of 2 at least as large as
1797 sizeof(void *).
1798
1799 ENOMEM
1800 Memory allocation error.
1801
1802 The aligned_alloc() function returns a pointer to the allocated memory
1803 if successful; otherwise a NULL pointer is returned and errno is set.
1804 The aligned_alloc() function will fail if:
1805
1806 EINVAL
1807 The alignment parameter is not a power of 2.
1808
1809 ENOMEM
1810 Memory allocation error.
1811
1812 The realloc() function returns a pointer, possibly identical to ptr, to
1813 the allocated memory if successful; otherwise a NULL pointer is
1814 returned, and errno is set to ENOMEM if the error was the result of an
1815 allocation failure. The realloc() function always leaves the original
1816 buffer intact when an error occurs.
1817
1818 The free() function returns no value.
1819
1820 Non-standard API
1821 The mallocx() and rallocx() functions return a pointer to the allocated
1822 memory if successful; otherwise a NULL pointer is returned to indicate
1823 insufficient contiguous memory was available to service the allocation
1824 request.
1825
1826 The xallocx() function returns the real size of the resulting resized
1827 allocation pointed to by ptr, which is a value less than size if the
1828 allocation could not be adequately grown in place.
1829
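     This enables a try-in-place-then-move resize pattern, sketched here:

         #include <stdbool.h>
         #include <stddef.h>
         #include <jemalloc/jemalloc.h>

         /* Grow *pptr to at least new_size; false on failure. */
         bool
         try_grow(void **pptr, size_t new_size) {
                 if (xallocx(*pptr, new_size, 0, 0) >= new_size)
                         return true; /* Grown in place; pointer unchanged. */
                 void *q = rallocx(*pptr, new_size, 0); /* May move. */
                 if (q == NULL)
                         return false; /* Original allocation left intact. */
                 *pptr = q;
                 return true;
         }
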
1830 The sallocx() function returns the real size of the allocation pointed
1831 to by ptr.
1832
1833 The nallocx() function returns the real size that would result from
1834 a successful equivalent mallocx() function call, or zero if
1835 insufficient memory is available to perform the size computation.
1836
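     One use is sizing a buffer to the size class that will actually be
     allocated, so that no usable space is wasted (sketch):

         #include <jemalloc/jemalloc.h>

         void *
         alloc_rounded(size_t *sizep) {
                 /* Round the request up to its real size class first. */
                 size_t real = nallocx(*sizep, 0); /* e.g. 1000 -> 1024 */
                 if (real == 0)
                         return NULL;
                 *sizep = real;                /* Whole class is usable. */
                 return mallocx(real, 0);
         }
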
1837 The mallctl(), mallctlnametomib(), and mallctlbymib() functions return
1838 0 on success; otherwise they return an error value. The functions will
1839 fail if:
1840
1841 EINVAL
1842 newp is not NULL, and newlen is too large or too small.
1843 Alternatively, *oldlenp is too large or too small; in that case,
1844 except for a few cases explicitly documented otherwise, as much
1845 data as possible is read despite the error, with the amount of
1846 data read recorded in *oldlenp.
1847
1848 ENOENT
1849 name or mib specifies an unknown/invalid value.
1850
1851 EPERM
1852 Attempt to read or write void value, or attempt to write read-only
1853 value.
1854
1855 EAGAIN
1856 A memory allocation failure occurred.
1857
1858 EFAULT
1859 An interface with side effects failed in some way not directly
1860 related to mallctl*() read/write processing.
1861
1862 The malloc_usable_size() function returns the usable size of the
1863 allocation pointed to by ptr.
1864
1865ENVIRONMENT
1866 The following environment variable affects the execution of the
1867 allocation functions:
1868
1869 MALLOC_CONF
1870 If the environment variable MALLOC_CONF is set, the characters it
1871 contains will be interpreted as options.
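
     For example, to print allocator statistics at exit for a single run
     (using the opt.stats_print option):

         MALLOC_CONF="stats_print:true" ./a.out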
1872
1873EXAMPLES
1874 To dump core whenever a problem occurs:
1875
1876 ln -s 'abort:true' /etc/malloc.conf
1877
1878 To specify in the source that only one arena should be automatically
1879 created:
1880
1881 malloc_conf = "narenas:1";
1882
1883SEE ALSO
1884 madvise(2), mmap(2), sbrk(2), utrace(2), alloca(3), atexit(3),
1885 getpagesize(3)
1886
1887STANDARDS
1888 The malloc(), calloc(), realloc(), and free() functions conform to
1889 ISO/IEC 9899:1990 (“ISO C90”).
1890
1891 The posix_memalign() function conforms to IEEE Std 1003.1-2001
1892 (“POSIX.1”).
1893
1894AUTHOR
1895 Jason Evans
1896
1897NOTES
1898 1. jemalloc website
1899 http://jemalloc.net/
1900
1901 2. JSON format
1902 http://www.json.org/
1903
1904 3. gperftools package
1905 http://code.google.com/p/gperftools/
1906
1907
1908
1909jemalloc 5.3.0-0-g54eaed1d8b56 05/06/2022 JEMALLOC(3)