JEMALLOC(3)                      User Manual                      JEMALLOC(3)

NAME
   jemalloc - general purpose memory allocation functions

LIBRARY
   This manual describes jemalloc
   5.2.1-0-gea6b3e973b477b8061e0076bb257dbd7f3faa756. More information can
   be found at the jemalloc website[1].

SYNOPSIS
   #include <jemalloc/jemalloc.h>

   Standard API
   void *malloc(size_t size);

   void *calloc(size_t number, size_t size);

   int posix_memalign(void **ptr, size_t alignment, size_t size);

   void *aligned_alloc(size_t alignment, size_t size);

   void *realloc(void *ptr, size_t size);

   void free(void *ptr);

   Non-standard API
   void *mallocx(size_t size, int flags);

   void *rallocx(void *ptr, size_t size, int flags);

   size_t xallocx(void *ptr, size_t size, size_t extra, int flags);

   size_t sallocx(void *ptr, int flags);

   void dallocx(void *ptr, int flags);

   void sdallocx(void *ptr, size_t size, int flags);

   size_t nallocx(size_t size, int flags);

   int mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,
       size_t newlen);

   int mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp);

   int mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
       size_t *oldlenp, void *newp, size_t newlen);

   void malloc_stats_print(void (*write_cb)(void *, const char *),
       void *cbopaque, const char *opts);

   size_t malloc_usable_size(const void *ptr);

   void (*malloc_message)(void *cbopaque, const char *s);

   const char *malloc_conf;

DESCRIPTION
   Standard API
   The malloc() function allocates size bytes of uninitialized memory. The
   allocated space is suitably aligned (after possible pointer coercion) for
   storage of any type of object.

   The calloc() function allocates space for number objects, each size bytes
   in length. The result is identical to calling malloc() with an argument
   of number * size, with the exception that the allocated memory is
   explicitly initialized to zero bytes.

   The posix_memalign() function allocates size bytes of memory such that
   the allocation's base address is a multiple of alignment, and returns the
   allocation in the value pointed to by ptr. The requested alignment must
   be a power of 2 at least as large as sizeof(void *).

   The aligned_alloc() function allocates size bytes of memory such that the
   allocation's base address is a multiple of alignment. The requested
   alignment must be a power of 2. Behavior is undefined if size is not an
   integral multiple of alignment.

   The realloc() function changes the size of the previously allocated
   memory referenced by ptr to size bytes. The contents of the memory are
   unchanged up to the lesser of the new and old sizes. If the new size is
   larger, the contents of the newly allocated portion of the memory are
   undefined. Upon success, the memory referenced by ptr is freed and a
   pointer to the newly allocated memory is returned. Note that realloc()
   may move the memory allocation, resulting in a different return value
   than ptr. If ptr is NULL, the realloc() function behaves identically to
   malloc() for the specified size.

   The free() function causes the allocated memory referenced by ptr to be
   made available for future allocations. If ptr is NULL, no action occurs.

   Non-standard API
   The mallocx(), rallocx(), xallocx(), sallocx(), dallocx(), sdallocx(),
   and nallocx() functions all have a flags argument that can be used to
   specify options. The functions only check the options that are
   contextually relevant. Use bitwise or (|) operations to specify one or
   more of the following:

   MALLOCX_LG_ALIGN(la)
       Align the memory allocation to start at an address that is a
       multiple of (1 << la). This macro does not validate that la is
       within the valid range.

   MALLOCX_ALIGN(a)
       Align the memory allocation to start at an address that is a
       multiple of a, where a is a power of two. This macro does not
       validate that a is a power of 2.

   MALLOCX_ZERO
       Initialize newly allocated memory to contain zero bytes. In the
       growing reallocation case, the real size prior to reallocation
       defines the boundary between untouched bytes and those that are
       initialized to contain zero bytes. If this macro is absent, newly
       allocated memory is uninitialized.

   MALLOCX_TCACHE(tc)
       Use the thread-specific cache (tcache) specified by the identifier
       tc, which must have been acquired via the tcache.create mallctl.
       This macro does not validate that tc specifies a valid identifier.

   MALLOCX_TCACHE_NONE
       Do not use a thread-specific cache (tcache). Unless
       MALLOCX_TCACHE(tc) or MALLOCX_TCACHE_NONE is specified, an
       automatically managed tcache will be used under many circumstances.
       This macro cannot be used in the same flags argument as
       MALLOCX_TCACHE(tc).

   MALLOCX_ARENA(a)
       Use the arena specified by the index a. This macro has no effect for
       regions that were allocated via an arena other than the one
       specified. This macro does not validate that a specifies an arena
       index in the valid range.

   The mallocx() function allocates at least size bytes of memory, and
   returns a pointer to the base address of the allocation. Behavior is
   undefined if size is 0.

   The rallocx() function resizes the allocation at ptr to be at least size
   bytes, and returns a pointer to the base address of the resulting
   allocation, which may or may not have moved from its original location.
   Behavior is undefined if size is 0.

   The xallocx() function resizes the allocation at ptr in place to be at
   least size bytes, and returns the real size of the allocation. If extra
   is non-zero, an attempt is made to resize the allocation to be at least
   (size + extra) bytes, though inability to allocate the extra byte(s)
   will not by itself result in failure to resize. Behavior is undefined if
   size is 0, or if (size + extra > SIZE_T_MAX).

   The sallocx() function returns the real size of the allocation at ptr.

   The dallocx() function causes the memory referenced by ptr to be made
   available for future allocations.

   The sdallocx() function is an extension of dallocx() with a size
   parameter to allow the caller to pass in the allocation size as an
   optimization. The minimum valid input size is the original requested
   size of the allocation, and the maximum valid input size is the
   corresponding value returned by nallocx() or sallocx().

   The nallocx() function allocates no memory, but it performs the same
   size computation as the mallocx() function, and returns the real size of
   the allocation that would result from the equivalent mallocx() function
   call, or 0 if the inputs exceed the maximum supported size class and/or
   alignment. Behavior is undefined if size is 0.

   The mallctl() function provides a general interface for introspecting
   the memory allocator, as well as setting modifiable parameters and
   triggering actions. The period-separated name argument specifies a
   location in a tree-structured namespace; see the MALLCTL NAMESPACE
   section for documentation on the tree contents. To read a value, pass a
   pointer via oldp to adequate space to contain the value, and a pointer
   to its length via oldlenp; otherwise pass NULL and NULL. Similarly, to
   write a value, pass a pointer to the value via newp, and its length via
   newlen; otherwise pass NULL and 0.

   The mallctlnametomib() function provides a way to avoid repeated name
   lookups for applications that repeatedly query the same portion of the
   namespace, by translating a name to a “Management Information Base”
   (MIB) that can be passed repeatedly to mallctlbymib(). Upon successful
   return from mallctlnametomib(), mibp contains an array of *miblenp
   integers, where *miblenp is the lesser of the number of components in
   name and the input value of *miblenp. Thus it is possible to pass a
   *miblenp that is smaller than the number of period-separated name
   components, which results in a partial MIB that can be used as the basis
   for constructing a complete MIB. For name components that are integers
   (e.g. the 2 in arenas.bin.2.size), the corresponding MIB component will
   always be that integer. Therefore, it is legitimate to construct code
   like the following:

       unsigned nbins, i;
       size_t mib[4];
       size_t len, miblen;

       len = sizeof(nbins);
       mallctl("arenas.nbins", &nbins, &len, NULL, 0);

       miblen = 4;
       mallctlnametomib("arenas.bin.0.size", mib, &miblen);
       for (i = 0; i < nbins; i++) {
               size_t bin_size;

               mib[2] = i;
               len = sizeof(bin_size);
               mallctlbymib(mib, miblen, (void *)&bin_size, &len, NULL, 0);
               /* Do something with bin_size... */
       }

   The malloc_stats_print() function writes summary statistics via the
   write_cb callback function pointer and cbopaque data passed to write_cb,
   or malloc_message() if write_cb is NULL. The statistics are presented in
   human-readable form unless “J” is specified as a character within the
   opts string, in which case the statistics are presented in JSON
   format[2]. This function can be called repeatedly. General information
   that never changes during execution can be omitted by specifying “g” as
   a character within the opts string. Note that malloc_stats_print() uses
   the mallctl*() functions internally, so inconsistent statistics can be
   reported if multiple threads use these functions simultaneously. If
   --enable-stats is specified during configuration, “m”, “d”, and “a” can
   be specified to omit merged arena, destroyed merged arena, and per arena
   statistics, respectively; “b” and “l” can be specified to omit per size
   class statistics for bins and large objects, respectively; “x” can be
   specified to omit all mutex statistics; “e” can be used to omit extent
   statistics. Unrecognized characters are silently ignored. Note that
   thread caching may prevent some statistics from being completely up to
   date, since extra locking would be required to merge counters that track
   thread cache operations.

   The malloc_usable_size() function returns the usable size of the
   allocation pointed to by ptr. The return value may be larger than the
   size that was requested during allocation. The malloc_usable_size()
   function is not a mechanism for in-place realloc(); rather it is
   provided solely as a tool for introspection purposes. Any discrepancy
   between the requested allocation size and the size reported by
   malloc_usable_size() should not be depended on, since such behavior is
   entirely implementation-dependent.

TUNING
   Once, when the first call is made to one of the memory allocation
   routines, the allocator initializes its internals based in part on
   various options that can be specified at compile- or run-time.

   The string specified via --with-malloc-conf, the string pointed to by
   the global variable malloc_conf, the “name” of the file referenced by
   the symbolic link named /etc/malloc.conf, and the value of the
   environment variable MALLOC_CONF, will be interpreted, in that order,
   from left to right as options. Note that malloc_conf may be read before
   main() is entered, so the declaration of malloc_conf should specify an
   initializer that contains the final value to be read by jemalloc.
   --with-malloc-conf and malloc_conf are compile-time mechanisms, whereas
   /etc/malloc.conf and MALLOC_CONF can be safely set any time prior to
   program invocation.

   An options string is a comma-separated list of option:value pairs. There
   is one key corresponding to each opt.* mallctl (see the MALLCTL
   NAMESPACE section for options documentation). For example,
   abort:true,narenas:1 sets the opt.abort and opt.narenas options. Some
   options have boolean values (true/false), others have integer values
   (base 8, 10, or 16, depending on prefix), and yet others have raw string
   values.

IMPLEMENTATION NOTES
   Traditionally, allocators have used sbrk(2) to obtain memory, which is
   suboptimal for several reasons, including race conditions, increased
   fragmentation, and artificial limitations on maximum usable memory. If
   sbrk(2) is supported by the operating system, this allocator uses both
   mmap(2) and sbrk(2), in that order of preference; otherwise only mmap(2)
   is used.

   This allocator uses multiple arenas in order to reduce lock contention
   for threaded programs on multi-processor systems. This works well with
   regard to threading scalability, but incurs some costs. There is a small
   fixed per-arena overhead, and additionally, arenas manage memory
   completely independently of each other, which means a small fixed
   increase in overall memory fragmentation. These overheads are not
   generally an issue, given the number of arenas normally used. Note that
   using substantially more arenas than the default is not likely to
   improve performance, mainly due to reduced cache performance. However,
   it may make sense to reduce the number of arenas if an application does
   not make much use of the allocation functions.

   In addition to multiple arenas, this allocator supports thread-specific
   caching, in order to make it possible to completely avoid
   synchronization for most allocation requests. Such caching allows very
   fast allocation in the common case, but it increases memory usage and
   fragmentation, since a bounded number of objects can remain allocated in
   each thread cache.

   Memory is conceptually broken into extents. Extents are always aligned
   to multiples of the page size. This alignment makes it possible to find
   metadata for user objects quickly. User objects are broken into two
   categories according to size: small and large. Contiguous small objects
   comprise a slab, which resides within a single extent, whereas large
   objects each have their own extents backing them.

   Small objects are managed in groups by slabs. Each slab maintains a
   bitmap to track which regions are in use. Allocation requests that are
   no more than half the quantum (8 or 16, depending on architecture) are
   rounded up to the nearest power of two that is at least sizeof(double).
   All other object size classes are multiples of the quantum, spaced such
   that there are four size classes for each doubling in size, which limits
   internal fragmentation to approximately 20% for all but the smallest
   size classes. Small size classes are smaller than four times the page
   size, and large size classes extend from four times the page size up to
   the largest size class that does not exceed PTRDIFF_MAX.

   Allocations are packed tightly together, which can be an issue for
   multi-threaded applications. If you need to assure that allocations do
   not suffer from cacheline sharing, round your allocation requests up to
   the nearest multiple of the cacheline size, or specify cacheline
   alignment when allocating.

   The realloc(), rallocx(), and xallocx() functions may resize allocations
   without moving them under limited circumstances. Unlike the *allocx()
   API, the standard API does not officially round up the usable size of an
   allocation to the nearest size class, so technically it is necessary to
   call realloc() to grow e.g. a 9-byte allocation to 16 bytes, or shrink a
   16-byte allocation to 9 bytes. Growth and shrinkage trivially succeed in
   place as long as the pre-size and post-size both round up to the same
   size class. No other API guarantees are made regarding in-place
   resizing, but the current implementation also tries to resize large
   allocations in place, as long as the pre-size and post-size are both
   large. For shrinkage to succeed, the extent allocator must support
   splitting (see arena.<i>.extent_hooks). Growth only succeeds if the
   trailing memory is currently available, and the extent allocator
   supports merging.

   Assuming 4 KiB pages and a 16-byte quantum on a 64-bit system, the size
   classes in each category are as shown in Table 1.

   Table 1. Size classes
   ┌─────────┬─────────┬─────────────────────┐
   │Category │ Spacing │ Size                │
   ├─────────┼─────────┼─────────────────────┤
   │         │   lg    │ [8]                 │
   │         ├─────────┼─────────────────────┤
   │         │   16    │ [16, 32, 48, 64,    │
   │         │         │ 80, 96, 112, 128]   │
   │         ├─────────┼─────────────────────┤
   │         │   32    │ [160, 192, 224,     │
   │         │         │ 256]                │
   │         ├─────────┼─────────────────────┤
   │         │   64    │ [320, 384, 448,     │
   │         │         │ 512]                │
   │         ├─────────┼─────────────────────┤
   │         │  128    │ [640, 768, 896,     │
   │Small    │         │ 1024]               │
   │         ├─────────┼─────────────────────┤
   │         │  256    │ [1280, 1536, 1792,  │
   │         │         │ 2048]               │
   │         ├─────────┼─────────────────────┤
   │         │  512    │ [2560, 3072, 3584,  │
   │         │         │ 4096]               │
   │         ├─────────┼─────────────────────┤
   │         │  1 KiB  │ [5 KiB, 6 KiB, 7    │
   │         │         │ KiB, 8 KiB]         │
   │         ├─────────┼─────────────────────┤
   │         │  2 KiB  │ [10 KiB, 12 KiB, 14 │
   │         │         │ KiB]                │
   ├─────────┼─────────┼─────────────────────┤
   │         │  2 KiB  │ [16 KiB]            │
   │         ├─────────┼─────────────────────┤
   │         │  4 KiB  │ [20 KiB, 24 KiB, 28 │
   │         │         │ KiB, 32 KiB]        │
   │         ├─────────┼─────────────────────┤
   │         │  8 KiB  │ [40 KiB, 48 KiB, 56 │
   │         │         │ KiB, 64 KiB]        │
   │         ├─────────┼─────────────────────┤
   │         │ 16 KiB  │ [80 KiB, 96 KiB,    │
   │         │         │ 112 KiB, 128 KiB]   │
   │         ├─────────┼─────────────────────┤
   │         │ 32 KiB  │ [160 KiB, 192 KiB,  │
   │         │         │ 224 KiB, 256 KiB]   │
   │         ├─────────┼─────────────────────┤
   │         │ 64 KiB  │ [320 KiB, 384 KiB,  │
   │         │         │ 448 KiB, 512 KiB]   │
   │         ├─────────┼─────────────────────┤
   │         │ 128 KiB │ [640 KiB, 768 KiB,  │
   │         │         │ 896 KiB, 1 MiB]     │
   │         ├─────────┼─────────────────────┤
   │         │ 256 KiB │ [1280 KiB, 1536     │
   │         │         │ KiB, 1792 KiB, 2    │
   │Large    │         │ MiB]                │
   │         ├─────────┼─────────────────────┤
   │         │ 512 KiB │ [2560 KiB, 3 MiB,   │
   │         │         │ 3584 KiB, 4 MiB]    │
   │         ├─────────┼─────────────────────┤
   │         │  1 MiB  │ [5 MiB, 6 MiB, 7    │
   │         │         │ MiB, 8 MiB]         │
   │         ├─────────┼─────────────────────┤
   │         │  2 MiB  │ [10 MiB, 12 MiB, 14 │
   │         │         │ MiB, 16 MiB]        │
   │         ├─────────┼─────────────────────┤
   │         │  4 MiB  │ [20 MiB, 24 MiB, 28 │
   │         │         │ MiB, 32 MiB]        │
   │         ├─────────┼─────────────────────┤
   │         │  8 MiB  │ [40 MiB, 48 MiB, 56 │
   │         │         │ MiB, 64 MiB]        │
   │         ├─────────┼─────────────────────┤
   │         │   ...   │ ...                 │
   │         ├─────────┼─────────────────────┤
   │         │ 512 PiB │ [2560 PiB, 3 EiB,   │
   │         │         │ 3584 PiB, 4 EiB]    │
   │         ├─────────┼─────────────────────┤
   │         │  1 EiB  │ [5 EiB, 6 EiB, 7    │
   │         │         │ EiB]                │
   └─────────┴─────────┴─────────────────────┘

MALLCTL NAMESPACE
   The following names are defined in the namespace accessible via the
   mallctl*() functions. Value types are specified in parentheses, their
   readable/writable statuses are encoded as rw, r-, -w, or --, and
   required build configuration flags follow, if any. A name element
   encoded as <i> or <j> indicates an integer component, where the integer
   varies from 0 to some upper value that must be determined via
   introspection. In the case of stats.arenas.<i>.* and
   arena.<i>.{initialized,purge,decay,dss}, <i> equal to
   MALLCTL_ARENAS_ALL can be used to operate on all arenas or access the
   summation of statistics from all arenas; similarly <i> equal to
   MALLCTL_ARENAS_DESTROYED can be used to access the summation of
   statistics from all destroyed arenas. These constants can be utilized
   either via mallctlnametomib() followed by mallctlbymib(), or via code
   such as the following:

       #define STRINGIFY_HELPER(x) #x
       #define STRINGIFY(x) STRINGIFY_HELPER(x)

       mallctl("arena." STRINGIFY(MALLCTL_ARENAS_ALL) ".decay",
           NULL, NULL, NULL, 0);

   Take special note of the epoch mallctl, which controls refreshing of
   cached dynamic statistics.

   version (const char *) r-
       Return the jemalloc version string.

   epoch (uint64_t) rw
       If a value is passed in, refresh the data from which the mallctl*()
       functions report values, and increment the epoch. Return the current
       epoch. This is useful for detecting whether another thread caused a
       refresh.

   background_thread (bool) rw
       Enable/disable internal background worker threads. When set to true,
       background threads are created on demand (the number of background
       threads will be no more than the number of CPUs or active arenas).
       Threads run periodically, and handle purging asynchronously. When
       switching off, background threads are terminated synchronously. Note
       that after fork(2), the state in the child process will be disabled
       regardless of the state in the parent process. See
       stats.background_thread for related stats.  opt.background_thread
       can be used to set the default option. This option is only available
       on selected pthread-based platforms.

   max_background_threads (size_t) rw
       Maximum number of background worker threads that will be created.
       This value is capped at opt.max_background_threads at startup.

   config.cache_oblivious (bool) r-
       --enable-cache-oblivious was specified during build configuration.

   config.debug (bool) r-
       --enable-debug was specified during build configuration.

   config.fill (bool) r-
       --enable-fill was specified during build configuration.

   config.lazy_lock (bool) r-
       --enable-lazy-lock was specified during build configuration.

   config.malloc_conf (const char *) r-
       Embedded configure-time-specified run-time options string, empty
       unless --with-malloc-conf was specified during build configuration.

   config.prof (bool) r-
       --enable-prof was specified during build configuration.

   config.prof_libgcc (bool) r-
       --disable-prof-libgcc was not specified during build configuration.

   config.prof_libunwind (bool) r-
       --enable-prof-libunwind was specified during build configuration.

   config.stats (bool) r-
       --enable-stats was specified during build configuration.

   config.utrace (bool) r-
       --enable-utrace was specified during build configuration.

   config.xmalloc (bool) r-
       --enable-xmalloc was specified during build configuration.

   opt.abort (bool) r-
       Abort-on-warning enabled/disabled. If true, most warnings are fatal.
       Note that runtime option warnings are not included (see
       opt.abort_conf for that). The process will call abort(3) in these
       cases. This option is disabled by default unless --enable-debug is
       specified during configuration, in which case it is enabled by
       default.

   opt.confirm_conf (bool) r-
       Confirm-runtime-options-when-program-starts enabled/disabled. If
       true, the string specified via --with-malloc-conf, the string
       pointed to by the global variable malloc_conf, the “name” of the
       file referenced by the symbolic link named /etc/malloc.conf, and the
       value of the environment variable MALLOC_CONF, will be printed in
       order. Then, each option being set will be individually printed.
       This option is disabled by default.

   opt.abort_conf (bool) r-
       Abort-on-invalid-configuration enabled/disabled. If true, invalid
       runtime options are fatal. The process will call abort(3) in these
       cases. This option is disabled by default unless --enable-debug is
       specified during configuration, in which case it is enabled by
       default.

   opt.metadata_thp (const char *) r-
       Controls whether to allow jemalloc to use transparent huge pages
       (THP) for internal metadata (see stats.metadata). “always” allows
       such usage. “auto” uses no THP initially, but may begin to do so
       when metadata usage reaches a certain level. The default is
       “disabled”.

   opt.retain (bool) r-
       If true, retain unused virtual memory for later reuse rather than
       discarding it by calling munmap(2) or equivalent (see stats.retained
       for related details). It also makes jemalloc use mmap(2) or
       equivalent in a more greedy way, mapping larger chunks in one go.
       This option is disabled by default unless discarding virtual memory
       is known to trigger platform-specific performance problems, namely
       1) for [64-bit] Linux, which has a quirk in its virtual memory
       allocation algorithm that causes semi-permanent VM map holes under
       normal jemalloc operation; and 2) for [64-bit] Windows, which
       disallows split / merged regions with MEM_RELEASE. Although the same
       issues may present on 32-bit platforms as well, retaining virtual
       memory for 32-bit Linux and Windows is disabled by default due to
       the practical possibility of address space exhaustion.

   opt.dss (const char *) r-
       dss (sbrk(2)) allocation precedence as related to mmap(2)
       allocation. The following settings are supported if sbrk(2) is
       supported by the operating system: “disabled”, “primary”, and
       “secondary”; otherwise only “disabled” is supported. The default is
       “secondary” if sbrk(2) is supported by the operating system;
       “disabled” otherwise.

   opt.narenas (unsigned) r-
       Maximum number of arenas to use for automatic multiplexing of
       threads and arenas. The default is four times the number of CPUs, or
       one if there is a single CPU.

   opt.oversize_threshold (size_t) r-
       The threshold in bytes above which requests are considered oversize.
       Allocation requests with greater sizes are fulfilled from a
       dedicated arena (automatically managed, however not within narenas),
       in order to reduce fragmentation by not mixing huge allocations with
       small ones. In addition, the decay API guarantees on the extents
       greater than the specified threshold may be overridden. Note that
       requests with arena index specified via MALLOCX_ARENA, or threads
       associated with explicit arenas will not be considered. The default
       threshold is 8 MiB. Values not within large size classes disable
       this feature.

   opt.percpu_arena (const char *) r-
       Per-CPU arena mode. Use the “percpu” setting to enable this feature,
       which uses the number of CPUs to determine the number of arenas, and
       binds threads to arenas dynamically based on the CPU the thread is
       currently running on. The “phycpu” setting uses one arena per
       physical CPU, which means the two hyper threads on the same CPU
       share one arena. Note that no runtime checking regarding the
       availability of hyper threading is done at the moment. When set to
       “disabled”, narenas and thread-to-arena association will not be
       impacted by this option. The default is “disabled”.

   opt.background_thread (bool) r-
       Internal background worker threads enabled/disabled. Because of
       potential circular dependencies, enabling background threads using
       this option may cause a crash or deadlock during initialization. For
       a reliable way to use this feature, see background_thread for
       dynamic control options and details. This option is disabled by
       default.

   opt.max_background_threads (size_t) r-
       Maximum number of background threads that will be created if
       background_thread is set. Defaults to the number of CPUs.

   opt.dirty_decay_ms (ssize_t) r-
       Approximate time in milliseconds from the creation of a set of
       unused dirty pages until an equivalent set of unused dirty pages is
       purged (i.e. converted to muzzy via e.g.  madvise(...MADV_FREE) if
       supported by the operating system, or converted to clean otherwise)
       and/or reused. Dirty pages are defined as previously having been
       potentially written to by the application, and therefore consuming
       physical memory, yet having no current use. The pages are
       incrementally purged according to a sigmoidal decay curve that
       starts and ends with zero purge rate. A decay time of 0 causes all
       unused dirty pages to be purged immediately upon creation. A decay
       time of -1 disables purging. The default decay time is 10 seconds.
       See arenas.dirty_decay_ms and arena.<i>.dirty_decay_ms for related
       dynamic control options. See opt.muzzy_decay_ms for a description of
       muzzy pages. Note that when the oversize_threshold feature is
       enabled, the arenas reserved for oversize requests may have their
       own default decay settings.

   opt.muzzy_decay_ms (ssize_t) r-
       Approximate time in milliseconds from the creation of a set of
       unused muzzy pages until an equivalent set of unused muzzy pages is
       purged (i.e. converted to clean) and/or reused. Muzzy pages are
       defined as previously having been unused dirty pages that were
       subsequently purged in a manner that left them subject to the
       reclamation whims of the operating system (e.g.
       madvise(...MADV_FREE)), and therefore in an indeterminate state. The
       pages are incrementally purged according to a sigmoidal decay curve
       that starts and ends with zero purge rate. A decay time of 0 causes
       all unused muzzy pages to be purged immediately upon creation. A
       decay time of -1 disables purging. The default decay time is 10
       seconds. See arenas.muzzy_decay_ms and arena.<i>.muzzy_decay_ms for
       related dynamic control options.

   opt.lg_extent_max_active_fit (size_t) r-
       When reusing dirty extents, this determines the (log base 2 of the)
       maximum ratio between the size of the active extent selected (to
       split off from) and the size of the requested allocation. This
       prevents the splitting of large active extents for smaller
       allocations, which can reduce fragmentation over the long run
       (especially for non-active extents). A lower value may reduce
       fragmentation, at the cost of extra active extents. The default
       value is 6, which gives a maximum ratio of 64 (2^6).

   opt.stats_print (bool) r-
       Enable/disable statistics printing at exit. If enabled, the
       malloc_stats_print() function is called at program exit via an
       atexit(3) function.  opt.stats_print_opts can be combined to specify
       output options. If --enable-stats is specified during configuration,
       this has the potential to cause deadlock for a multi-threaded
       process that exits while one or more threads are executing in the
       memory allocation functions. Furthermore, atexit() may allocate
       memory during application initialization and then deadlock
       internally when jemalloc in turn calls atexit(), so this option is
       not universally usable (though the application can register its own
       atexit() function with equivalent functionality). Therefore, this
       option should only be used with care; it is primarily intended as a
       performance tuning aid during application development. This option
       is disabled by default.

   opt.stats_print_opts (const char *) r-
       Options (the opts string) to pass to malloc_stats_print() at exit
       (enabled through opt.stats_print). See available options in
       malloc_stats_print(). Has no effect unless opt.stats_print is
       enabled. The default is “”.

   opt.junk (const char *) r- [--enable-fill]
       Junk filling. If set to “alloc”, each byte of uninitialized
       allocated memory will be initialized to 0xa5. If set to “free”, all
       deallocated memory will be initialized to 0x5a. If set to “true”,
       both allocated and deallocated memory will be initialized, and if
       set to “false”, junk filling is disabled entirely. This is intended
       for debugging and will impact performance negatively. This option is
       “false” by default unless --enable-debug is specified during
       configuration, in which case it is “true” by default.

   opt.zero (bool) r- [--enable-fill]
       Zero filling enabled/disabled. If enabled, each byte of
       uninitialized allocated memory will be initialized to 0. Note that
       this initialization only happens once for each byte, so realloc()
       and rallocx() calls do not zero memory that was previously
       allocated. This is intended for debugging and will impact
       performance negatively. This option is disabled by default.

675 opt.utrace (bool) r- [--enable-utrace]
676 Allocation tracing based on utrace(2) enabled/disabled. This option
677 is disabled by default.
678
679 opt.xmalloc (bool) r- [--enable-xmalloc]
680 Abort-on-out-of-memory enabled/disabled. If enabled, rather than
681 returning failure for any allocation function, display a diagnostic
682 message on STDERR_FILENO and cause the program to drop core (using
683 abort(3)). If an application is designed to depend on this
684 behavior, set the option at compile time by including the following
685 in the source code:
686
687 malloc_conf = "xmalloc:true";
688
689 This option is disabled by default.
690
691 opt.tcache (bool) r-
692 Thread-specific caching (tcache) enabled/disabled. When there are
693 multiple threads, each thread uses a tcache for objects up to a
694 certain size. Thread-specific caching allows many allocations to be
695 satisfied without performing any thread synchronization, at the
696 cost of increased memory use. See the opt.lg_tcache_max option for
697 related tuning information. This option is enabled by default.
698
699 opt.lg_tcache_max (size_t) r-
700 Maximum size class (log base 2) to cache in the thread-specific
701 cache (tcache). At a minimum, all small size classes are cached,
702 and at a maximum all large size classes are cached. The default
703 maximum is 32 KiB (2^15).
704
705 opt.thp (const char *) r-
706 Transparent hugepage (THP) mode. Settings "always", "never" and
707 "default" are available if THP is supported by the operating
708 system. The "always" setting enables transparent hugepage for all
709 user memory mappings with MADV_HUGEPAGE; "never" ensures no
710 transparent hugepage with MADV_NOHUGEPAGE; the default setting
711 "default" makes no changes. Note that this option does not affect
712 THP for jemalloc internal metadata (see opt.metadata_thp); in
713 addition, for arenas with customized extent_hooks, this option is
714 bypassed as it is implemented as part of the default extent hooks.
715
716 opt.prof (bool) r- [--enable-prof]
717 Memory profiling enabled/disabled. If enabled, profile memory
718 allocation activity. See the opt.prof_active option for on-the-fly
719 activation/deactivation. See the opt.lg_prof_sample option for
720 probabilistic sampling control. See the opt.prof_accum option for
721 control of cumulative sample reporting. See the
722 opt.lg_prof_interval option for information on interval-triggered
723 profile dumping, the opt.prof_gdump option for information on
724 high-water-triggered profile dumping, and the opt.prof_final option
725 for final profile dumping. Profile output is compatible with the
726 jeprof command, which is based on the pprof that is developed as
727 part of the gperftools package[3]. See HEAP PROFILE FORMAT for heap
728 profile format documentation.
729
730 opt.prof_prefix (const char *) r- [--enable-prof]
731 Filename prefix for profile dumps. If the prefix is set to the
732 empty string, no automatic dumps will occur; this is primarily
733 useful for disabling the automatic final heap dump (which also
734 disables leak reporting, if enabled). The default prefix is jeprof.
735
736 opt.prof_active (bool) r- [--enable-prof]
737 Profiling activated/deactivated. This is a secondary control
738 mechanism that makes it possible to start the application with
739 profiling enabled (see the opt.prof option) but inactive, then
740 toggle profiling at any time during program execution with the
741 prof.active mallctl. This option is enabled by default.
742
743 opt.prof_thread_active_init (bool) r- [--enable-prof]
744 Initial setting for thread.prof.active in newly created threads.
745 The initial setting for newly created threads can also be changed
746 during execution via the prof.thread_active_init mallctl. This
747 option is enabled by default.
748
749 opt.lg_prof_sample (size_t) r- [--enable-prof]
750 Average interval (log base 2) between allocation samples, as
751 measured in bytes of allocation activity. Increasing the sampling
752 interval decreases profile fidelity, but also decreases the
753 computational overhead. The default sample interval is 512 KiB
754 (2^19 B).
755
756 opt.prof_accum (bool) r- [--enable-prof]
757 Reporting of cumulative object/byte counts in profile dumps
758 enabled/disabled. If this option is enabled, every unique backtrace
759 must be stored for the duration of execution. Depending on the
760 application, this can impose a large memory overhead, and the
761 cumulative counts are not always of interest. This option is
762 disabled by default.
763
764 opt.lg_prof_interval (ssize_t) r- [--enable-prof]
765 Average interval (log base 2) between memory profile dumps, as
766 measured in bytes of allocation activity. The actual interval
767 between dumps may be sporadic because decentralized allocation
768 counters are used to avoid synchronization bottlenecks. Profiles
769 are dumped to files named according to the pattern
770 <prefix>.<pid>.<seq>.i<iseq>.heap, where <prefix> is controlled by
771 the opt.prof_prefix option. By default, interval-triggered profile
772 dumping is disabled (encoded as -1).
773
774 opt.prof_gdump (bool) r- [--enable-prof]
775 Set the initial state of prof.gdump, which when enabled triggers a
776 memory profile dump every time the total virtual memory exceeds the
777 previous maximum. This option is disabled by default.
778
779 opt.prof_final (bool) r- [--enable-prof]
780 Use an atexit(3) function to dump final memory usage to a file
781 named according to the pattern <prefix>.<pid>.<seq>.f.heap, where
782 <prefix> is controlled by the opt.prof_prefix option. Note that
783 atexit() may allocate memory during application initialization and
784 then deadlock internally when jemalloc in turn calls atexit(), so
785 this option is not universally usable (though the application can
786 register its own atexit() function with equivalent functionality).
787 This option is disabled by default.
788
789 opt.prof_leak (bool) r- [--enable-prof]
790 Leak reporting enabled/disabled. If enabled, use an atexit(3)
791 function to report memory leaks detected by allocation sampling.
792 See the opt.prof option for information on analyzing heap profile
793 output. This option is disabled by default.
794
795 thread.arena (unsigned) rw
796 Get or set the arena associated with the calling thread. If the
797 specified arena was not initialized beforehand (see the
798 arena.i.initialized mallctl), it will be automatically initialized
799 as a side effect of calling this interface.
800
801 thread.allocated (uint64_t) r- [--enable-stats]
802 Get the total number of bytes ever allocated by the calling thread.
803 This counter has the potential to wrap around; it is up to the
804 application to appropriately interpret the counter in such cases.
805
806 thread.allocatedp (uint64_t *) r- [--enable-stats]
807 Get a pointer to the value that is returned by the
808 thread.allocated mallctl. This is useful for avoiding the overhead
809 of repeated mallctl*() calls.
810
811 thread.deallocated (uint64_t) r- [--enable-stats]
812 Get the total number of bytes ever deallocated by the calling
813 thread. This counter has the potential to wrap around; it is up to
814 the application to appropriately interpret the counter in such
815 cases.
816
817 thread.deallocatedp (uint64_t *) r- [--enable-stats]
818 Get a pointer to the value that is returned by the
819 thread.deallocated mallctl. This is useful for avoiding the
820 overhead of repeated mallctl*() calls.
821
822 thread.tcache.enabled (bool) rw
823 Enable/disable calling thread's tcache. The tcache is implicitly
824 flushed as a side effect of becoming disabled (see
825 thread.tcache.flush).
826
827 thread.tcache.flush (void) --
828 Flush calling thread's thread-specific cache (tcache). This
829 interface releases all cached objects and internal data structures
830 associated with the calling thread's tcache. Ordinarily, this
831 interface need not be called, since automatic periodic incremental
832 garbage collection occurs, and the thread cache is automatically
833 discarded when a thread exits. However, garbage collection is
834 triggered by allocation activity, so it is possible for a thread
835 that stops allocating/deallocating to retain its cache
836 indefinitely, in which case the developer may find manual flushing
837 useful.
838
839 thread.prof.name (const char *) r- or -w [--enable-prof]
840 Get/set the descriptive name associated with the calling thread in
841 memory profile dumps. An internal copy of the name string is
842 created, so the input string need not be maintained after this
843 interface completes execution. The output string of this interface
844 should be copied for non-ephemeral uses, because multiple
845 implementation details can cause asynchronous string deallocation.
846 Furthermore, each invocation of this interface can only read or
847 write; simultaneous read/write is not supported due to string
848 lifetime limitations. The name string must be nil-terminated and
849 comprised only of characters in the sets recognized by isgraph(3)
850 and isblank(3).
851
852 thread.prof.active (bool) rw [--enable-prof]
853 Control whether sampling is currently active for the calling
854 thread. This is an activation mechanism in addition to prof.active;
855 both must be active for the calling thread to sample. This flag is
856 enabled by default.
857
858 tcache.create (unsigned) r-
859 Create an explicit thread-specific cache (tcache) and return an
860 identifier that can be passed to the MALLOCX_TCACHE(tc) macro to
861 explicitly use the specified cache rather than the automatically
862 managed one that is used by default. Each explicit cache can be
863 used by only one thread at a time; the application must assure that
864 this constraint holds.
865
866 tcache.flush (unsigned) -w
867 Flush the specified thread-specific cache (tcache). The same
868 considerations apply to this interface as to thread.tcache.flush,
869 except that the tcache will never be automatically discarded.
870
871 tcache.destroy (unsigned) -w
872 Flush the specified thread-specific cache (tcache) and make the
873 identifier available for use during a future tcache creation.
874
875 arena.<i>.initialized (bool) r-
876 Get whether the specified arena's statistics are initialized (i.e.
877 the arena was initialized prior to the current epoch). This
878 interface can also be nominally used to query whether the merged
879 statistics corresponding to MALLCTL_ARENAS_ALL are initialized
880 (always true).
881
882 arena.<i>.decay (void) --
883 Trigger decay-based purging of unused dirty/muzzy pages for arena
884 <i>, or for all arenas if <i> equals MALLCTL_ARENAS_ALL. The
885 proportion of unused dirty/muzzy pages to be purged depends on the
886 current time; see opt.dirty_decay_ms and opt.muzzy_decay_ms for
887 details.
888
889 arena.<i>.purge (void) --
890 Purge all unused dirty pages for arena <i>, or for all arenas if
891 <i> equals MALLCTL_ARENAS_ALL.
892
893 arena.<i>.reset (void) --
894 Discard all of the arena's extant allocations. This interface can
895 only be used with arenas explicitly created via arenas.create. None
896 of the arena's discarded/cached allocations may be accessed
897 afterward.
897 As part of this requirement, all thread caches which were used to
898 allocate/deallocate in conjunction with the arena must be flushed
899 beforehand.
900
901 arena.<i>.destroy (void) --
902 Destroy the arena. Discard all of the arena's extant allocations
903 using the same mechanism as for arena.<i>.reset (with all the same
904 constraints and side effects), merge the arena stats into those
905 accessible at arena index MALLCTL_ARENAS_DESTROYED, and then
906 completely discard all metadata associated with the arena. Future
907 calls to arenas.create may recycle the arena index. Destruction
908 will fail if any threads are currently associated with the arena as
909 a result of calls to thread.arena.
910
911 arena.<i>.dss (const char *) rw
912 Set the precedence of dss allocation as related to mmap allocation
913 for arena <i>, or for all arenas if <i> equals MALLCTL_ARENAS_ALL.
914 See opt.dss for supported settings.
915
916 arena.<i>.dirty_decay_ms (ssize_t) rw
917 Current per-arena approximate time in milliseconds from the
918 creation of a set of unused dirty pages until an equivalent set of
919 unused dirty pages is purged and/or reused. Each time this
920 interface is set, all currently unused dirty pages are considered
921 to have fully decayed, which causes immediate purging of all unused
922 dirty pages unless the decay time is set to -1 (i.e. purging
923 disabled). See opt.dirty_decay_ms for additional information.
924
925 arena.<i>.muzzy_decay_ms (ssize_t) rw
926 Current per-arena approximate time in milliseconds from the
927 creation of a set of unused muzzy pages until an equivalent set of
928 unused muzzy pages is purged and/or reused. Each time this
929 interface is set, all currently unused muzzy pages are considered
930 to have fully decayed, which causes immediate purging of all unused
931 muzzy pages unless the decay time is set to -1 (i.e. purging
932 disabled). See opt.muzzy_decay_ms for additional information.
933
934 arena.<i>.retain_grow_limit (size_t) rw
935 Maximum size to grow retained region (only relevant when opt.retain
936 is enabled). This controls the maximum increment to expand virtual
937 memory, or allocation through arena.<i>.extent_hooks. In
938 particular,
938 if customized extent hooks reserve physical memory (e.g. 1G huge
939 pages), this is useful to control the allocation hook's input size.
940 The default is no limit.
941
942 arena.<i>.extent_hooks (extent_hooks_t *) rw
943 Get or set the extent management hook functions for arena <i>. The
944 functions must be capable of operating on all extant extents
945 associated with arena <i>, usually by passing unknown extents to
946 the replaced functions. In practice, it is feasible to control
947 allocation for arenas explicitly created via arenas.create such
948 that all extents originate from an application-supplied extent
949 allocator (by specifying the custom extent hook functions during
950 arena creation). However, the API guarantees for the automatically
951 created arenas may be relaxed -- hooks set there may be called in a
952 "best effort" fashion; in addition there may be extents created
953 prior to the application having an opportunity to take over extent
954 allocation.
955
956 typedef struct extent_hooks_s extent_hooks_t;
957 struct extent_hooks_s {
958 extent_alloc_t *alloc;
959 extent_dalloc_t *dalloc;
960 extent_destroy_t *destroy;
961 extent_commit_t *commit;
962 extent_decommit_t *decommit;
963 extent_purge_t *purge_lazy;
964 extent_purge_t *purge_forced;
965 extent_split_t *split;
966 extent_merge_t *merge;
967 };
968
969 The extent_hooks_t structure comprises function pointers which are
970 described individually below. jemalloc uses these functions to
971 manage extent lifetime, which starts off with allocation of mapped
972 committed memory, in the simplest case followed by deallocation.
973 However, there are performance and platform reasons to retain
974 extents for later reuse. Cleanup attempts cascade from deallocation
975 to decommit to forced purging to lazy purging, which gives the
976 extent management functions opportunities to reject the most
977 permanent cleanup operations in favor of less permanent (and often
978 less costly) operations. All operations except allocation can be
979 universally opted out of by setting the hook pointers to NULL, or
980 selectively opted out of by returning failure. Note that once the
981 extent hook is set, the structure is accessed directly by the
982 associated arenas, so it must remain valid for the entire lifetime
983 of the arenas.
984
985 typedef void *(extent_alloc_t)(extent_hooks_t *extent_hooks,
986 void *new_addr, size_t size,
987 size_t alignment, bool *zero,
988 bool *commit, unsigned arena_ind);
989
990
991 An extent allocation function conforms to the extent_alloc_t type
992 and upon success returns a pointer to size bytes of mapped memory
993 on behalf of arena arena_ind such that the extent's base address is
994 a multiple of alignment, as well as setting *zero to indicate
995 whether the extent is zeroed and *commit to indicate whether the
996 extent is committed. Upon error the function returns NULL and
997 leaves *zero and *commit unmodified. The size parameter is always a
998 multiple of the page size. The alignment parameter is always a
999 power of two at least as large as the page size. Zeroing is
1000 mandatory if *zero is true upon function entry. Committing is
1001 mandatory if *commit is true upon function entry. If new_addr is
1002 not NULL, the returned pointer must be new_addr on success or NULL
1003 on error. Committed memory may be committed in absolute terms as on
1004 a system that does not overcommit, or in implicit terms as on a
1005 system that overcommits and satisfies physical memory needs on
1006 demand via soft page faults. Note that replacing the default extent
1007 allocation function makes the arena's arena.<i>.dss setting
1008 irrelevant.
1009
1010 typedef bool (extent_dalloc_t)(extent_hooks_t *extent_hooks,
1011 void *addr, size_t size,
1012 bool committed, unsigned arena_ind);
1013
1014
1015 An extent deallocation function conforms to the extent_dalloc_t
1016 type and deallocates an extent at given addr and size with
1017 committed/decommitted memory as indicated, on behalf of arena
1018 arena_ind, returning false upon success. If the function returns
1019 true, this indicates opt-out from deallocation; the virtual memory
1020 mapping associated with the extent remains mapped, in the same
1021 commit state, and available for future use, in which case it will
1022 be automatically retained for later reuse.
1023
1024 typedef void (extent_destroy_t)(extent_hooks_t *extent_hooks,
1025 void *addr, size_t size,
1026 bool committed,
1027 unsigned arena_ind);
1028
1029
1030 An extent destruction function conforms to the extent_destroy_t
1031 type and unconditionally destroys an extent at given addr and size
1032 with committed/decommitted memory as indicated, on behalf of arena
1033 arena_ind. This function may be called to destroy retained extents
1034 during arena destruction (see arena.<i>.destroy).
1035
1036 typedef bool (extent_commit_t)(extent_hooks_t *extent_hooks,
1037 void *addr, size_t size,
1038 size_t offset, size_t length,
1039 unsigned arena_ind);
1040
1041
1042 An extent commit function conforms to the extent_commit_t type and
1043 commits zeroed physical memory to back pages within an extent at
1044 given addr and size at offset bytes, extending for length on behalf
1045 of arena arena_ind, returning false upon success. Committed memory
1046 may be committed in absolute terms as on a system that does not
1047 overcommit, or in implicit terms as on a system that overcommits
1048 and satisfies physical memory needs on demand via soft page faults.
1049 If the function returns true, this indicates insufficient physical
1050 memory to satisfy the request.
1051
1052 typedef bool (extent_decommit_t)(extent_hooks_t *extent_hooks,
1053 void *addr, size_t size,
1054 size_t offset, size_t length,
1055 unsigned arena_ind);
1056
1057
1058 An extent decommit function conforms to the extent_decommit_t type
1059 and decommits any physical memory that is backing pages within an
1060 extent at given addr and size at offset bytes, extending for length
1061 on behalf of arena arena_ind, returning false upon success, in
1062 which case the pages will be committed via the extent commit
1063 function before being reused. If the function returns true, this
1064 indicates opt-out from decommit; the memory remains committed and
1065 available for future use, in which case it will be automatically
1066 retained for later reuse.
1067
1068 typedef bool (extent_purge_t)(extent_hooks_t *extent_hooks,
1069 void *addr, size_t size,
1070 size_t offset, size_t length,
1071 unsigned arena_ind);
1072
1073
1074 An extent purge function conforms to the extent_purge_t type and
1075 discards physical pages within the virtual memory mapping
1076 associated with an extent at given addr and size at offset bytes,
1077 extending for length on behalf of arena arena_ind. A lazy extent
1078 purge function (e.g. implemented via madvise(...MADV_FREE)) can
1079 delay purging indefinitely and leave the pages within the purged
1080 virtual memory range in an indeterminate state, whereas a forced
1081 extent purge function immediately purges, and the pages within the
1082 virtual memory range will be zero-filled the next time they are
1083 accessed. If the function returns true, this indicates failure to
1084 purge.
1085
1086 typedef bool (extent_split_t)(extent_hooks_t *extent_hooks,
1087 void *addr, size_t size,
1088 size_t size_a, size_t size_b,
1089 bool committed, unsigned arena_ind);
1090
1091
1092 An extent split function conforms to the extent_split_t type and
1093 optionally splits an extent at given addr and size into two
1094 adjacent extents, the first of size_a bytes, and the second of
1095 size_b bytes, operating on committed/decommitted memory as
1096 indicated, on behalf of arena arena_ind, returning false upon
1097 success. If the function returns true, this indicates that the
1098 extent remains unsplit and therefore should continue to be operated
1099 on as a whole.
1100
1101 typedef bool (extent_merge_t)(extent_hooks_t *extent_hooks,
1102 void *addr_a, size_t size_a,
1103 void *addr_b, size_t size_b,
1104 bool committed, unsigned arena_ind);
1105
1106
1107 An extent merge function conforms to the extent_merge_t type and
1108 optionally merges adjacent extents, at given addr_a and size_a with
1109 given addr_b and size_b into one contiguous extent, operating on
1110 committed/decommitted memory as indicated, on behalf of arena
1111 arena_ind, returning false upon success. If the function returns
1112 true, this indicates that the extents remain distinct mappings and
1113 therefore should continue to be operated on independently.
1114
1115 arenas.narenas (unsigned) r-
1116 Current limit on number of arenas.
1117
1118 arenas.dirty_decay_ms (ssize_t) rw
1119 Current default per-arena approximate time in milliseconds from the
1120 creation of a set of unused dirty pages until an equivalent set of
1121 unused dirty pages is purged and/or reused, used to initialize
1122 arena.<i>.dirty_decay_ms during arena creation. See
1123 opt.dirty_decay_ms for additional information.
1124
1125 arenas.muzzy_decay_ms (ssize_t) rw
1126 Current default per-arena approximate time in milliseconds from the
1127 creation of a set of unused muzzy pages until an equivalent set of
1128 unused muzzy pages is purged and/or reused, used to initialize
1129 arena.<i>.muzzy_decay_ms during arena creation. See
1130 opt.muzzy_decay_ms for additional information.
1131
1132 arenas.quantum (size_t) r-
1133 Quantum size.
1134
1135 arenas.page (size_t) r-
1136 Page size.
1137
1138 arenas.tcache_max (size_t) r-
1139 Maximum thread-cached size class.
1140
1141 arenas.nbins (unsigned) r-
1142 Number of bin size classes.
1143
1144 arenas.nhbins (unsigned) r-
1145 Total number of thread cache bin size classes.
1146
1147 arenas.bin.<i>.size (size_t) r-
1148 Maximum size supported by size class.
1149
1150 arenas.bin.<i>.nregs (uint32_t) r-
1151 Number of regions per slab.
1152
1153 arenas.bin.<i>.slab_size (size_t) r-
1154 Number of bytes per slab.
1155
1156 arenas.nlextents (unsigned) r-
1157 Total number of large size classes.
1158
1159 arenas.lextent.<i>.size (size_t) r-
1160 Maximum size supported by this large size class.
1161
1162 arenas.create (unsigned, extent_hooks_t *) rw
1163 Explicitly create a new arena outside the range of automatically
1164 managed arenas, with optionally specified extent hooks, and return
1165 the new arena index.
1166
1167 arenas.lookup (unsigned, void*) rw
1168 Index of the arena to which an allocation belongs.
1169
1170 prof.thread_active_init (bool) rw [--enable-prof]
1171 Control the initial setting for thread.prof.active in newly created
1172 threads. See the opt.prof_thread_active_init option for additional
1173 information.
1174
1175 prof.active (bool) rw [--enable-prof]
1176 Control whether sampling is currently active. See the
1177 opt.prof_active option for additional information, as well as the
1178 interrelated thread.prof.active mallctl.
1179
1180 prof.dump (const char *) -w [--enable-prof]
1181 Dump a memory profile to the specified file, or if NULL is
1182 specified, to a file according to the pattern
1183 <prefix>.<pid>.<seq>.m<mseq>.heap, where <prefix> is controlled by
1184 the opt.prof_prefix option.
1185
1186 prof.gdump (bool) rw [--enable-prof]
1187 When enabled, trigger a memory profile dump every time the total
1188 virtual memory exceeds the previous maximum. Profiles are dumped to
1189 files named according to the pattern
1190 <prefix>.<pid>.<seq>.u<useq>.heap, where <prefix> is controlled by
1191 the opt.prof_prefix option.
1192
1193 prof.reset (size_t) -w [--enable-prof]
1194 Reset all memory profile statistics, and optionally update the
1195 sample rate (see opt.lg_prof_sample and prof.lg_sample).
1196
1197 prof.lg_sample (size_t) r- [--enable-prof]
1198 Get the current sample rate (see opt.lg_prof_sample).
1199
1200 prof.interval (uint64_t) r- [--enable-prof]
1201 Average number of bytes allocated between interval-based profile
1202 dumps. See the opt.lg_prof_interval option for additional
1203 information.
1204
1205 stats.allocated (size_t) r- [--enable-stats]
1206 Total number of bytes allocated by the application.
1207
1208 stats.active (size_t) r- [--enable-stats]
1209 Total number of bytes in active pages allocated by the application.
1210 This is a multiple of the page size, and greater than or equal to
1211 stats.allocated. This does not include stats.arenas.<i>.pdirty,
1212 stats.arenas.<i>.pmuzzy, or pages entirely devoted to allocator
1213 metadata.
1214
1215 stats.metadata (size_t) r- [--enable-stats]
1216 Total number of bytes dedicated to metadata, which comprise base
1217 allocations used for bootstrap-sensitive allocator metadata
1218 structures (see stats.arenas.<i>.base) and internal allocations
1219 (see stats.arenas.<i>.internal). Transparent huge page (enabled
1220 with opt.metadata_thp) usage is not considered.
1221
1222 stats.metadata_thp (size_t) r- [--enable-stats]
1223 Number of transparent huge pages (THP) used for metadata. See
1224 stats.metadata and opt.metadata_thp for details.
1225
1226 stats.resident (size_t) r- [--enable-stats]
1227 Maximum number of bytes in physically resident data pages mapped by
1228 the allocator, comprising all pages dedicated to allocator
1229 metadata, pages backing active allocations, and unused dirty pages.
1230 This is a maximum rather than precise because pages may not
1231 actually be physically resident if they correspond to demand-zeroed
1232 virtual memory that has not yet been touched. This is a multiple of
1233 the page size, and is larger than stats.active.
1234
1235 stats.mapped (size_t) r- [--enable-stats]
1236 Total number of bytes in active extents mapped by the allocator.
1237 This is larger than stats.active. This does not include inactive
1238 extents, even those that contain unused dirty pages, which means
1239 that there is no strict ordering between this and stats.resident.
1240
1241 stats.retained (size_t) r- [--enable-stats]
1242 Total number of bytes in virtual memory mappings that were retained
1243 rather than being returned to the operating system via e.g.
1244 munmap(2) or similar. Retained virtual memory is typically
1245 untouched, decommitted, or purged, so it has no strongly associated
1246 physical memory (see extent hooks for details). Retained memory is
1247 excluded from mapped memory statistics, e.g. stats.mapped.
1248
1249 stats.background_thread.num_threads (size_t) r- [--enable-stats]
1250 Number of background threads currently running.
1251
1252 stats.background_thread.num_runs (uint64_t) r- [--enable-stats]
1253 Total number of runs from all background threads.
1254
1255 stats.background_thread.run_interval (uint64_t) r- [--enable-stats]
1256 Average run interval in nanoseconds of background threads.
1257
1258 stats.mutexes.ctl.{counter} (counter specific type) r-
1259 [--enable-stats]
1260 Statistics on ctl mutex (global scope; mallctl related). {counter}
1261 is one of the counters below:
1262
1263 num_ops (uint64_t): Total number of lock acquisition operations
1264 on this mutex.
1265
1266 num_spin_acq (uint64_t): Number of times the mutex was
1267 spin-acquired. When the mutex is currently locked and cannot be
1268 acquired immediately, a short period of spin-retry within
1269 jemalloc will be performed. Acquisition through spinning generally
1270 means the contention was lightweight and did not cause context
1271 switches.
1272
1273 num_wait (uint64_t): Number of times the mutex was
1274 wait-acquired, which means the mutex contention was not resolved
1275 by spin-retry, and a blocking operation was likely required to
1276 acquire the mutex. This event generally implies higher
1277 cost / longer delay, and should be investigated if it happens
1278 often.
1279
1280 max_wait_time (uint64_t): Maximum length of time in nanoseconds
1281 spent on a single wait-acquired lock operation. Note that to
1282 avoid profiling overhead on the common path, this does not
1283 consider spin-acquired cases.
1284
1285 total_wait_time (uint64_t): Cumulative time in nanoseconds
1286 spent on wait-acquired lock operations. Similarly,
1287 spin-acquired cases are not considered.
1288
1289 max_num_thds (uint32_t): Maximum number of threads waiting on
1290 this mutex simultaneously. Similarly, spin-acquired cases are
1291 not considered.
1292
1293 num_owner_switch (uint64_t): Number of times the current mutex
1294 owner is different from the previous one. This event does not
1295 generally imply an issue; rather it is an indicator of how
1296 often the protected data are accessed by different threads.
1297
1298 stats.mutexes.background_thread.{counter} (counter specific type) r-
1299 [--enable-stats]
1300 Statistics on background_thread mutex (global scope;
1301 background_thread related). {counter} is one of the counters in
1302 mutex profiling counters.
1303
1304 stats.mutexes.prof.{counter} (counter specific type) r-
1305 [--enable-stats]
1306 Statistics on prof mutex (global scope; profiling related).
1307 {counter} is one of the counters in mutex profiling counters.
1308
1309 stats.mutexes.reset (void) -- [--enable-stats]
1310 Reset all mutex profile statistics, including global mutexes, arena
1311 mutexes and bin mutexes.
1312
1313 stats.arenas.<i>.dss (const char *) r-
1314 dss (sbrk(2)) allocation precedence as related to mmap(2)
1315 allocation. See opt.dss for details.
1316
1317 stats.arenas.<i>.dirty_decay_ms (ssize_t) r-
1318 Approximate time in milliseconds from the creation of a set of
1319 unused dirty pages until an equivalent set of unused dirty pages is
1320 purged and/or reused. See opt.dirty_decay_ms for details.
1321
1322 stats.arenas.<i>.muzzy_decay_ms (ssize_t) r-
1323 Approximate time in milliseconds from the creation of a set of
1324 unused muzzy pages until an equivalent set of unused muzzy pages is
1325 purged and/or reused. See opt.muzzy_decay_ms for details.
1326
1327 stats.arenas.<i>.nthreads (unsigned) r-
1328 Number of threads currently assigned to arena.
1329
1330 stats.arenas.<i>.uptime (uint64_t) r-
1331 Time elapsed (in nanoseconds) since the arena was created. If <i>
1332 equals 0 or MALLCTL_ARENAS_ALL, this is the uptime since malloc
1333 initialization.
1334
1335 stats.arenas.<i>.pactive (size_t) r-
1336 Number of pages in active extents.
1337
1338 stats.arenas.<i>.pdirty (size_t) r-
1339 Number of pages within unused extents that are potentially dirty,
1340 and for which madvise() or similar has not been called. See
1341 opt.dirty_decay_ms for a description of dirty pages.
1342
1343 stats.arenas.<i>.pmuzzy (size_t) r-
1344 Number of pages within unused extents that are muzzy. See
1345 opt.muzzy_decay_ms for a description of muzzy pages.
1346
1347 stats.arenas.<i>.mapped (size_t) r- [--enable-stats]
1348 Number of mapped bytes.
1349
1350 stats.arenas.<i>.retained (size_t) r- [--enable-stats]
1351 Number of retained bytes. See stats.retained for details.
1352
1353 stats.arenas.<i>.extent_avail (size_t) r- [--enable-stats]
1354 Number of allocated (but unused) extent structs in this arena.
1355
1356 stats.arenas.<i>.base (size_t) r- [--enable-stats]
1357 Number of bytes dedicated to bootstrap-sensitive allocator metadata
1358 structures.
1359
1360 stats.arenas.<i>.internal (size_t) r- [--enable-stats]
1361 Number of bytes dedicated to internal allocations. Internal
1362 allocations differ from application-originated allocations in that
1363 they are for internal use, and that they are omitted from heap
1364 profiles.
1365
1366 stats.arenas.<i>.metadata_thp (size_t) r- [--enable-stats]
1367 Number of transparent huge pages (THP) used for metadata. See
1368 opt.metadata_thp for details.
1369
1370 stats.arenas.<i>.resident (size_t) r- [--enable-stats]
1371 Maximum number of bytes in physically resident data pages mapped by
1372 the arena, comprising all pages dedicated to allocator metadata,
1373 pages backing active allocations, and unused dirty pages. This is a
1374 maximum rather than precise because pages may not actually be
1375 physically resident if they correspond to demand-zeroed virtual
1376 memory that has not yet been touched. This is a multiple of the
1377 page size.
1378
1379 stats.arenas.<i>.dirty_npurge (uint64_t) r- [--enable-stats]
1380 Number of dirty page purge sweeps performed.
1381
1382 stats.arenas.<i>.dirty_nmadvise (uint64_t) r- [--enable-stats]
1383 Number of madvise() or similar calls made to purge dirty pages.
1384
1385 stats.arenas.<i>.dirty_purged (uint64_t) r- [--enable-stats]
1386 Number of dirty pages purged.
1387
1388 stats.arenas.<i>.muzzy_npurge (uint64_t) r- [--enable-stats]
1389 Number of muzzy page purge sweeps performed.
1390
1391 stats.arenas.<i>.muzzy_nmadvise (uint64_t) r- [--enable-stats]
1392 Number of madvise() or similar calls made to purge muzzy pages.
1393
1394 stats.arenas.<i>.muzzy_purged (uint64_t) r- [--enable-stats]
1395 Number of muzzy pages purged.
1396
1397 stats.arenas.<i>.small.allocated (size_t) r- [--enable-stats]
1398 Number of bytes currently allocated by small objects.
1399
1400 stats.arenas.<i>.small.nmalloc (uint64_t) r- [--enable-stats]
1401 Cumulative number of times a small allocation was requested from
1402 the arena's bins, whether to fill the relevant tcache if opt.tcache
1403 is enabled, or to directly satisfy an allocation request otherwise.
1404
1405 stats.arenas.<i>.small.ndalloc (uint64_t) r- [--enable-stats]
1406 Cumulative number of times a small allocation was returned to the
1407 arena's bins, whether to flush the relevant tcache if opt.tcache is
1408 enabled, or to directly deallocate an allocation otherwise.
1409
1410 stats.arenas.<i>.small.nrequests (uint64_t) r- [--enable-stats]
1411 Cumulative number of allocation requests satisfied by all bin size
1412 classes.
1413
1414 stats.arenas.<i>.small.nfills (uint64_t) r- [--enable-stats]
1415 Cumulative number of tcache fills by all small size classes.
1416
1417 stats.arenas.<i>.small.nflushes (uint64_t) r- [--enable-stats]
1418 Cumulative number of tcache flushes by all small size classes.
1419
1420 stats.arenas.<i>.large.allocated (size_t) r- [--enable-stats]
1421 Number of bytes currently allocated by large objects.
1422
1423 stats.arenas.<i>.large.nmalloc (uint64_t) r- [--enable-stats]
1424 Cumulative number of times a large extent was allocated from the
1425 arena, whether to fill the relevant tcache if opt.tcache is enabled
1426 and the size class is within the range being cached, or to directly
1427 satisfy an allocation request otherwise.
1428
1429 stats.arenas.<i>.large.ndalloc (uint64_t) r- [--enable-stats]
1430 Cumulative number of times a large extent was returned to the
1431 arena, whether to flush the relevant tcache if opt.tcache is
1432 enabled and the size class is within the range being cached, or to
1433 directly deallocate an allocation otherwise.
1434
1435 stats.arenas.<i>.large.nrequests (uint64_t) r- [--enable-stats]
1436 Cumulative number of allocation requests satisfied by all large
1437 size classes.
1438
1439 stats.arenas.<i>.large.nfills (uint64_t) r- [--enable-stats]
1440 Cumulative number of tcache fills by all large size classes.
1441
1442 stats.arenas.<i>.large.nflushes (uint64_t) r- [--enable-stats]
1443 Cumulative number of tcache flushes by all large size classes.
1444
1445 stats.arenas.<i>.bins.<j>.nmalloc (uint64_t) r- [--enable-stats]
1446 Cumulative number of times a bin region of the corresponding size
1447 class was allocated from the arena, whether to fill the relevant
1448 tcache if opt.tcache is enabled, or to directly satisfy an
1449 allocation request otherwise.
1450
1451 stats.arenas.<i>.bins.<j>.ndalloc (uint64_t) r- [--enable-stats]
1452 Cumulative number of times a bin region of the corresponding size
1453 class was returned to the arena, whether to flush the relevant
1454 tcache if opt.tcache is enabled, or to directly deallocate an
1455 allocation otherwise.
1456
1457 stats.arenas.<i>.bins.<j>.nrequests (uint64_t) r- [--enable-stats]
1458 Cumulative number of allocation requests satisfied by bin regions
1459 of the corresponding size class.
1460
1461 stats.arenas.<i>.bins.<j>.curregs (size_t) r- [--enable-stats]
1462 Current number of regions for this size class.
1463
1464 stats.arenas.<i>.bins.<j>.nfills (uint64_t) r-
1465 Cumulative number of tcache fills.
1466
1467 stats.arenas.<i>.bins.<j>.nflushes (uint64_t) r-
1468 Cumulative number of tcache flushes.
1469
1470 stats.arenas.<i>.bins.<j>.nslabs (uint64_t) r- [--enable-stats]
1471 Cumulative number of slabs created.
1472
1473 stats.arenas.<i>.bins.<j>.nreslabs (uint64_t) r- [--enable-stats]
1474 Cumulative number of times the current slab from which to allocate
1475 changed.
1476
1477 stats.arenas.<i>.bins.<j>.curslabs (size_t) r- [--enable-stats]
1478 Current number of slabs.
1479
1480 stats.arenas.<i>.bins.<j>.nonfull_slabs (size_t) r- [--enable-stats]
1481 Current number of nonfull slabs.
1482
1483 stats.arenas.<i>.bins.<j>.mutex.{counter} (counter specific type) r-
1484 [--enable-stats]
1485 Statistics on arena.<i>.bins.<j> mutex (arena bin scope; bin
1486 operation related). {counter} is one of the counters in mutex
1487 profiling counters.
1488
1489 stats.arenas.<i>.extents.<j>.n{extent_type} (size_t) r-
1490 [--enable-stats]
1491 Number of extents of the given type in this arena in the bucket
1492 corresponding to page size index <j>. The extent type is one of
1493 dirty, muzzy, or retained.
1494
1495 stats.arenas.<i>.extents.<j>.{extent_type}_bytes (size_t) r-
1496 [--enable-stats]
1497 Sum of the bytes managed by extents of the given type in this arena
1498 in the bucket corresponding to page size index <j>. The extent type
1499 is one of dirty, muzzy, or retained.
1500
1501 stats.arenas.<i>.lextents.<j>.nmalloc (uint64_t) r- [--enable-stats]
1502 Cumulative number of times a large extent of the corresponding size
1503 class was allocated from the arena, whether to fill the relevant
1504 tcache if opt.tcache is enabled and the size class is within the
1505 range being cached, or to directly satisfy an allocation request
1506 otherwise.
1507
1508 stats.arenas.<i>.lextents.<j>.ndalloc (uint64_t) r- [--enable-stats]
1509 Cumulative number of times a large extent of the corresponding size
1510 class was returned to the arena, whether to flush the relevant
1511 tcache if opt.tcache is enabled and the size class is within the
1512 range being cached, or to directly deallocate an allocation
1513 otherwise.
1514
1515 stats.arenas.<i>.lextents.<j>.nrequests (uint64_t) r- [--enable-stats]
1516 Cumulative number of allocation requests satisfied by large extents
1517 of the corresponding size class.
1518
1519 stats.arenas.<i>.lextents.<j>.curlextents (size_t) r- [--enable-stats]
1520 Current number of large allocations for this size class.
1521
1522 stats.arenas.<i>.mutexes.large.{counter} (counter specific type) r-
1523 [--enable-stats]
1524 Statistics on arena.<i>.large mutex (arena scope; large allocation
1525 related). {counter} is one of the counters in mutex profiling
1526 counters.
1527
1528 stats.arenas.<i>.mutexes.extent_avail.{counter} (counter specific type)
1529 r- [--enable-stats]
1530 Statistics on arena.<i>.extent_avail mutex (arena scope; extent
1531 avail related). {counter} is one of the counters in mutex
1532 profiling counters.
1533
1534 stats.arenas.<i>.mutexes.extents_dirty.{counter} (counter specific
1535 type) r- [--enable-stats]
1536 Statistics on arena.<i>.extents_dirty mutex (arena scope; dirty
1537 extents related). {counter} is one of the counters in mutex
1538 profiling counters.
1539
1540 stats.arenas.<i>.mutexes.extents_muzzy.{counter} (counter specific
1541 type) r- [--enable-stats]
1542 Statistics on arena.<i>.extents_muzzy mutex (arena scope; muzzy
1543 extents related). {counter} is one of the counters in mutex
1544 profiling counters.
1545
1546 stats.arenas.<i>.mutexes.extents_retained.{counter} (counter specific
1547 type) r- [--enable-stats]
1548 Statistics on arena.<i>.extents_retained mutex (arena scope;
1549 retained extents related). {counter} is one of the counters in
1550 mutex profiling counters.
1551
1552 stats.arenas.<i>.mutexes.decay_dirty.{counter} (counter specific type)
1553 r- [--enable-stats]
1554 Statistics on arena.<i>.decay_dirty mutex (arena scope; decay for
1555 dirty pages related). {counter} is one of the counters in mutex
1556 profiling counters.
1557
1558 stats.arenas.<i>.mutexes.decay_muzzy.{counter} (counter specific type)
1559 r- [--enable-stats]
1560 Statistics on arena.<i>.decay_muzzy mutex (arena scope; decay for
1561 muzzy pages related). {counter} is one of the counters in mutex
1562 profiling counters.
1563
1564 stats.arenas.<i>.mutexes.base.{counter} (counter specific type) r-
1565 [--enable-stats]
1566 Statistics on arena.<i>.base mutex (arena scope; base allocator
1567 related). {counter} is one of the counters in mutex profiling
1568 counters.
1569
1570 stats.arenas.<i>.mutexes.tcache_list.{counter} (counter specific type)
1571 r- [--enable-stats]
1572 Statistics on arena.<i>.tcache_list mutex (arena scope; tcache to
1573 arena association related). This mutex is expected to be accessed
1574 less often. {counter} is one of the counters in mutex profiling
1575 counters.
1576
1577HEAP PROFILE FORMAT
1578 Although the heap profiling functionality was originally designed to be
1579 compatible with the pprof command that is developed as part of the
1580 gperftools package[3], the addition of per thread heap profiling
1581 functionality required a different heap profile format. The jeprof
1582 command is derived from pprof, with enhancements to support the heap
1583 profile format described here.
1584
1585 In the following hypothetical heap profile, [...] indicates elision
1586 for the sake of compactness.
1587
1588 heap_v2/524288
1589 t*: 28106: 56637512 [0: 0]
1590 [...]
1591 t3: 352: 16777344 [0: 0]
1592 [...]
1593 t99: 17754: 29341640 [0: 0]
1594 [...]
1595 @ 0x5f86da8 0x5f5a1dc [...] 0x29e4d4e 0xa200316 0xabb2988 [...]
1596 t*: 13: 6688 [0: 0]
1597 t3: 12: 6496 [0: 0]
1598 t99: 1: 192 [0: 0]
1599 [...]
1600
1601 MAPPED_LIBRARIES:
1602 [...]
1603
1604 The following matches the above heap profile, but most tokens are
1605 replaced with <description> to indicate descriptions of the
1606 corresponding fields.
1607
1608 <heap_profile_format_version>/<mean_sample_interval>
1609 <aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1610 [...]
1611 <thread_3_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1612 [...]
1613 <thread_99_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1614 [...]
1615 @ <top_frame> <frame> [...] <frame> <frame> <frame> [...]
1616 <backtrace_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1617 <backtrace_thread_3>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1618 <backtrace_thread_99>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
1619 [...]
1620
1621 MAPPED_LIBRARIES:
1622 </proc/<pid>/maps>
1623
1624DEBUGGING
1625 When debugging, it is a good idea to configure/build jemalloc with the
1626 --enable-debug and --enable-fill options, and recompile the program
1627 with suitable options and symbols for debugger support. When so
1628 configured, jemalloc incorporates a wide variety of run-time assertions
1629 that catch application errors such as double-free, write-after-free,
1630 etc.
1631
1632 Programs often accidentally depend on “uninitialized” memory actually
1633 being filled with zero bytes. Junk filling (see the opt.junk option)
1634 tends to expose such bugs in the form of obviously incorrect results
1635 and/or coredumps. Conversely, zero filling (see the opt.zero option)
1636 eliminates the symptoms of such bugs. Between these two options, it is
1637 usually possible to quickly detect, diagnose, and eliminate such bugs.
1638
1639 This implementation does not provide much detail about the problems it
1640 detects, because the performance impact for storing such information
1641 would be prohibitive.
1642
1643DIAGNOSTIC MESSAGES
1644 If any of the memory allocation/deallocation functions detect an error
1645 or warning condition, a message will be printed to file descriptor
1646 STDERR_FILENO. Errors will result in the process dumping core. If the
1647 opt.abort option is set, most warnings are treated as errors.
1648
1649 The malloc_message variable allows the programmer to override the
1650 function which emits the text strings forming the errors and warnings
1651 if for some reason the STDERR_FILENO file descriptor is not suitable
1652 for this. malloc_message() takes the cbopaque pointer argument that is
1653 NULL unless overridden by the arguments in a call to
1654 malloc_stats_print(), followed by a string pointer. Please note that
1655 doing anything which tries to allocate memory in this function is
1656 likely to result in a crash or deadlock.
1657
1658 All messages are prefixed by “<jemalloc>: ”.
1659
1660RETURN VALUES
1661 Standard API
1662 The malloc() and calloc() functions return a pointer to the allocated
1663 memory if successful; otherwise a NULL pointer is returned and errno is
1664 set to ENOMEM.
1665
1666 The posix_memalign() function returns the value 0 if successful;
1667 otherwise it returns an error value. The posix_memalign() function will
1668 fail if:
1669
1670 EINVAL
1671 The alignment parameter is not a power of 2 at least as large as
1672 sizeof(void *).
1673
1674 ENOMEM
1675 Memory allocation error.
1676
1677 The aligned_alloc() function returns a pointer to the allocated memory
1678 if successful; otherwise a NULL pointer is returned and errno is set.
1679 The aligned_alloc() function will fail if:
1680
1681 EINVAL
1682 The alignment parameter is not a power of 2.
1683
1684 ENOMEM
1685 Memory allocation error.
1686
1687 The realloc() function returns a pointer, possibly identical to ptr, to
1688 the allocated memory if successful; otherwise a NULL pointer is
1689 returned, and errno is set to ENOMEM if the error was the result of an
1690 allocation failure. The realloc() function always leaves the original
1691 buffer intact when an error occurs.
1692
1693 The free() function returns no value.
1694
1695 Non-standard API
1696 The mallocx() and rallocx() functions return a pointer to the allocated
1697 memory if successful; otherwise a NULL pointer is returned to indicate
1698 insufficient contiguous memory was available to service the allocation
1699 request.
1700
1701 The xallocx() function returns the real size of the resulting resized
1702 allocation pointed to by ptr, which is a value less than size if the
1703 allocation could not be adequately grown in place.
1704
1705 The sallocx() function returns the real size of the allocation pointed
1706 to by ptr.
1707
1708 The nallocx() function returns the real size that would result from a
1709 successful equivalent mallocx() function call, or zero if insufficient
1710 memory is available to perform the size computation.
1711
1712 The mallctl(), mallctlnametomib(), and mallctlbymib() functions return
1713 0 on success; otherwise they return an error value. The functions will
1714 fail if:
1715
1716 EINVAL
1717 newp is not NULL, and newlen is too large or too small.
1718 Alternatively, *oldlenp is too large or too small; in this case as
1719 much data as possible are read despite the error.
1720
1721 ENOENT
1722 name or mib specifies an unknown/invalid value.
1723
1724 EPERM
1725 Attempt to read or write void value, or attempt to write read-only
1726 value.
1727
1728 EAGAIN
1729 A memory allocation failure occurred.
1730
1731 EFAULT
1732 An interface with side effects failed in some way not directly
1733 related to mallctl*() read/write processing.
1734
1735 The malloc_usable_size() function returns the usable size of the
1736 allocation pointed to by ptr.
1737
1738ENVIRONMENT
1739 The following environment variable affects the execution of the
1740 allocation functions:
1741
1742 MALLOC_CONF
1743 If the environment variable MALLOC_CONF is set, the characters it
1744 contains will be interpreted as options.
1745
1746EXAMPLES
1747 To dump core whenever a problem occurs:
1748
1749 ln -s 'abort:true' /etc/malloc.conf
1750
1751 To specify in the source that only one arena should be automatically
1752 created:
1753
1754 malloc_conf = "narenas:1";
1755
1756SEE ALSO
1757 madvise(2), mmap(2), sbrk(2), utrace(2), alloca(3), atexit(3),
1758 getpagesize(3)
1759
1760STANDARDS
1761 The malloc(), calloc(), realloc(), and free() functions conform to
1762 ISO/IEC 9899:1990 (“ISO C90”).
1763
1764 The posix_memalign() function conforms to IEEE Std 1003.1-2001
1765 (“POSIX.1”).
1766
1767AUTHOR
1768 Jason Evans
1769
1770NOTES
1771 1. jemalloc website
1772 http://jemalloc.net/
1773
1774 2. JSON format
1775 http://www.json.org/
1776
1777 3. gperftools package
1778 http://code.google.com/p/gperftools/
1779
1780
1781
1782jemalloc 5.2.1-0-gea6b3e973b47 08/05/2019 JEMALLOC(3)