JEMALLOC(3)                      User Manual                      JEMALLOC(3)

NAME
jemalloc - general purpose memory allocation functions

LIBRARY
This manual describes jemalloc
5.1.0-0-g61efbda7098de6fe64c362d309824864308c36d4. More information can
be found at the jemalloc website[1].

SYNOPSIS
#include <jemalloc/jemalloc.h>

Standard API
void *malloc(size_t size);

void *calloc(size_t number, size_t size);

int posix_memalign(void **ptr, size_t alignment, size_t size);

void *aligned_alloc(size_t alignment, size_t size);

void *realloc(void *ptr, size_t size);

void free(void *ptr);

Non-standard API
void *mallocx(size_t size, int flags);

void *rallocx(void *ptr, size_t size, int flags);

size_t xallocx(void *ptr, size_t size, size_t extra, int flags);

size_t sallocx(void *ptr, int flags);

void dallocx(void *ptr, int flags);

void sdallocx(void *ptr, size_t size, int flags);

size_t nallocx(size_t size, int flags);

int mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen);

int mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp);

int mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen);

void malloc_stats_print(void (*write_cb) (void *, const char *),
    void *cbopaque, const char *opts);

size_t malloc_usable_size(const void *ptr);

void (*malloc_message)(void *cbopaque, const char *s);

const char *malloc_conf;

DESCRIPTION
Standard API
The malloc() function allocates size bytes of uninitialized memory. The
allocated space is suitably aligned (after possible pointer coercion)
for storage of any type of object.

The calloc() function allocates space for number objects, each size
bytes in length. The result is identical to calling malloc() with an
argument of number * size, with the exception that the allocated memory
is explicitly initialized to zero bytes.

The posix_memalign() function allocates size bytes of memory such that
the allocation's base address is a multiple of alignment, and returns
the allocation in the value pointed to by ptr. The requested alignment
must be a power of 2 at least as large as sizeof(void *).

The aligned_alloc() function allocates size bytes of memory such that
the allocation's base address is a multiple of alignment. The requested
alignment must be a power of 2. Behavior is undefined if size is not an
integral multiple of alignment.

The realloc() function changes the size of the previously allocated
memory referenced by ptr to size bytes. The contents of the memory are
unchanged up to the lesser of the new and old sizes. If the new size is
larger, the contents of the newly allocated portion of the memory are
undefined. Upon success, the memory referenced by ptr is freed and a
pointer to the newly allocated memory is returned. Note that realloc()
may move the memory allocation, resulting in a different return value
than ptr. If ptr is NULL, the realloc() function behaves identically to
malloc() for the specified size.

The free() function causes the allocated memory referenced by ptr to be
made available for future allocations. If ptr is NULL, no action
occurs.

Non-standard API
The mallocx(), rallocx(), xallocx(), sallocx(), dallocx(), sdallocx(),
and nallocx() functions all have a flags argument that can be used to
specify options. The functions only check the options that are
contextually relevant. Use bitwise or (|) operations to specify one or
more of the following:

MALLOCX_LG_ALIGN(la)
    Align the memory allocation to start at an address that is a
    multiple of (1 << la). This macro does not validate that la is
    within the valid range.

MALLOCX_ALIGN(a)
    Align the memory allocation to start at an address that is a
    multiple of a, where a is a power of two. This macro does not
    validate that a is a power of 2.

MALLOCX_ZERO
    Initialize newly allocated memory to contain zero bytes. In the
    growing reallocation case, the real size prior to reallocation
    defines the boundary between untouched bytes and those that are
    initialized to contain zero bytes. If this macro is absent, newly
    allocated memory is uninitialized.

MALLOCX_TCACHE(tc)
    Use the thread-specific cache (tcache) specified by the identifier
    tc, which must have been acquired via the tcache.create mallctl.
    This macro does not validate that tc specifies a valid identifier.

MALLOCX_TCACHE_NONE
    Do not use a thread-specific cache (tcache). Unless
    MALLOCX_TCACHE(tc) or MALLOCX_TCACHE_NONE is specified, an
    automatically managed tcache will be used under many circumstances.
    This macro cannot be used in the same flags argument as
    MALLOCX_TCACHE(tc).

MALLOCX_ARENA(a)
    Use the arena specified by the index a. This macro has no effect
    for regions that were allocated via an arena other than the one
    specified. This macro does not validate that a specifies an arena
    index in the valid range.

The mallocx() function allocates at least size bytes of memory, and
returns a pointer to the base address of the allocation. Behavior is
undefined if size is 0.

The rallocx() function resizes the allocation at ptr to be at least
size bytes, and returns a pointer to the base address of the resulting
allocation, which may or may not have moved from its original location.
Behavior is undefined if size is 0.

The xallocx() function resizes the allocation at ptr in place to be at
least size bytes, and returns the real size of the allocation. If extra
is non-zero, an attempt is made to resize the allocation to be at least
(size + extra) bytes, though inability to allocate the extra byte(s)
will not by itself result in failure to resize. Behavior is undefined
if size is 0, or if (size + extra > SIZE_T_MAX).

The sallocx() function returns the real size of the allocation at ptr.

The dallocx() function causes the memory referenced by ptr to be made
available for future allocations.

The sdallocx() function is an extension of dallocx() with a size
parameter to allow the caller to pass in the allocation size as an
optimization. The minimum valid input size is the original requested
size of the allocation, and the maximum valid input size is the
corresponding value returned by nallocx() or sallocx().

The nallocx() function allocates no memory, but it performs the same
size computation as the mallocx() function, and returns the real size
of the allocation that would result from the equivalent mallocx()
function call, or 0 if the inputs exceed the maximum supported size
class and/or alignment. Behavior is undefined if size is 0.

The mallctl() function provides a general interface for introspecting
the memory allocator, as well as setting modifiable parameters and
triggering actions. The period-separated name argument specifies a
location in a tree-structured namespace; see the MALLCTL NAMESPACE
section for documentation on the tree contents. To read a value, pass a
pointer via oldp to adequate space to contain the value, and a pointer
to its length via oldlenp; otherwise pass NULL and NULL. Similarly, to
write a value, pass a pointer to the value via newp, and its length via
newlen; otherwise pass NULL and 0.

The mallctlnametomib() function provides a way to avoid repeated name
lookups for applications that repeatedly query the same portion of the
namespace, by translating a name to a “Management Information Base”
(MIB) that can be passed repeatedly to mallctlbymib(). Upon successful
return from mallctlnametomib(), mibp contains an array of *miblenp
integers, where *miblenp is the lesser of the number of components in
name and the input value of *miblenp. Thus it is possible to pass a
*miblenp that is smaller than the number of period-separated name
components, which results in a partial MIB that can be used as the
basis for constructing a complete MIB. For name components that are
integers (e.g. the 2 in arenas.bin.2.size), the corresponding MIB
component will always be that integer. Therefore, it is legitimate to
construct code like the following:

    unsigned nbins, i;
    size_t mib[4];
    size_t len, miblen;

    len = sizeof(nbins);
    mallctl("arenas.nbins", &nbins, &len, NULL, 0);

    miblen = 4;
    mallctlnametomib("arenas.bin.0.size", mib, &miblen);
    for (i = 0; i < nbins; i++) {
        size_t bin_size;

        mib[2] = i;
        len = sizeof(bin_size);
        mallctlbymib(mib, miblen, (void *)&bin_size, &len, NULL, 0);
        /* Do something with bin_size... */
    }

The malloc_stats_print() function writes summary statistics via the
write_cb callback function pointer and cbopaque data passed to
write_cb, or malloc_message() if write_cb is NULL. The statistics are
presented in human-readable form unless “J” is specified as a character
within the opts string, in which case the statistics are presented in
JSON format[2]. This function can be called repeatedly. General
information that never changes during execution can be omitted by
specifying “g” as a character within the opts string. Note that
malloc_stats_print() uses the mallctl*() functions internally, so
inconsistent statistics can be reported if multiple threads use these
functions simultaneously. If --enable-stats is specified during
configuration, “m”, “d”, and “a” can be specified to omit merged arena,
destroyed merged arena, and per arena statistics, respectively; “b” and
“l” can be specified to omit per size class statistics for bins and
large objects, respectively; “x” can be specified to omit all mutex
statistics. Unrecognized characters are silently ignored. Note that
thread caching may prevent some statistics from being completely up to
date, since extra locking would be required to merge counters that
track thread cache operations.

The malloc_usable_size() function returns the usable size of the
allocation pointed to by ptr. The return value may be larger than the
size that was requested during allocation. The malloc_usable_size()
function is not a mechanism for in-place realloc(); rather it is
provided solely as a tool for introspection purposes. Any discrepancy
between the requested allocation size and the size reported by
malloc_usable_size() should not be depended on, since such behavior is
entirely implementation-dependent.

TUNING
Once, when the first call is made to one of the memory allocation
routines, the allocator initializes its internals based in part on
various options that can be specified at compile- or run-time.

The string specified via --with-malloc-conf, the string pointed to by
the global variable malloc_conf, the “name” of the file referenced by
the symbolic link named /etc/malloc.conf, and the value of the
environment variable MALLOC_CONF will be interpreted, in that order,
from left to right as options. Note that malloc_conf may be read before
main() is entered, so the declaration of malloc_conf should specify an
initializer that contains the final value to be read by jemalloc.
--with-malloc-conf and malloc_conf are compile-time mechanisms, whereas
/etc/malloc.conf and MALLOC_CONF can be safely set at any time prior to
program invocation.

An options string is a comma-separated list of option:value pairs.
There is one key corresponding to each opt.* mallctl (see the MALLCTL
NAMESPACE section for options documentation). For example,
abort:true,narenas:1 sets the opt.abort and opt.narenas options. Some
options have boolean values (true/false), others have integer values
(base 8, 10, or 16, depending on prefix), and yet others have raw
string values.

IMPLEMENTATION NOTES
Traditionally, allocators have used sbrk(2) to obtain memory, which is
suboptimal for several reasons, including race conditions, increased
fragmentation, and artificial limitations on maximum usable memory. If
sbrk(2) is supported by the operating system, this allocator uses both
mmap(2) and sbrk(2), in that order of preference; otherwise only
mmap(2) is used.

This allocator uses multiple arenas in order to reduce lock contention
for threaded programs on multi-processor systems. This works well with
regard to threading scalability, but incurs some costs. There is a
small fixed per-arena overhead, and additionally, arenas manage memory
completely independently of each other, which means a small fixed
increase in overall memory fragmentation. These overheads are not
generally an issue, given the number of arenas normally used. Note that
using substantially more arenas than the default is not likely to
improve performance, mainly due to reduced cache performance. However,
it may make sense to reduce the number of arenas if an application does
not make much use of the allocation functions.

In addition to multiple arenas, this allocator supports thread-specific
caching, in order to make it possible to completely avoid
synchronization for most allocation requests. Such caching allows very
fast allocation in the common case, but it increases memory usage and
fragmentation, since a bounded number of objects can remain allocated
in each thread cache.

Memory is conceptually broken into extents. Extents are always aligned
to multiples of the page size. This alignment makes it possible to find
metadata for user objects quickly. User objects are broken into two
categories according to size: small and large. Contiguous small objects
comprise a slab, which resides within a single extent, whereas large
objects each have their own extents backing them.

Small objects are managed in groups by slabs. Each slab maintains a
bitmap to track which regions are in use. Allocation requests that are
no more than half the quantum (8 or 16, depending on architecture) are
rounded up to the nearest power of two that is at least sizeof(double).
All other object size classes are multiples of the quantum, spaced such
that there are four size classes for each doubling in size, which
limits internal fragmentation to approximately 20% for all but the
smallest size classes. Small size classes are smaller than four times
the page size, and large size classes extend from four times the page
size up to the largest size class that does not exceed PTRDIFF_MAX.

Allocations are packed tightly together, which can be an issue for
multi-threaded applications. If you need to assure that allocations do
not suffer from cacheline sharing, round your allocation requests up to
the nearest multiple of the cacheline size, or specify cacheline
alignment when allocating.

The realloc(), rallocx(), and xallocx() functions may resize
allocations without moving them under limited circumstances. Unlike the
*allocx() API, the standard API does not officially round up the usable
size of an allocation to the nearest size class, so technically it is
necessary to call realloc() to grow e.g. a 9-byte allocation to 16
bytes, or shrink a 16-byte allocation to 9 bytes. Growth and shrinkage
trivially succeeds in place as long as the pre-size and post-size both
round up to the same size class. No other API guarantees are made
regarding in-place resizing, but the current implementation also tries
to resize large allocations in place, as long as the pre-size and
post-size are both large. For shrinkage to succeed, the extent
allocator must support splitting (see arena.<i>.extent_hooks). Growth
only succeeds if the trailing memory is currently available, and the
extent allocator supports merging.

Assuming 4 KiB pages and a 16-byte quantum on a 64-bit system, the size
classes in each category are as shown in Table 1.

Table 1. Size classes
┌─────────┬─────────┬─────────────────────┐
│Category │ Spacing │        Size         │
├─────────┼─────────┼─────────────────────┤
│         │   lg    │ [8]                 │
│         ├─────────┼─────────────────────┤
│         │   16    │ [16, 32, 48, 64,    │
│         │         │ 80, 96, 112, 128]   │
│         ├─────────┼─────────────────────┤
│         │   32    │ [160, 192, 224,     │
│         │         │ 256]                │
│         ├─────────┼─────────────────────┤
│         │   64    │ [320, 384, 448,     │
│         │         │ 512]                │
│         ├─────────┼─────────────────────┤
│         │   128   │ [640, 768, 896,     │
│Small    │         │ 1024]               │
│         ├─────────┼─────────────────────┤
│         │   256   │ [1280, 1536, 1792,  │
│         │         │ 2048]               │
│         ├─────────┼─────────────────────┤
│         │   512   │ [2560, 3072, 3584,  │
│         │         │ 4096]               │
│         ├─────────┼─────────────────────┤
│         │  1 KiB  │ [5 KiB, 6 KiB, 7    │
│         │         │ KiB, 8 KiB]         │
│         ├─────────┼─────────────────────┤
│         │  2 KiB  │ [10 KiB, 12 KiB, 14 │
│         │         │ KiB]                │
├─────────┼─────────┼─────────────────────┤
│         │  2 KiB  │ [16 KiB]            │
│         ├─────────┼─────────────────────┤
│         │  4 KiB  │ [20 KiB, 24 KiB, 28 │
│         │         │ KiB, 32 KiB]        │
│         ├─────────┼─────────────────────┤
│         │  8 KiB  │ [40 KiB, 48 KiB, 56 │
│         │         │ KiB, 64 KiB]        │
│         ├─────────┼─────────────────────┤
│         │ 16 KiB  │ [80 KiB, 96 KiB,    │
│         │         │ 112 KiB, 128 KiB]   │
│         ├─────────┼─────────────────────┤
│         │ 32 KiB  │ [160 KiB, 192 KiB,  │
│         │         │ 224 KiB, 256 KiB]   │
│         ├─────────┼─────────────────────┤
│         │ 64 KiB  │ [320 KiB, 384 KiB,  │
│         │         │ 448 KiB, 512 KiB]   │
│         ├─────────┼─────────────────────┤
│         │ 128 KiB │ [640 KiB, 768 KiB,  │
│         │         │ 896 KiB, 1 MiB]     │
│         ├─────────┼─────────────────────┤
│         │ 256 KiB │ [1280 KiB, 1536     │
│         │         │ KiB, 1792 KiB, 2    │
│Large    │         │ MiB]                │
│         ├─────────┼─────────────────────┤
│         │ 512 KiB │ [2560 KiB, 3 MiB,   │
│         │         │ 3584 KiB, 4 MiB]    │
│         ├─────────┼─────────────────────┤
│         │  1 MiB  │ [5 MiB, 6 MiB, 7    │
│         │         │ MiB, 8 MiB]         │
│         ├─────────┼─────────────────────┤
│         │  2 MiB  │ [10 MiB, 12 MiB, 14 │
│         │         │ MiB, 16 MiB]        │
│         ├─────────┼─────────────────────┤
│         │  4 MiB  │ [20 MiB, 24 MiB, 28 │
│         │         │ MiB, 32 MiB]        │
│         ├─────────┼─────────────────────┤
│         │  8 MiB  │ [40 MiB, 48 MiB, 56 │
│         │         │ MiB, 64 MiB]        │
│         ├─────────┼─────────────────────┤
│         │   ...   │ ...                 │
│         ├─────────┼─────────────────────┤
│         │ 512 PiB │ [2560 PiB, 3 EiB,   │
│         │         │ 3584 PiB, 4 EiB]    │
│         ├─────────┼─────────────────────┤
│         │  1 EiB  │ [5 EiB, 6 EiB, 7    │
│         │         │ EiB]                │
└─────────┴─────────┴─────────────────────┘

MALLCTL NAMESPACE
The following names are defined in the namespace accessible via the
mallctl*() functions. Value types are specified in parentheses, their
readable/writable statuses are encoded as rw, r-, -w, or --, and
required build configuration flags follow, if any. A name element
encoded as <i> or <j> indicates an integer component, where the integer
varies from 0 to some upper value that must be determined via
introspection. In the case of stats.arenas.<i>.* and
arena.<i>.{initialized,purge,decay,dss}, <i> equal to
MALLCTL_ARENAS_ALL can be used to operate on all arenas or access the
summation of statistics from all arenas; similarly <i> equal to
MALLCTL_ARENAS_DESTROYED can be used to access the summation of
statistics from all destroyed arenas. These constants can be utilized
either via mallctlnametomib() followed by mallctlbymib(), or via code
such as the following:

    #define STRINGIFY_HELPER(x) #x
    #define STRINGIFY(x) STRINGIFY_HELPER(x)

    mallctl("arena." STRINGIFY(MALLCTL_ARENAS_ALL) ".decay",
        NULL, NULL, NULL, 0);

Take special note of the epoch mallctl, which controls refreshing of
cached dynamic statistics.

version (const char *) r-
    Return the jemalloc version string.

epoch (uint64_t) rw
    If a value is passed in, refresh the data from which the mallctl*()
    functions report values, and increment the epoch. Return the
    current epoch. This is useful for detecting whether another thread
    caused a refresh.

background_thread (bool) rw
    Enable/disable internal background worker threads. When set to
    true, background threads are created on demand (the number of
    background threads will be no more than the number of CPUs or
    active arenas). Threads run periodically and handle purging
    asynchronously. When switching off, background threads are
    terminated synchronously. Note that after a call to fork(2), the
    state in the child process will be disabled regardless of the
    state in the parent process. See stats.background_thread for
    related stats. opt.background_thread can be used to set the
    default option. This option is only available on selected
    pthread-based platforms.

max_background_threads (size_t) rw
    Maximum number of background worker threads that will be created.
    This value is capped at opt.max_background_threads at startup.

config.cache_oblivious (bool) r-
    --enable-cache-oblivious was specified during build configuration.

config.debug (bool) r-
    --enable-debug was specified during build configuration.

config.fill (bool) r-
    --enable-fill was specified during build configuration.

config.lazy_lock (bool) r-
    --enable-lazy-lock was specified during build configuration.

config.malloc_conf (const char *) r-
    Embedded configure-time-specified run-time options string, empty
    unless --with-malloc-conf was specified during build configuration.

config.prof (bool) r-
    --enable-prof was specified during build configuration.

config.prof_libgcc (bool) r-
    --disable-prof-libgcc was not specified during build configuration.

config.prof_libunwind (bool) r-
    --enable-prof-libunwind was specified during build configuration.

config.stats (bool) r-
    --enable-stats was specified during build configuration.

config.utrace (bool) r-
    --enable-utrace was specified during build configuration.

config.xmalloc (bool) r-
    --enable-xmalloc was specified during build configuration.

opt.abort (bool) r-
    Abort-on-warning enabled/disabled. If true, most warnings are
    fatal. Note that runtime option warnings are not included (see
    opt.abort_conf for that). The process will call abort(3) in these
    cases. This option is disabled by default unless --enable-debug is
    specified during configuration, in which case it is enabled by
    default.

opt.abort_conf (bool) r-
    Abort-on-invalid-configuration enabled/disabled. If true, invalid
    runtime options are fatal. The process will call abort(3) in these
    cases. This option is disabled by default unless --enable-debug is
    specified during configuration, in which case it is enabled by
    default.

opt.metadata_thp (const char *) r-
    Controls whether to allow jemalloc to use transparent huge pages
    (THP) for internal metadata (see stats.metadata). “always” allows
    such usage. “auto” uses no THP initially, but may begin to do so
    when metadata usage reaches a certain level. The default is
    “disabled”.

opt.retain (bool) r-
    If true, retain unused virtual memory for later reuse rather than
    discarding it by calling munmap(2) or equivalent (see
    stats.retained for related details). This option is disabled by
    default unless discarding virtual memory is known to trigger
    platform-specific performance problems, e.g. for [64-bit] Linux,
    which has a quirk in its virtual memory allocation algorithm that
    causes semi-permanent VM map holes under normal jemalloc operation.
    Although munmap(2) causes issues on 32-bit Linux as well, retaining
    virtual memory for 32-bit Linux is disabled by default due to the
    practical possibility of address space exhaustion.

opt.dss (const char *) r-
    dss (sbrk(2)) allocation precedence as related to mmap(2)
    allocation. The following settings are supported if sbrk(2) is
    supported by the operating system: “disabled”, “primary”, and
    “secondary”; otherwise only “disabled” is supported. The default is
    “secondary” if sbrk(2) is supported by the operating system;
    “disabled” otherwise.

opt.narenas (unsigned) r-
    Maximum number of arenas to use for automatic multiplexing of
    threads and arenas. The default is four times the number of CPUs,
    or one if there is a single CPU.

opt.percpu_arena (const char *) r-
    Per CPU arena mode. Use the “percpu” setting to enable this
    feature, which uses the number of CPUs to determine the number of
    arenas, and binds threads to arenas dynamically based on the CPU
    the thread is currently running on. The “phycpu” setting uses one
    arena per physical CPU, meaning the two hyperthreads on the same
    CPU share one arena. Note that no runtime checking regarding the
    availability of hyperthreading is currently done. When set to
    “disabled”, narenas and the thread-to-arena association are not
    affected by this option. The default is “disabled”.

opt.background_thread (const bool) r-
    Internal background worker threads enabled/disabled. Because of
    potential circular dependencies, enabling background threads
    through this option may cause a crash or deadlock during
    initialization. For a reliable way to use this feature, see
    background_thread for dynamic control options and details. This
    option is disabled by default.

opt.max_background_threads (const size_t) r-
    Maximum number of background threads that will be created if
    background_thread is set. Defaults to the number of CPUs.

opt.dirty_decay_ms (ssize_t) r-
    Approximate time in milliseconds from the creation of a set of
    unused dirty pages until an equivalent set of unused dirty pages is
    purged (i.e. converted to muzzy via e.g. madvise(...MADV_FREE) if
    supported by the operating system, or converted to clean otherwise)
    and/or reused. Dirty pages are defined as previously having been
    potentially written to by the application, and therefore consuming
    physical memory, yet having no current use. The pages are
    incrementally purged according to a sigmoidal decay curve that
    starts and ends with zero purge rate. A decay time of 0 causes all
    unused dirty pages to be purged immediately upon creation. A decay
    time of -1 disables purging. The default decay time is 10 seconds.
    See arenas.dirty_decay_ms and arena.<i>.dirty_decay_ms for related
    dynamic control options. See opt.muzzy_decay_ms for a description
    of muzzy pages.

opt.muzzy_decay_ms (ssize_t) r-
    Approximate time in milliseconds from the creation of a set of
    unused muzzy pages until an equivalent set of unused muzzy pages is
    purged (i.e. converted to clean) and/or reused. Muzzy pages are
    defined as previously having been unused dirty pages that were
    subsequently purged in a manner that left them subject to the
    reclamation whims of the operating system (e.g.
    madvise(...MADV_FREE)), and therefore in an indeterminate state.
    The pages are incrementally purged according to a sigmoidal decay
    curve that starts and ends with zero purge rate. A decay time of 0
    causes all unused muzzy pages to be purged immediately upon
    creation. A decay time of -1 disables purging. The default decay
    time is 10 seconds. See arenas.muzzy_decay_ms and
    arena.<i>.muzzy_decay_ms for related dynamic control options.

opt.lg_extent_max_active_fit (size_t) r-
    When reusing dirty extents, this determines the (log base 2 of the)
    maximum ratio between the size of the active extent selected (to
    split off from) and the size of the requested allocation. This
    prevents the splitting of large active extents for smaller
    allocations, which can reduce fragmentation over the long run
    (especially for non-active extents). A lower value may reduce
    fragmentation, at the cost of extra active extents. The default
    value is 6, which gives a maximum ratio of 64 (2^6).

607
608 opt.stats_print (bool) r-
609 Enable/disable statistics printing at exit. If enabled, the
610 malloc_stats_print() function is called at program exit via an
611 atexit(3) function. opt.stats_print_opts can be combined to
612 specify output options. If --enable-stats is specified during
613 configuration, this has the potential to cause deadlock for a
614 multi-threaded process that exits while one or more threads are
615 executing in the memory allocation functions. Furthermore, atexit()
616 may allocate memory during application initialization and then
617 deadlock internally when jemalloc in turn calls atexit(), so this
618 option is not universally usable (though the application can
619 register its own atexit() function with equivalent functionality).
620 Therefore, this option should only be used with care; it is
621 primarily intended as a performance tuning aid during application
622 development. This option is disabled by default.
623
624 opt.stats_print_opts (const char *) r-
625 Options (the opts string) to pass to the malloc_stats_print() at
626 exit (enabled through opt.stats_print). See available options in
627 malloc_stats_print(). Has no effect unless opt.stats_print is
628 enabled. The default is “”.
629
opt.junk (const char *) r- [--enable-fill]
    Junk filling. If set to “alloc”, each byte of uninitialized
    allocated memory will be initialized to 0xa5. If set to “free”, all
    deallocated memory will be initialized to 0x5a. If set to “true”,
    both allocated and deallocated memory will be initialized, and if
    set to “false”, junk filling is disabled entirely. This is intended
    for debugging and will impact performance negatively. This option
    is “false” by default unless --enable-debug is specified during
    configuration, in which case it is “true” by default.

opt.zero (bool) r- [--enable-fill]
    Zero filling enabled/disabled. If enabled, each byte of
    uninitialized allocated memory will be initialized to 0. Note that
    this initialization only happens once for each byte, so realloc()
    and rallocx() calls do not zero memory that was previously
    allocated. This is intended for debugging and will impact
    performance negatively. This option is disabled by default.

opt.utrace (bool) r- [--enable-utrace]
    Allocation tracing based on utrace(2) enabled/disabled. This option
    is disabled by default.

opt.xmalloc (bool) r- [--enable-xmalloc]
    Abort-on-out-of-memory enabled/disabled. If enabled, rather than
    returning failure for any allocation function, display a diagnostic
    message on STDERR_FILENO and cause the program to drop core (using
    abort(3)). If an application is designed to depend on this
    behavior, set the option at compile time by including the following
    in the source code:

        malloc_conf = "xmalloc:true";

    This option is disabled by default.

663
664 opt.tcache (bool) r-
665 Thread-specific caching (tcache) enabled/disabled. When there are
666 multiple threads, each thread uses a tcache for objects up to a
667 certain size. Thread-specific caching allows many allocations to be
668 satisfied without performing any thread synchronization, at the
669 cost of increased memory use. See the opt.lg_tcache_max option for
670 related tuning information. This option is enabled by default.
671
672 opt.lg_tcache_max (size_t) r-
673 Maximum size class (log base 2) to cache in the thread-specific
674 cache (tcache). At a minimum, all small size classes are cached,
675 and at a maximum all large size classes are cached. The default
676 maximum is 32 KiB (2^15).
677
   opt.thp (const char *) r-
       Transparent huge page (THP) mode. The settings "always", "never",
       and "default" are available if THP is supported by the operating
       system. The "always" setting enables transparent huge pages for all
       user memory mappings with MADV_HUGEPAGE; "never" ensures no
       transparent huge pages with MADV_NOHUGEPAGE; the default setting
       "default" makes no changes. Note that this option does not affect
       THP for jemalloc internal metadata (see opt.metadata_thp); in
       addition, for arenas with customized extent_hooks, this option is
       bypassed as it is implemented as part of the default extent hooks.

   opt.prof (bool) r- [--enable-prof]
       Memory profiling enabled/disabled. If enabled, profile memory
       allocation activity. See the opt.prof_active option for on-the-fly
       activation/deactivation. See the opt.lg_prof_sample option for
       probabilistic sampling control. See the opt.prof_accum option for
       control of cumulative sample reporting. See the
       opt.lg_prof_interval option for information on interval-triggered
       profile dumping, the opt.prof_gdump option for information on
       high-water-triggered profile dumping, and the opt.prof_final option
       for final profile dumping. Profile output is compatible with the
       jeprof command, which is based on the pprof that is developed as
       part of the gperftools package[3]. See HEAP PROFILE FORMAT for heap
       profile format documentation.

   opt.prof_prefix (const char *) r- [--enable-prof]
       Filename prefix for profile dumps. If the prefix is set to the
       empty string, no automatic dumps will occur; this is primarily
       useful for disabling the automatic final heap dump (which also
       disables leak reporting, if enabled). The default prefix is jeprof.

   opt.prof_active (bool) r- [--enable-prof]
       Profiling activated/deactivated. This is a secondary control
       mechanism that makes it possible to start the application with
       profiling enabled (see the opt.prof option) but inactive, then
       toggle profiling at any time during program execution with the
       prof.active mallctl. This option is enabled by default.

   opt.prof_thread_active_init (bool) r- [--enable-prof]
       Initial setting for thread.prof.active in newly created threads.
       The initial setting for newly created threads can also be changed
       during execution via the prof.thread_active_init mallctl. This
       option is enabled by default.

   opt.lg_prof_sample (size_t) r- [--enable-prof]
       Average interval (log base 2) between allocation samples, as
       measured in bytes of allocation activity. Increasing the sampling
       interval decreases profile fidelity, but also decreases the
       computational overhead. The default sample interval is 512 KiB
       (2^19 B).

   opt.prof_accum (bool) r- [--enable-prof]
       Reporting of cumulative object/byte counts in profile dumps
       enabled/disabled. If this option is enabled, every unique backtrace
       must be stored for the duration of execution. Depending on the
       application, this can impose a large memory overhead, and the
       cumulative counts are not always of interest. This option is
       disabled by default.

   opt.lg_prof_interval (ssize_t) r- [--enable-prof]
       Average interval (log base 2) between memory profile dumps, as
       measured in bytes of allocation activity. The actual interval
       between dumps may be sporadic because decentralized allocation
       counters are used to avoid synchronization bottlenecks. Profiles
       are dumped to files named according to the pattern
       <prefix>.<pid>.<seq>.i<iseq>.heap, where <prefix> is controlled by
       the opt.prof_prefix option. By default, interval-triggered profile
       dumping is disabled (encoded as -1).

   opt.prof_gdump (bool) r- [--enable-prof]
       Set the initial state of prof.gdump, which when enabled triggers a
       memory profile dump every time the total virtual memory exceeds the
       previous maximum. This option is disabled by default.

   opt.prof_final (bool) r- [--enable-prof]
       Use an atexit(3) function to dump final memory usage to a file
       named according to the pattern <prefix>.<pid>.<seq>.f.heap, where
       <prefix> is controlled by the opt.prof_prefix option. Note that
       atexit() may allocate memory during application initialization and
       then deadlock internally when jemalloc in turn calls atexit(), so
       this option is not universally usable (though the application can
       register its own atexit() function with equivalent functionality).
       This option is disabled by default.

   opt.prof_leak (bool) r- [--enable-prof]
       Leak reporting enabled/disabled. If enabled, use an atexit(3)
       function to report memory leaks detected by allocation sampling.
       See the opt.prof option for information on analyzing heap profile
       output. This option is disabled by default.

   thread.arena (unsigned) rw
       Get or set the arena associated with the calling thread. If the
       specified arena was not initialized beforehand (see the
       arena.<i>.initialized mallctl), it will be automatically
       initialized as a side effect of calling this interface.

   thread.allocated (uint64_t) r- [--enable-stats]
       Get the total number of bytes ever allocated by the calling thread.
       This counter has the potential to wrap around; it is up to the
       application to appropriately interpret the counter in such cases.

   thread.allocatedp (uint64_t *) r- [--enable-stats]
       Get a pointer to the value that is returned by the thread.allocated
       mallctl. This is useful for avoiding the overhead of repeated
       mallctl*() calls.

   thread.deallocated (uint64_t) r- [--enable-stats]
       Get the total number of bytes ever deallocated by the calling
       thread. This counter has the potential to wrap around; it is up to
       the application to appropriately interpret the counter in such
       cases.

   thread.deallocatedp (uint64_t *) r- [--enable-stats]
       Get a pointer to the value that is returned by the
       thread.deallocated mallctl. This is useful for avoiding the
       overhead of repeated mallctl*() calls.

   thread.tcache.enabled (bool) rw
       Enable/disable calling thread's tcache. The tcache is implicitly
       flushed as a side effect of becoming disabled (see
       thread.tcache.flush).

   thread.tcache.flush (void) --
       Flush calling thread's thread-specific cache (tcache). This
       interface releases all cached objects and internal data structures
       associated with the calling thread's tcache. Ordinarily, this
       interface need not be called, since automatic periodic incremental
       garbage collection occurs, and the thread cache is automatically
       discarded when a thread exits. However, garbage collection is
       triggered by allocation activity, so it is possible for a thread
       that stops allocating/deallocating to retain its cache
       indefinitely, in which case the developer may find manual flushing
       useful.

   thread.prof.name (const char *) r- or -w [--enable-prof]
       Get/set the descriptive name associated with the calling thread in
       memory profile dumps. An internal copy of the name string is
       created, so the input string need not be maintained after this
       interface completes execution. The output string of this interface
       should be copied for non-ephemeral uses, because multiple
       implementation details can cause asynchronous string deallocation.
       Furthermore, each invocation of this interface can only read or
       write; simultaneous read/write is not supported due to string
       lifetime limitations. The name string must be nil-terminated and
       comprised only of characters in the sets recognized by isgraph(3)
       and isblank(3).

   thread.prof.active (bool) rw [--enable-prof]
       Control whether sampling is currently active for the calling
       thread. This is an activation mechanism in addition to prof.active;
       both must be active for the calling thread to sample. This flag is
       enabled by default.

   tcache.create (unsigned) r-
       Create an explicit thread-specific cache (tcache) and return an
       identifier that can be passed to the MALLOCX_TCACHE(tc) macro to
       explicitly use the specified cache rather than the automatically
       managed one that is used by default. Each explicit cache can be
       used by only one thread at a time; the application must assure that
       this constraint holds.

   tcache.flush (unsigned) -w
       Flush the specified thread-specific cache (tcache). The same
       considerations apply to this interface as to thread.tcache.flush,
       except that the tcache will never be automatically discarded.

   tcache.destroy (unsigned) -w
       Flush the specified thread-specific cache (tcache) and make the
       identifier available for use during a future tcache creation.

   arena.<i>.initialized (bool) r-
       Get whether the specified arena's statistics are initialized (i.e.
       the arena was initialized prior to the current epoch). This
       interface can also be nominally used to query whether the merged
       statistics corresponding to MALLCTL_ARENAS_ALL are initialized
       (always true).

   arena.<i>.decay (void) --
       Trigger decay-based purging of unused dirty/muzzy pages for arena
       <i>, or for all arenas if <i> equals MALLCTL_ARENAS_ALL. The
       proportion of unused dirty/muzzy pages to be purged depends on the
       current time; see opt.dirty_decay_ms and opt.muzzy_decay_ms for
       details.

   arena.<i>.purge (void) --
       Purge all unused dirty pages for arena <i>, or for all arenas if
       <i> equals MALLCTL_ARENAS_ALL.

   arena.<i>.reset (void) --
       Discard all of the arena's extant allocations. This interface can
       only be used with arenas explicitly created via arenas.create. None
       of the arena's discarded/cached allocations may be accessed
       afterward. As part of this requirement, all thread caches which
       were used to allocate/deallocate in conjunction with the arena must
       be flushed beforehand.

   arena.<i>.destroy (void) --
       Destroy the arena. Discard all of the arena's extant allocations
       using the same mechanism as for arena.<i>.reset (with all the same
       constraints and side effects), merge the arena stats into those
       accessible at arena index MALLCTL_ARENAS_DESTROYED, and then
       completely discard all metadata associated with the arena. Future
       calls to arenas.create may recycle the arena index. Destruction
       will fail if any threads are currently associated with the arena as
       a result of calls to thread.arena.

   arena.<i>.dss (const char *) rw
       Set the precedence of dss allocation as related to mmap allocation
       for arena <i>, or for all arenas if <i> equals MALLCTL_ARENAS_ALL.
       See opt.dss for supported settings.

   arena.<i>.dirty_decay_ms (ssize_t) rw
       Current per-arena approximate time in milliseconds from the
       creation of a set of unused dirty pages until an equivalent set of
       unused dirty pages is purged and/or reused. Each time this
       interface is set, all currently unused dirty pages are considered
       to have fully decayed, which causes immediate purging of all unused
       dirty pages unless the decay time is set to -1 (i.e. purging
       disabled). See opt.dirty_decay_ms for additional information.

   arena.<i>.muzzy_decay_ms (ssize_t) rw
       Current per-arena approximate time in milliseconds from the
       creation of a set of unused muzzy pages until an equivalent set of
       unused muzzy pages is purged and/or reused. Each time this
       interface is set, all currently unused muzzy pages are considered
       to have fully decayed, which causes immediate purging of all unused
       muzzy pages unless the decay time is set to -1 (i.e. purging
       disabled). See opt.muzzy_decay_ms for additional information.

   arena.<i>.retain_grow_limit (size_t) rw
       Maximum size to grow the retained region (only relevant when
       opt.retain is enabled). This controls the maximum increment used to
       expand virtual memory, or to allocate through
       arena.<i>.extent_hooks. In particular, if customized extent hooks
       reserve physical memory (e.g. 1G huge pages), this is useful to
       control the allocation hook's input size. The default is no limit.

   arena.<i>.extent_hooks (extent_hooks_t *) rw
       Get or set the extent management hook functions for arena <i>. The
       functions must be capable of operating on all extant extents
       associated with arena <i>, usually by passing unknown extents to
       the replaced functions. In practice, it is feasible to control
       allocation for arenas explicitly created via arenas.create such
       that all extents originate from an application-supplied extent
       allocator (by specifying the custom extent hook functions during
       arena creation), but the automatically created arenas will have
       already created extents prior to the application having an
       opportunity to take over extent allocation.

           typedef struct extent_hooks_s extent_hooks_t;
           struct extent_hooks_s {
               extent_alloc_t *alloc;
               extent_dalloc_t *dalloc;
               extent_destroy_t *destroy;
               extent_commit_t *commit;
               extent_decommit_t *decommit;
               extent_purge_t *purge_lazy;
               extent_purge_t *purge_forced;
               extent_split_t *split;
               extent_merge_t *merge;
           };

       The extent_hooks_t structure comprises function pointers which are
       described individually below. jemalloc uses these functions to
       manage extent lifetime, which starts off with allocation of mapped
       committed memory, in the simplest case followed by deallocation.
       However, there are performance and platform reasons to retain
       extents for later reuse. Cleanup attempts cascade from deallocation
       to decommit to forced purging to lazy purging, which gives the
       extent management functions opportunities to reject the most
       permanent cleanup operations in favor of less permanent (and often
       less costly) operations. All operations except allocation can be
       universally opted out of by setting the hook pointers to NULL, or
       selectively opted out of by returning failure. Note that once the
       extent hook is set, the structure is accessed directly by the
       associated arenas, so it must remain valid for the entire lifetime
       of the arenas.

           typedef void *(extent_alloc_t)(extent_hooks_t *extent_hooks,
                                          void *new_addr, size_t size,
                                          size_t alignment, bool *zero,
                                          bool *commit, unsigned arena_ind);

       An extent allocation function conforms to the extent_alloc_t type
       and upon success returns a pointer to size bytes of mapped memory
       on behalf of arena arena_ind such that the extent's base address is
       a multiple of alignment, as well as setting *zero to indicate
       whether the extent is zeroed and *commit to indicate whether the
       extent is committed. Upon error the function returns NULL and
       leaves *zero and *commit unmodified. The size parameter is always a
       multiple of the page size. The alignment parameter is always a
       power of two at least as large as the page size. Zeroing is
       mandatory if *zero is true upon function entry. Committing is
       mandatory if *commit is true upon function entry. If new_addr is
       not NULL, the returned pointer must be new_addr on success or NULL
       on error. Committed memory may be committed in absolute terms as on
       a system that does not overcommit, or in implicit terms as on a
       system that overcommits and satisfies physical memory needs on
       demand via soft page faults. Note that replacing the default extent
       allocation function makes the arena's arena.<i>.dss setting
       irrelevant.

           typedef bool (extent_dalloc_t)(extent_hooks_t *extent_hooks,
                                          void *addr, size_t size,
                                          bool committed,
                                          unsigned arena_ind);

       An extent deallocation function conforms to the extent_dalloc_t
       type and deallocates an extent at given addr and size with
       committed/decommitted memory as indicated, on behalf of arena
       arena_ind, returning false upon success. If the function returns
       true, this indicates opt-out from deallocation; the virtual memory
       mapping associated with the extent remains mapped, in the same
       commit state, and available for future use, in which case it will
       be automatically retained for later reuse.

           typedef void (extent_destroy_t)(extent_hooks_t *extent_hooks,
                                           void *addr, size_t size,
                                           bool committed,
                                           unsigned arena_ind);

       An extent destruction function conforms to the extent_destroy_t
       type and unconditionally destroys an extent at given addr and size
       with committed/decommitted memory as indicated, on behalf of arena
       arena_ind. This function may be called to destroy retained extents
       during arena destruction (see arena.<i>.destroy).

           typedef bool (extent_commit_t)(extent_hooks_t *extent_hooks,
                                          void *addr, size_t size,
                                          size_t offset, size_t length,
                                          unsigned arena_ind);

       An extent commit function conforms to the extent_commit_t type and
       commits zeroed physical memory to back pages within an extent at
       given addr and size at offset bytes, extending for length on behalf
       of arena arena_ind, returning false upon success. Committed memory
       may be committed in absolute terms as on a system that does not
       overcommit, or in implicit terms as on a system that overcommits
       and satisfies physical memory needs on demand via soft page faults.
       If the function returns true, this indicates insufficient physical
       memory to satisfy the request.

           typedef bool (extent_decommit_t)(extent_hooks_t *extent_hooks,
                                            void *addr, size_t size,
                                            size_t offset, size_t length,
                                            unsigned arena_ind);

       An extent decommit function conforms to the extent_decommit_t type
       and decommits any physical memory that is backing pages within an
       extent at given addr and size at offset bytes, extending for length
       on behalf of arena arena_ind, returning false upon success, in
       which case the pages will be committed via the extent commit
       function before being reused. If the function returns true, this
       indicates opt-out from decommit; the memory remains committed and
       available for future use, in which case it will be automatically
       retained for later reuse.

           typedef bool (extent_purge_t)(extent_hooks_t *extent_hooks,
                                         void *addr, size_t size,
                                         size_t offset, size_t length,
                                         unsigned arena_ind);

       An extent purge function conforms to the extent_purge_t type and
       discards physical pages within the virtual memory mapping
       associated with an extent at given addr and size at offset bytes,
       extending for length on behalf of arena arena_ind. A lazy extent
       purge function (e.g. implemented via madvise(...MADV_FREE)) can
       delay purging indefinitely and leave the pages within the purged
       virtual memory range in an indeterminate state, whereas a forced
       extent purge function immediately purges, and the pages within the
       virtual memory range will be zero-filled the next time they are
       accessed. If the function returns true, this indicates failure to
       purge.

           typedef bool (extent_split_t)(extent_hooks_t *extent_hooks,
                                         void *addr, size_t size,
                                         size_t size_a, size_t size_b,
                                         bool committed,
                                         unsigned arena_ind);

       An extent split function conforms to the extent_split_t type and
       optionally splits an extent at given addr and size into two
       adjacent extents, the first of size_a bytes, and the second of
       size_b bytes, operating on committed/decommitted memory as
       indicated, on behalf of arena arena_ind, returning false upon
       success. If the function returns true, this indicates that the
       extent remains unsplit and therefore should continue to be operated
       on as a whole.

           typedef bool (extent_merge_t)(extent_hooks_t *extent_hooks,
                                         void *addr_a, size_t size_a,
                                         void *addr_b, size_t size_b,
                                         bool committed,
                                         unsigned arena_ind);

       An extent merge function conforms to the extent_merge_t type and
       optionally merges adjacent extents, at given addr_a and size_a with
       given addr_b and size_b into one contiguous extent, operating on
       committed/decommitted memory as indicated, on behalf of arena
       arena_ind, returning false upon success. If the function returns
       true, this indicates that the extents remain distinct mappings and
       therefore should continue to be operated on independently.

   arenas.narenas (unsigned) r-
       Current limit on number of arenas.

   arenas.dirty_decay_ms (ssize_t) rw
       Current default per-arena approximate time in milliseconds from the
       creation of a set of unused dirty pages until an equivalent set of
       unused dirty pages is purged and/or reused, used to initialize
       arena.<i>.dirty_decay_ms during arena creation. See
       opt.dirty_decay_ms for additional information.

   arenas.muzzy_decay_ms (ssize_t) rw
       Current default per-arena approximate time in milliseconds from the
       creation of a set of unused muzzy pages until an equivalent set of
       unused muzzy pages is purged and/or reused, used to initialize
       arena.<i>.muzzy_decay_ms during arena creation. See
       opt.muzzy_decay_ms for additional information.

   arenas.quantum (size_t) r-
       Quantum size.

   arenas.page (size_t) r-
       Page size.

   arenas.tcache_max (size_t) r-
       Maximum thread-cached size class.

   arenas.nbins (unsigned) r-
       Number of bin size classes.

   arenas.nhbins (unsigned) r-
       Total number of thread cache bin size classes.

   arenas.bin.<i>.size (size_t) r-
       Maximum size supported by size class.

   arenas.bin.<i>.nregs (uint32_t) r-
       Number of regions per slab.

   arenas.bin.<i>.slab_size (size_t) r-
       Number of bytes per slab.

   arenas.nlextents (unsigned) r-
       Total number of large size classes.

   arenas.lextent.<i>.size (size_t) r-
       Maximum size supported by this large size class.

   arenas.create (unsigned, extent_hooks_t *) rw
       Explicitly create a new arena outside the range of automatically
       managed arenas, with optionally specified extent hooks, and return
       the new arena index.

   arenas.lookup (unsigned, void*) rw
       Index of the arena to which an allocation belongs.

   prof.thread_active_init (bool) rw [--enable-prof]
       Control the initial setting for thread.prof.active in newly created
       threads. See the opt.prof_thread_active_init option for additional
       information.

   prof.active (bool) rw [--enable-prof]
       Control whether sampling is currently active. See the
       opt.prof_active option for additional information, as well as the
       interrelated thread.prof.active mallctl.

   prof.dump (const char *) -w [--enable-prof]
       Dump a memory profile to the specified file, or if NULL is
       specified, to a file according to the pattern
       <prefix>.<pid>.<seq>.m<mseq>.heap, where <prefix> is controlled by
       the opt.prof_prefix option.

   prof.gdump (bool) rw [--enable-prof]
       When enabled, trigger a memory profile dump every time the total
       virtual memory exceeds the previous maximum. Profiles are dumped to
       files named according to the pattern
       <prefix>.<pid>.<seq>.u<useq>.heap, where <prefix> is controlled by
       the opt.prof_prefix option.

   prof.reset (size_t) -w [--enable-prof]
       Reset all memory profile statistics, and optionally update the
       sample rate (see opt.lg_prof_sample and prof.lg_sample).

   prof.lg_sample (size_t) r- [--enable-prof]
       Get the current sample rate (see opt.lg_prof_sample).

   prof.interval (uint64_t) r- [--enable-prof]
       Average number of bytes allocated between interval-based profile
       dumps. See the opt.lg_prof_interval option for additional
       information.

   stats.allocated (size_t) r- [--enable-stats]
       Total number of bytes allocated by the application.

   stats.active (size_t) r- [--enable-stats]
       Total number of bytes in active pages allocated by the application.
       This is a multiple of the page size, and greater than or equal to
       stats.allocated. This does not include stats.arenas.<i>.pdirty,
       stats.arenas.<i>.pmuzzy, nor pages entirely devoted to allocator
       metadata.

   stats.metadata (size_t) r- [--enable-stats]
       Total number of bytes dedicated to metadata, which comprise base
       allocations used for bootstrap-sensitive allocator metadata
       structures (see stats.arenas.<i>.base) and internal allocations
       (see stats.arenas.<i>.internal). Transparent huge page (enabled
       with opt.metadata_thp) usage is not considered.

   stats.metadata_thp (size_t) r- [--enable-stats]
       Number of transparent huge pages (THP) used for metadata. See
       stats.metadata and opt.metadata_thp for details.

   stats.resident (size_t) r- [--enable-stats]
       Maximum number of bytes in physically resident data pages mapped by
       the allocator, comprising all pages dedicated to allocator
       metadata, pages backing active allocations, and unused dirty pages.
       This is a maximum rather than precise because pages may not
       actually be physically resident if they correspond to demand-zeroed
       virtual memory that has not yet been touched. This is a multiple of
       the page size, and is larger than stats.active.

   stats.mapped (size_t) r- [--enable-stats]
       Total number of bytes in active extents mapped by the allocator.
       This is larger than stats.active. This does not include inactive
       extents, even those that contain unused dirty pages, which means
       that there is no strict ordering between this and stats.resident.

   stats.retained (size_t) r- [--enable-stats]
       Total number of bytes in virtual memory mappings that were retained
       rather than being returned to the operating system via e.g.
       munmap(2) or similar. Retained virtual memory is typically
       untouched, decommitted, or purged, so it has no strongly associated
       physical memory (see extent hooks for details). Retained memory is
       excluded from mapped memory statistics, e.g. stats.mapped.

   stats.background_thread.num_threads (size_t) r- [--enable-stats]
       Number of background threads running currently.

   stats.background_thread.num_runs (uint64_t) r- [--enable-stats]
       Total number of runs from all background threads.

   stats.background_thread.run_interval (uint64_t) r- [--enable-stats]
       Average run interval in nanoseconds of background threads.

   stats.mutexes.ctl.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on ctl mutex (global scope; mallctl related). {counter}
       is one of the counters below:

           num_ops (uint64_t): Total number of lock acquisition operations
           on this mutex.

           num_spin_acq (uint64_t): Number of times the mutex was
           spin-acquired. When the mutex is currently locked and cannot be
           acquired immediately, a short period of spin-retry within
           jemalloc will be performed. Acquisition through spinning
           generally means the contention was lightweight and did not
           cause context switches.

           num_wait (uint64_t): Number of times the mutex was
           wait-acquired, which means the mutex contention was not
           resolved by spin-retry, and a blocking operation was likely
           involved in order to acquire the mutex. This event generally
           implies higher cost / longer delay, and should be investigated
           if it happens often.

           max_wait_time (uint64_t): Maximum length of time in nanoseconds
           spent on a single wait-acquired lock operation. Note that to
           avoid profiling overhead on the common path, this does not
           consider spin-acquired cases.

           total_wait_time (uint64_t): Cumulative time in nanoseconds
           spent on wait-acquired lock operations. Similarly,
           spin-acquired cases are not considered.

           max_num_thds (uint32_t): Maximum number of threads waiting on
           this mutex simultaneously. Similarly, spin-acquired cases are
           not considered.

           num_owner_switch (uint64_t): Number of times the current mutex
           owner is different from the previous one. This event does not
           generally imply an issue; rather it is an indicator of how
           often the protected data are accessed by different threads.

   stats.mutexes.background_thread.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on background_thread mutex (global scope;
       background_thread related). {counter} is one of the counters in
       mutex profiling counters.

   stats.mutexes.prof.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on prof mutex (global scope; profiling related).
       {counter} is one of the counters in mutex profiling counters.

   stats.mutexes.reset (void) -- [--enable-stats]
       Reset all mutex profile statistics, including global mutexes, arena
       mutexes and bin mutexes.

   stats.arenas.<i>.dss (const char *) r-
       dss (sbrk(2)) allocation precedence as related to mmap(2)
       allocation. See opt.dss for details.

   stats.arenas.<i>.dirty_decay_ms (ssize_t) r-
       Approximate time in milliseconds from the creation of a set of
       unused dirty pages until an equivalent set of unused dirty pages is
       purged and/or reused. See opt.dirty_decay_ms for details.

   stats.arenas.<i>.muzzy_decay_ms (ssize_t) r-
       Approximate time in milliseconds from the creation of a set of
       unused muzzy pages until an equivalent set of unused muzzy pages is
       purged and/or reused. See opt.muzzy_decay_ms for details.

   stats.arenas.<i>.nthreads (unsigned) r-
       Number of threads currently assigned to arena.

   stats.arenas.<i>.uptime (uint64_t) r-
       Time elapsed (in nanoseconds) since the arena was created. If <i>
       equals 0 or MALLCTL_ARENAS_ALL, this is the uptime since malloc
       initialization.

   stats.arenas.<i>.pactive (size_t) r-
       Number of pages in active extents.

1309 stats.arenas.<i>.pdirty (size_t) r-
1310 Number of pages within unused extents that are potentially dirty,
1311 and for which madvise() or similar has not been called. See
1312 opt.dirty_decay_ms for a description of dirty pages.
1313
1314 stats.arenas.<i>.pmuzzy (size_t) r-
1315 Number of pages within unused extents that are muzzy. See
1316 opt.muzzy_decay_ms for a description of muzzy pages.
1317
1318 stats.arenas.<i>.mapped (size_t) r- [--enable-stats]
1319 Number of mapped bytes.
1320
1321 stats.arenas.<i>.retained (size_t) r- [--enable-stats]
1322 Number of retained bytes. See stats.retained for details.
1323
1324 stats.arenas.<i>.base (size_t) r- [--enable-stats]
1325 Number of bytes dedicated to bootstrap-sensitive allocator metadata
1326 structures.
1327
1328 stats.arenas.<i>.internal (size_t) r- [--enable-stats]
1329 Number of bytes dedicated to internal allocations. Internal
1330 allocations differ from application-originated allocations in that
1331 they are for internal use, and that they are omitted from heap
1332 profiles.
1333
1334 stats.arenas.<i>.metadata_thp (size_t) r- [--enable-stats]
1335 Number of transparent huge pages (THP) used for metadata. See
1336 opt.metadata_thp for details.
1337
1338 stats.arenas.<i>.resident (size_t) r- [--enable-stats]
1339 Maximum number of bytes in physically resident data pages mapped by
1340 the arena, comprising all pages dedicated to allocator metadata,
1341 pages backing active allocations, and unused dirty pages. This is a
1342 maximum rather than precise because pages may not actually be
1343 physically resident if they correspond to demand-zeroed virtual
1344 memory that has not yet been touched. This is a multiple of the
1345 page size.
1346
1347 stats.arenas.<i>.dirty_npurge (uint64_t) r- [--enable-stats]
1348 Number of dirty page purge sweeps performed.
1349
1350 stats.arenas.<i>.dirty_nmadvise (uint64_t) r- [--enable-stats]
1351 Number of madvise() or similar calls made to purge dirty pages.
1352
1353 stats.arenas.<i>.dirty_purged (uint64_t) r- [--enable-stats]
1354 Number of dirty pages purged.
1355
1356 stats.arenas.<i>.muzzy_npurge (uint64_t) r- [--enable-stats]
1357 Number of muzzy page purge sweeps performed.
1358
1359 stats.arenas.<i>.muzzy_nmadvise (uint64_t) r- [--enable-stats]
1360 Number of madvise() or similar calls made to purge muzzy pages.
1361
1362 stats.arenas.<i>.muzzy_purged (uint64_t) r- [--enable-stats]
1363 Number of muzzy pages purged.

   stats.arenas.<i>.small.allocated (size_t) r- [--enable-stats]
       Number of bytes currently allocated by small objects.

   stats.arenas.<i>.small.nmalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a small allocation was requested from
       the arena's bins, whether to fill the relevant tcache if opt.tcache
       is enabled, or to directly satisfy an allocation request otherwise.

   stats.arenas.<i>.small.ndalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a small allocation was returned to the
       arena's bins, whether to flush the relevant tcache if opt.tcache is
       enabled, or to directly deallocate an allocation otherwise.

   stats.arenas.<i>.small.nrequests (uint64_t) r- [--enable-stats]
       Cumulative number of allocation requests satisfied by all bin size
       classes.

   stats.arenas.<i>.large.allocated (size_t) r- [--enable-stats]
       Number of bytes currently allocated by large objects.

   stats.arenas.<i>.large.nmalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a large extent was allocated from the
       arena, whether to fill the relevant tcache if opt.tcache is enabled
       and the size class is within the range being cached, or to directly
       satisfy an allocation request otherwise.

   stats.arenas.<i>.large.ndalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a large extent was returned to the
       arena, whether to flush the relevant tcache if opt.tcache is
       enabled and the size class is within the range being cached, or to
       directly deallocate an allocation otherwise.

   stats.arenas.<i>.large.nrequests (uint64_t) r- [--enable-stats]
       Cumulative number of allocation requests satisfied by all large
       size classes.

   stats.arenas.<i>.bins.<j>.nmalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a bin region of the corresponding size
       class was allocated from the arena, whether to fill the relevant
       tcache if opt.tcache is enabled, or to directly satisfy an
       allocation request otherwise.

   stats.arenas.<i>.bins.<j>.ndalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a bin region of the corresponding size
       class was returned to the arena, whether to flush the relevant
       tcache if opt.tcache is enabled, or to directly deallocate an
       allocation otherwise.

   stats.arenas.<i>.bins.<j>.nrequests (uint64_t) r- [--enable-stats]
       Cumulative number of allocation requests satisfied by bin regions
       of the corresponding size class.

   stats.arenas.<i>.bins.<j>.curregs (size_t) r- [--enable-stats]
       Current number of regions for this size class.

   stats.arenas.<i>.bins.<j>.nfills (uint64_t) r-
       Cumulative number of tcache fills.

   stats.arenas.<i>.bins.<j>.nflushes (uint64_t) r-
       Cumulative number of tcache flushes.

   stats.arenas.<i>.bins.<j>.nslabs (uint64_t) r- [--enable-stats]
       Cumulative number of slabs created.

   stats.arenas.<i>.bins.<j>.nreslabs (uint64_t) r- [--enable-stats]
       Cumulative number of times the current slab from which to allocate
       changed.

   stats.arenas.<i>.bins.<j>.curslabs (size_t) r- [--enable-stats]
       Current number of slabs.

   stats.arenas.<i>.bins.<j>.mutex.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on arena.<i>.bins.<j> mutex (arena bin scope; bin
       operation related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.lextents.<j>.nmalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a large extent of the corresponding size
       class was allocated from the arena, whether to fill the relevant
       tcache if opt.tcache is enabled and the size class is within the
       range being cached, or to directly satisfy an allocation request
       otherwise.

   stats.arenas.<i>.lextents.<j>.ndalloc (uint64_t) r- [--enable-stats]
       Cumulative number of times a large extent of the corresponding size
       class was returned to the arena, whether to flush the relevant
       tcache if opt.tcache is enabled and the size class is within the
       range being cached, or to directly deallocate an allocation
       otherwise.

   stats.arenas.<i>.lextents.<j>.nrequests (uint64_t) r- [--enable-stats]
       Cumulative number of allocation requests satisfied by large extents
       of the corresponding size class.

   stats.arenas.<i>.lextents.<j>.curlextents (size_t) r- [--enable-stats]
       Current number of large allocations for this size class.

   stats.arenas.<i>.mutexes.large.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on arena.<i>.large mutex (arena scope; large allocation
       related). {counter} is one of the counters in mutex profiling
       counters.

   stats.arenas.<i>.mutexes.extent_avail.{counter} (counter specific type)
   r- [--enable-stats]
       Statistics on arena.<i>.extent_avail mutex (arena scope; extent
       avail related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.mutexes.extents_dirty.{counter} (counter specific
   type) r- [--enable-stats]
       Statistics on arena.<i>.extents_dirty mutex (arena scope; dirty
       extents related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.mutexes.extents_muzzy.{counter} (counter specific
   type) r- [--enable-stats]
       Statistics on arena.<i>.extents_muzzy mutex (arena scope; muzzy
       extents related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.mutexes.extents_retained.{counter} (counter specific
   type) r- [--enable-stats]
       Statistics on arena.<i>.extents_retained mutex (arena scope;
       retained extents related). {counter} is one of the counters in
       mutex profiling counters.

   stats.arenas.<i>.mutexes.decay_dirty.{counter} (counter specific type)
   r- [--enable-stats]
       Statistics on arena.<i>.decay_dirty mutex (arena scope; decay for
       dirty pages related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.mutexes.decay_muzzy.{counter} (counter specific type)
   r- [--enable-stats]
       Statistics on arena.<i>.decay_muzzy mutex (arena scope; decay for
       muzzy pages related). {counter} is one of the counters in mutex
       profiling counters.

   stats.arenas.<i>.mutexes.base.{counter} (counter specific type) r-
   [--enable-stats]
       Statistics on arena.<i>.base mutex (arena scope; base allocator
       related). {counter} is one of the counters in mutex profiling
       counters.

   stats.arenas.<i>.mutexes.tcache_list.{counter} (counter specific type)
   r- [--enable-stats]
       Statistics on arena.<i>.tcache_list mutex (arena scope; tcache to
       arena association related). This mutex is expected to be accessed
       less often. {counter} is one of the counters in mutex profiling
       counters.

HEAP PROFILE FORMAT
   Although the heap profiling functionality was originally designed to be
   compatible with the pprof command that is developed as part of the
   gperftools package[3], the addition of per thread heap profiling
   functionality required a different heap profile format. The jeprof
   command is derived from pprof, with enhancements to support the heap
   profile format described here.

   In the following hypothetical heap profile, [...] indicates elision
   for the sake of compactness.

       heap_v2/524288
         t*: 28106: 56637512 [0: 0]
         [...]
         t3: 352: 16777344 [0: 0]
         [...]
         t99: 17754: 29341640 [0: 0]
         [...]
       @ 0x5f86da8 0x5f5a1dc [...] 0x29e4d4e 0xa200316 0xabb2988 [...]
         t*: 13: 6688 [0: 0]
         t3: 12: 6496 [0: 0]
         t99: 1: 192 [0: 0]
       [...]

       MAPPED_LIBRARIES:
       [...]

   The following matches the above heap profile, but most tokens are
   replaced with <description> to indicate descriptions of the
   corresponding fields.

       <heap_profile_format_version>/<mean_sample_interval>
         <aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
         [...]
         <thread_3_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
         [...]
         <thread_99_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
         [...]
       @ <top_frame> <frame> [...] <frame> <frame> <frame> [...]
         <backtrace_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
         <backtrace_thread_3>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
         <backtrace_thread_99>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
       [...]

       MAPPED_LIBRARIES:
       </proc/<pid>/maps>

DEBUGGING MALLOC PROBLEMS
   When debugging, it is a good idea to configure/build jemalloc with the
   --enable-debug and --enable-fill options, and recompile the program
   with suitable options and symbols for debugger support. When so
   configured, jemalloc incorporates a wide variety of run-time assertions
   that catch application errors such as double-free, write-after-free,
   etc.

   Programs often accidentally depend on “uninitialized” memory actually
   being filled with zero bytes. Junk filling (see the opt.junk option)
   tends to expose such bugs in the form of obviously incorrect results
   and/or coredumps. Conversely, zero filling (see the opt.zero option)
   eliminates the symptoms of such bugs. Between these two options, it is
   usually possible to quickly detect, diagnose, and eliminate such bugs.
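
   For example, with a build configured with --enable-fill, both options
   can be toggled per run without rebuilding (myprog is a placeholder
   program name):

```shell
MALLOC_CONF="junk:true" ./myprog   # expose uses of "uninitialized" memory
MALLOC_CONF="zero:true" ./myprog   # suppress the symptoms to confirm the diagnosis
```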

   This implementation does not provide much detail about the problems it
   detects, because the performance impact for storing such information
   would be prohibitive.

DIAGNOSTIC MESSAGES
   If any of the memory allocation/deallocation functions detect an error
   or warning condition, a message will be printed to file descriptor
   STDERR_FILENO. Errors will result in the process dumping core. If the
   opt.abort option is set, most warnings are treated as errors.

   The malloc_message variable allows the programmer to override the
   function which emits the text strings forming the errors and warnings
   if for some reason the STDERR_FILENO file descriptor is not suitable
   for this. malloc_message() takes the cbopaque pointer argument that is
   NULL unless overridden by the arguments in a call to
   malloc_stats_print(), followed by a string pointer. Please note that
   doing anything which tries to allocate memory in this function is
   likely to result in a crash or deadlock.

   All messages are prefixed by “<jemalloc>: ”.

RETURN VALUES
   Standard API
   The malloc() and calloc() functions return a pointer to the allocated
   memory if successful; otherwise a NULL pointer is returned and errno is
   set to ENOMEM.

   The posix_memalign() function returns the value 0 if successful;
   otherwise it returns an error value. The posix_memalign() function will
   fail if:

   EINVAL
       The alignment parameter is not a power of 2 at least as large as
       sizeof(void *).

   ENOMEM
       Memory allocation error.

   The aligned_alloc() function returns a pointer to the allocated memory
   if successful; otherwise a NULL pointer is returned and errno is set.
   The aligned_alloc() function will fail if:

   EINVAL
       The alignment parameter is not a power of 2.

   ENOMEM
       Memory allocation error.

   The realloc() function returns a pointer, possibly identical to ptr, to
   the allocated memory if successful; otherwise a NULL pointer is
   returned, and errno is set to ENOMEM if the error was the result of an
   allocation failure. The realloc() function always leaves the original
   buffer intact when an error occurs.

   The free() function returns no value.

   Non-standard API
   The mallocx() and rallocx() functions return a pointer to the allocated
   memory if successful; otherwise a NULL pointer is returned to indicate
   insufficient contiguous memory was available to service the allocation
   request.

   The xallocx() function returns the real size of the resulting resized
   allocation pointed to by ptr, which is a value less than size if the
   allocation could not be adequately grown in place.

   The sallocx() function returns the real size of the allocation pointed
   to by ptr.

   The nallocx() function returns the real size that would result from a
   successful equivalent mallocx() function call, or zero if insufficient
   memory is available to perform the size computation.

   The mallctl(), mallctlnametomib(), and mallctlbymib() functions return
   0 on success; otherwise they return an error value. The functions will
   fail if:

   EINVAL
       newp is not NULL, and newlen is too large or too small.
       Alternatively, *oldlenp is too large or too small; in this case as
       much data as possible are read despite the error.

   ENOENT
       name or mib specifies an unknown/invalid value.

   EPERM
       Attempt to read or write void value, or attempt to write read-only
       value.

   EAGAIN
       A memory allocation failure occurred.

   EFAULT
       An interface with side effects failed in some way not directly
       related to mallctl*() read/write processing.

   The malloc_usable_size() function returns the usable size of the
   allocation pointed to by ptr.

ENVIRONMENT
   The following environment variable affects the execution of the
   allocation functions:

   MALLOC_CONF
       If the environment variable MALLOC_CONF is set, the characters it
       contains will be interpreted as options.
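
   For example, a single run can enable the opt.stats_print behavior (a
   statistics dump at exit) without rebuilding; myprog is a placeholder
   program name:

```shell
MALLOC_CONF="stats_print:true" ./myprog
```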

EXAMPLES
   To dump core whenever a problem occurs:

       ln -s 'abort:true' /etc/malloc.conf

   To specify in the source that only one arena should be automatically
   created:

       malloc_conf = "narenas:1";

SEE ALSO
   madvise(2), mmap(2), sbrk(2), utrace(2), alloca(3), atexit(3),
   getpagesize(3)

STANDARDS
   The malloc(), calloc(), realloc(), and free() functions conform to
   ISO/IEC 9899:1990 (“ISO C90”).

   The posix_memalign() function conforms to IEEE Std 1003.1-2001
   (“POSIX.1”).

AUTHOR
   Jason Evans

NOTES
   1. jemalloc website
      http://jemalloc.net/

   2. JSON format
      http://www.json.org/

   3. gperftools package
      http://code.google.com/p/gperftools/



jemalloc 5.1.0-0-g61efbda7098d        05/08/2018                  JEMALLOC(3)