MEMKINDALLOCATOR(3)            MEMKINDALLOCATOR            MEMKINDALLOCATOR(3)

NAME
       libmemkind::static_kind::allocator<T> - The C++ allocator compatible
       with the C++ standard library allocator concepts

       Note: memkind_allocator.h functionality is considered a stable API
       (STANDARD API).

SYNOPSIS
       #include <memkind_allocator.h>

       Link with -lmemkind

       libmemkind::static_kind::allocator(libmemkind::kinds kind);
       template <typename U> libmemkind::static_kind::allocator<T>::allocator(const libmemkind::static_kind::allocator<U>&) noexcept;
       template <typename U> libmemkind::static_kind::allocator(const allocator<U>&& other) noexcept;
       libmemkind::static_kind::allocator<T>::~allocator();
       T *libmemkind::static_kind::allocator<T>::allocate(std::size_t n) const;
       void libmemkind::static_kind::allocator<T>::deallocate(T *p, std::size_t n) const;
       template <class U, class... Args> void libmemkind::static_kind::allocator<T>::construct(U *p, Args... args) const;
       void libmemkind::static_kind::allocator<T>::destroy(T *p) const;

DESCRIPTION
       The libmemkind::static_kind::allocator<T> is intended to be used
       with STL containers to allocate memory from static kinds. Memory
       management is based on the memkind library. Refer to the memkind(3)
       man page for more details.
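
       A minimal usage sketch (assuming memkind is installed and the
       program is linked with -lmemkind; the container and kind below are
       illustrative only):

              #include <memkind_allocator.h>
              #include <vector>

              int main()
              {
                  // Back the vector's storage with standard memory and
                  // the default page size.
                  libmemkind::static_kind::allocator<int> alloc(
                      libmemkind::kinds::DEFAULT);
                  std::vector<int, libmemkind::static_kind::allocator<int>>
                      v(alloc);
                  v.push_back(42);
                  return 0;
              }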

       The libmemkind::kinds parameter specifies the static kind of memory
       used by the allocator, where each kind represents a type of memory
       with different characteristics. The available kinds of memory are
       listed below; a usage sketch follows the list.

       libmemkind::kinds::DEFAULT Default allocation using standard memory
       and the default page size.

       libmemkind::kinds::HIGHEST_CAPACITY Allocate from the NUMA node(s)
       with the highest capacity among all nodes in the system.

       libmemkind::kinds::HIGHEST_CAPACITY_PREFERRED Same as
       libmemkind::kinds::HIGHEST_CAPACITY except that if there is not
       enough memory in the NUMA node that has the highest capacity in the
       local domain to satisfy the request, the allocation will fall back
       on other memory NUMA nodes. Note: For this kind, the allocation
       will not succeed if two or more NUMA nodes share the highest
       capacity.

       libmemkind::kinds::HIGHEST_CAPACITY_LOCAL Allocate from a NUMA node
       that has the highest capacity among all NUMA nodes in the local
       domain. NUMA nodes share a local domain when they are associated
       with the same set of CPUs, e.g. a socket or sub-NUMA cluster. Note:
       If multiple NUMA nodes in the same local domain have the highest
       capacity, the allocation is done from the NUMA node with the worse
       latency attribute. This kind requires the locality information
       described in the SYSTEM CONFIGURATION section.

       libmemkind::kinds::HIGHEST_CAPACITY_LOCAL_PREFERRED Same as
       libmemkind::kinds::HIGHEST_CAPACITY_LOCAL except that if there is
       not enough memory in the NUMA node that has the highest capacity to
       satisfy the request, the allocation will fall back on other memory
       NUMA nodes.

       libmemkind::kinds::LOWEST_LATENCY_LOCAL Allocate from a NUMA node
       that has the lowest latency among all NUMA nodes in the local
       domain. NUMA nodes share a local domain when they are associated
       with the same set of CPUs, e.g. a socket or sub-NUMA cluster. Note:
       If multiple NUMA nodes in the same local domain have the lowest
       latency, the allocation is done from the NUMA node with the smaller
       memory capacity. This kind requires the locality and memory
       performance characteristics information described in the SYSTEM
       CONFIGURATION section.

       libmemkind::kinds::LOWEST_LATENCY_LOCAL_PREFERRED Same as
       libmemkind::kinds::LOWEST_LATENCY_LOCAL except that if there is not
       enough memory in the NUMA node that has the lowest latency to
       satisfy the request, the allocation will fall back on other memory
       NUMA nodes.

       libmemkind::kinds::HIGHEST_BANDWIDTH_LOCAL Allocate from a NUMA node
       that has the highest bandwidth among all NUMA nodes in the local
       domain. NUMA nodes share a local domain when they are associated
       with the same set of CPUs, e.g. a socket or sub-NUMA cluster. Note:
       If multiple NUMA nodes in the same local domain have the highest
       bandwidth, the allocation is done from the NUMA node with the
       smaller memory capacity. This kind requires the locality and memory
       performance characteristics information described in the SYSTEM
       CONFIGURATION section.

       libmemkind::kinds::HIGHEST_BANDWIDTH_LOCAL_PREFERRED Same as
       libmemkind::kinds::HIGHEST_BANDWIDTH_LOCAL except that if there is
       not enough memory in the NUMA node that has the highest bandwidth to
       satisfy the request, the allocation will fall back on other memory
       NUMA nodes.

       libmemkind::kinds::HUGETLB Allocate from standard memory using huge
       pages. Note: This kind requires the huge pages configuration
       described in the SYSTEM CONFIGURATION section.

       libmemkind::kinds::INTERLEAVE Allocate pages interleaved across all
       NUMA nodes, with transparent huge pages disabled.

       libmemkind::kinds::HBW Allocate from the closest high bandwidth
       memory NUMA node at the time of allocation. If there is not enough
       high bandwidth memory to satisfy the request, errno is set to ENOMEM
       and the allocated pointer is set to NULL. Note: This kind requires
       the memory performance characteristics information described in the
       SYSTEM CONFIGURATION section.

       libmemkind::kinds::HBW_ALL Same as libmemkind::kinds::HBW except the
       decision regarding the closest NUMA node is postponed until the time
       of the first write.

       libmemkind::kinds::HBW_HUGETLB Same as libmemkind::kinds::HBW except
       the allocation is backed by huge pages. Note: This kind requires
       the huge pages configuration described in the SYSTEM CONFIGURATION
       section.

       libmemkind::kinds::HBW_ALL_HUGETLB Combination of
       libmemkind::kinds::HBW_ALL and libmemkind::kinds::HBW_HUGETLB
       properties. Note: This kind requires the huge pages configuration
       described in the SYSTEM CONFIGURATION section.

       libmemkind::kinds::HBW_PREFERRED Same as libmemkind::kinds::HBW
       except that if there is not enough high bandwidth memory to satisfy
       the request, the allocation will fall back on standard memory.

       libmemkind::kinds::HBW_PREFERRED_HUGETLB Same as
       libmemkind::kinds::HBW_PREFERRED except the allocation is backed by
       huge pages. Note: This kind requires the huge pages configuration
       described in the SYSTEM CONFIGURATION section.

       libmemkind::kinds::HBW_INTERLEAVE Same as libmemkind::kinds::HBW
       except that the pages backing the allocation are interleaved across
       all high bandwidth nodes and transparent huge pages are disabled.

       libmemkind::kinds::REGULAR Allocate from regular memory using the
       default page size. Regular means general purpose memory from the
       NUMA nodes containing CPUs.

       libmemkind::kinds::DAX_KMEM Allocate from the closest persistent
       memory NUMA node at the time of allocation. If there is not enough
       memory in the closest persistent memory NUMA node to satisfy the
       request, errno is set to ENOMEM and the allocated pointer is set to
       NULL.

       libmemkind::kinds::DAX_KMEM_ALL Allocate from the closest persistent
       memory NUMA node available at the time of allocation. If there is
       not enough memory on any of the persistent memory NUMA nodes to
       satisfy the request, errno is set to ENOMEM and the allocated
       pointer is set to NULL.

       libmemkind::kinds::DAX_KMEM_PREFERRED Same as
       libmemkind::kinds::DAX_KMEM except that if there is not enough
       memory in the closest persistent memory NUMA node to satisfy the
       request, the allocation will fall back on other memory NUMA nodes.
       Note: For this kind, the allocation will not succeed if two or more
       persistent memory NUMA nodes are at the same shortest distance from
       the CPU on which the process is eligible to run. This eligibility
       is checked when the application starts.

       libmemkind::kinds::DAX_KMEM_INTERLEAVE Same as
       libmemkind::kinds::DAX_KMEM except that the pages backing the
       allocation are interleaved across all persistent memory NUMA nodes.
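
       A sketch of using a preferred kind with fallback semantics (the
       element type and kind below are illustrative; HBW_PREFERRED falls
       back on standard memory when high bandwidth memory is exhausted):

              #include <memkind_allocator.h>
              #include <vector>

              int main()
              {
                  // Prefer high bandwidth memory; fall back on standard
                  // memory if the request cannot be satisfied.
                  libmemkind::static_kind::allocator<double> alloc(
                      libmemkind::kinds::HBW_PREFERRED);
                  std::vector<double, libmemkind::static_kind::allocator<double>>
                      v(alloc);
                  v.resize(1000, 0.0);
                  return 0;
              }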

       All public member types and functions correspond to the standard
       library allocator concepts and definitions. The current
       implementation supports the C++11 standard.
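
       Because the allocator models the standard allocator concepts, it
       can also be used through std::allocator_traits, as in this sketch
       (the kind below is illustrative only):

              #include <memkind_allocator.h>
              #include <memory>

              int main()
              {
                  using Alloc = libmemkind::static_kind::allocator<int>;
                  using Traits = std::allocator_traits<Alloc>;
                  Alloc alloc(libmemkind::kinds::DEFAULT);
                  int *p = Traits::allocate(alloc, 1);   // memkind_malloc()
                  Traits::construct(alloc, p, 1);        // placement-construct
                  Traits::destroy(alloc, p);             // run ~T()
                  Traits::deallocate(alloc, p, 1);       // memkind_free()
                  return 0;
              }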

       Template arguments:
       T is an object type aliased by value_type.
       U is an object type.

       Note:
       T *libmemkind::static_kind::allocator<T>::allocate(std::size_t n)
       allocates memory using memkind_malloc(). It throws std::bad_alloc
       when:
              n = 0,
              or there is not enough memory to satisfy the request.

       libmemkind::static_kind::allocator<T>::deallocate(T *p, std::size_t n)
       deallocates the memory associated with the pointer returned by
       allocate() using memkind_free().
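
       A minimal sketch of the allocation interface, including the
       std::bad_alloc case (the kind and values below are illustrative
       only):

              #include <memkind_allocator.h>
              #include <iostream>
              #include <new>

              int main()
              {
                  libmemkind::static_kind::allocator<int> alloc(
                      libmemkind::kinds::REGULAR);
                  try {
                      int *p = alloc.allocate(16); // may throw std::bad_alloc
                      alloc.construct(p, 7);       // placement-construct *p
                      alloc.destroy(p);            // run the destructor of *p
                      alloc.deallocate(p, 16);     // released via memkind_free()
                  } catch (const std::bad_alloc &e) {
                      std::cerr << e.what() << std::endl;
                      return 1;
                  }
                  return 0;
              }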

SYSTEM CONFIGURATION
       Interfaces for obtaining 2MB (HUGETLB) memory require huge pages to
       be allocated in the kernel's huge page pool.

       HUGETLB (huge pages)
              The current number of "persistent" huge pages can be read
              from the /proc/sys/vm/nr_hugepages file. The proposed way of
              setting huge pages is: sudo sysctl
              vm.nr_hugepages=<number_of_hugepages>. More information can
              be found here:
              ⟨https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt⟩

       Interfaces for obtaining locality information are provided by the
       libhwloc dependency. Functionality based on locality requires that
       the memkind library is configured and built with libhwloc support
       (./configure --enable-hwloc).

       Interfaces for obtaining memory performance characteristics
       information are based on the HMAT (Heterogeneous Memory Attribute
       Table), see
       ⟨https://uefi.org/sites/default/files/resources/ACPI_6_3_final_Jan30.pdf⟩.
       Functionality based on memory performance characteristics requires
       that the platform configuration fully supports HMAT and that the
       memkind library is configured and built with libhwloc support
       (./configure --enable-hwloc).

       Note: For a given target NUMA node, the OS exposes only the
       performance characteristics of the best performing NUMA node.

       libhwloc can be reached at: ⟨https://www.open-mpi.org/projects/hwloc⟩

COPYRIGHT
       Copyright (C) 2019 - 2021 Intel Corporation. All rights reserved.

SEE ALSO
       memkind(3)



Intel Corporation                  2019-09-24              MEMKINDALLOCATOR(3)