MEMBARRIER(2)              Linux Programmer's Manual             MEMBARRIER(2)

NAME
       membarrier - issue memory barriers on a set of threads

SYNOPSIS
       #include <linux/membarrier.h>

       int membarrier(int cmd, unsigned int flags, int cpu_id);

       Note: There is no glibc wrapper for this system call; see NOTES.

DESCRIPTION
       The membarrier() system call helps to reduce the overhead of the
       memory barrier instructions required to order memory accesses on
       multi-core systems.  However, this system call is heavier than a
       memory barrier, so using it effectively is not as simple as
       replacing memory barriers with this system call; it requires
       understanding the details below.

       When using memory barriers, keep in mind that a memory barrier
       always needs to be either matched with its memory barrier
       counterparts, or that the architecture's memory model doesn't
       require the matching barriers.

       There are cases where one side of the matching barriers (which we
       will refer to as "fast side") is executed much more often than the
       other (which we will refer to as "slow side").  This is a prime
       target for the use of membarrier().  The key idea is to replace,
       for these matching barriers, the fast-side memory barriers by
       simple compiler barriers, for example:

           asm volatile ("" : : : "memory")

       and replace the slow-side memory barriers by calls to membarrier().

       This will add overhead to the slow side, and remove overhead from
       the fast side, thus resulting in an overall performance increase as
       long as the slow side is infrequent enough that the overhead of the
       membarrier() calls does not outweigh the performance gain on the
       fast side.

       The cmd argument is one of the following:

       MEMBARRIER_CMD_QUERY (since Linux 4.3)
              Query the set of supported commands.  The return value of
              the call is a bit mask of supported commands.
              MEMBARRIER_CMD_QUERY, which has the value 0, is not itself
              included in this bit mask.  This command is always supported
              (on kernels where membarrier() is provided).

       MEMBARRIER_CMD_GLOBAL (since Linux 4.16)
              Ensure that all threads from all processes on the system
              pass through a state where all memory accesses to user-space
              addresses match program order between entry to and return
              from the membarrier() system call.  All threads on the
              system are targeted by this command.

       MEMBARRIER_CMD_GLOBAL_EXPEDITED (since Linux 4.16)
              Execute a memory barrier on all running threads of all
              processes that previously registered with
              MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.

              Upon return from the system call, the calling thread has a
              guarantee that all running threads have passed through a
              state where all memory accesses to user-space addresses
              match program order between entry to and return from the
              system call (non-running threads are de facto in such a
              state).  This guarantee is provided only for the threads of
              processes that previously registered with
              MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.

              Given that registration is about the intent to receive the
              barriers, it is valid to invoke
              MEMBARRIER_CMD_GLOBAL_EXPEDITED from a process that has not
              employed MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.

              The "expedited" commands complete faster than the
              non-expedited ones; they never block, but have the downside
              of causing extra overhead.

       MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED (since Linux 4.16)
              Register the process's intent to receive
              MEMBARRIER_CMD_GLOBAL_EXPEDITED memory barriers.

       MEMBARRIER_CMD_PRIVATE_EXPEDITED (since Linux 4.14)
              Execute a memory barrier on each running thread belonging to
              the same process as the calling thread.

              Upon return from the system call, the calling thread has a
              guarantee that all its running thread siblings have passed
              through a state where all memory accesses to user-space
              addresses match program order between entry to and return
              from the system call (non-running threads are de facto in
              such a state).  This guarantee is provided only for threads
              in the same process as the calling thread.

              The "expedited" commands complete faster than the
              non-expedited ones; they never block, but have the downside
              of causing extra overhead.

              A process must register its intent to use the private
              expedited command prior to using it.

       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED (since Linux 4.14)
              Register the process's intent to use
              MEMBARRIER_CMD_PRIVATE_EXPEDITED.

       MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE (since Linux 4.16)
              In addition to providing the memory ordering guarantees
              described in MEMBARRIER_CMD_PRIVATE_EXPEDITED, upon return
              from the system call the calling thread has a guarantee that
              all its running thread siblings have executed a core
              serializing instruction.  This guarantee is provided only
              for threads in the same process as the calling thread.

              The "expedited" commands complete faster than the
              non-expedited ones; they never block, but have the downside
              of causing extra overhead.

              A process must register its intent to use the private
              expedited sync core command prior to using it.

       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE (since Linux 4.16)
              Register the process's intent to use
              MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE.

       MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ (since Linux 5.10)
              Ensure that, upon return from the system call, all of the
              calling thread's running thread siblings have had any
              currently running rseq critical sections restarted if the
              flags parameter is 0; if the flags parameter is
              MEMBARRIER_CMD_FLAG_CPU, then this operation is performed
              only on the CPU indicated by cpu_id.  This guarantee is
              provided only for threads in the same process as the calling
              thread.

              RSEQ membarrier is only available in the "private expedited"
              form.

              A process must register its intent to use the private
              expedited rseq command prior to using it.

       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ (since Linux 5.10)
              Register the process's intent to use
              MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.

       MEMBARRIER_CMD_SHARED (since Linux 4.3)
              This is an alias for MEMBARRIER_CMD_GLOBAL that exists for
              header backward compatibility.

       The flags argument must be specified as 0 unless the command is
       MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, in which case flags can be
       either 0 or MEMBARRIER_CMD_FLAG_CPU.

       The cpu_id argument is ignored unless flags is
       MEMBARRIER_CMD_FLAG_CPU, in which case it must specify the CPU
       targeted by this membarrier command.

       All memory accesses performed in program order from each targeted
       thread are guaranteed to be ordered with respect to membarrier().

       If we use the semantic barrier() to represent a compiler barrier
       forcing memory accesses to be performed in program order across the
       barrier, and smp_mb() to represent explicit memory barriers forcing
       full memory ordering across the barrier, we have the following
       ordering table for each pairing of barrier(), membarrier(), and
       smp_mb().  The pair ordering is detailed as (O: ordered, X: not
       ordered):

                            barrier()   smp_mb()   membarrier()
              barrier()         X          X            O
              smp_mb()          X          O            O
              membarrier()      O          O            O

RETURN VALUE
       On success, the MEMBARRIER_CMD_QUERY operation returns a bit mask
       of supported commands, and the MEMBARRIER_CMD_GLOBAL,
       MEMBARRIER_CMD_GLOBAL_EXPEDITED,
       MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED,
       MEMBARRIER_CMD_PRIVATE_EXPEDITED,
       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED,
       MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, and
       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE operations
       return zero.  On error, -1 is returned, and errno is set
       appropriately.

       For a given command, with flags set to 0, this system call is
       guaranteed to always return the same value until reboot.  Further
       calls with the same arguments will lead to the same result.
       Therefore, with flags set to 0, error handling is required only for
       the first call to membarrier().

ERRORS
       EINVAL cmd is invalid, or flags is nonzero, or the
              MEMBARRIER_CMD_GLOBAL command is disabled because the
              nohz_full CPU parameter has been set, or the
              MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE and
              MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE commands
              are not implemented by the architecture.

       ENOSYS The membarrier() system call is not implemented by this
              kernel.

       EPERM  The current process was not registered prior to using
              private expedited commands.

VERSIONS
       The membarrier() system call was added in Linux 4.3.

       Before Linux 5.10, the prototype for membarrier() was:

           int membarrier(int cmd, int flags);

CONFORMING TO
       membarrier() is Linux-specific.

NOTES
       A memory barrier instruction is part of the instruction set of
       architectures with weakly ordered memory models.  It orders memory
       accesses prior to the barrier and after the barrier with respect to
       matching barriers on other cores.  For instance, a load fence can
       order loads prior to and following that fence with respect to
       stores ordered by store fences.

       Program order is the order in which instructions are ordered in the
       program assembly code.

       Examples where membarrier() can be useful include implementations
       of Read-Copy-Update libraries and garbage collectors.

       Glibc does not provide a wrapper for this system call; call it
       using syscall(2).

EXAMPLES
       Assuming a multithreaded application where "fast_path()" is
       executed very frequently, and where "slow_path()" is executed
       infrequently, the following code (x86) can be transformed using
       membarrier():

       #include <stdlib.h>

       static volatile int a, b;

       static void
       fast_path(int *read_b)
       {
           a = 1;
           asm volatile ("mfence" : : : "memory");
           *read_b = b;
       }

       static void
       slow_path(int *read_a)
       {
           b = 1;
           asm volatile ("mfence" : : : "memory");
           *read_a = a;
       }

       int
       main(int argc, char **argv)
       {
           int read_a, read_b;

           /*
            * Real applications would call fast_path() and slow_path()
            * from different threads.  Call those from main() to keep
            * this example short.
            */

           slow_path(&read_a);
           fast_path(&read_b);

           /*
            * read_b == 0 implies read_a == 1 and
            * read_a == 0 implies read_b == 1.
            */

           if (read_b == 0 && read_a == 0)
               abort();

           exit(EXIT_SUCCESS);
       }

       The code above transformed to use membarrier() becomes:

       #define _GNU_SOURCE
       #include <stdlib.h>
       #include <stdio.h>
       #include <unistd.h>
       #include <sys/syscall.h>
       #include <linux/membarrier.h>

       static volatile int a, b;

       static int
       membarrier(int cmd, unsigned int flags, int cpu_id)
       {
           return syscall(__NR_membarrier, cmd, flags, cpu_id);
       }

       static int
       init_membarrier(void)
       {
           int ret;

           /* Check that membarrier() is supported. */

           ret = membarrier(MEMBARRIER_CMD_QUERY, 0, 0);
           if (ret < 0) {
               perror("membarrier");
               return -1;
           }

           if (!(ret & MEMBARRIER_CMD_GLOBAL)) {
               fprintf(stderr,
                   "membarrier does not support MEMBARRIER_CMD_GLOBAL\n");
               return -1;
           }

           return 0;
       }

       static void
       fast_path(int *read_b)
       {
           a = 1;
           asm volatile ("" : : : "memory");
           *read_b = b;
       }

       static void
       slow_path(int *read_a)
       {
           b = 1;
           membarrier(MEMBARRIER_CMD_GLOBAL, 0, 0);
           *read_a = a;
       }

       int
       main(int argc, char **argv)
       {
           int read_a, read_b;

           if (init_membarrier())
               exit(EXIT_FAILURE);

           /*
            * Real applications would call fast_path() and slow_path()
            * from different threads.  Call those from main() to keep
            * this example short.
            */

           slow_path(&read_a);
           fast_path(&read_b);

           /*
            * read_b == 0 implies read_a == 1 and
            * read_a == 0 implies read_b == 1.
            */

           if (read_b == 0 && read_a == 0)
               abort();

           exit(EXIT_SUCCESS);
       }

COLOPHON
       This page is part of release 5.10 of the Linux man-pages project.
       A description of the project, information about reporting bugs, and
       the latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

Linux                             2020-11-01                     MEMBARRIER(2)