DRBD.CONF(5)                     Configuration Files                     DRBD.CONF(5)
2
3
4
NAME
drbd.conf - Configuration file for DRBD's devices
7
INTRODUCTION
The file /etc/drbd.conf is read by drbdadm.
10
The file format was designed to allow a verbatim copy of the file on both nodes of the cluster. It is highly recommended to do so in order to keep your configuration manageable. The file /etc/drbd.conf should be the same on both nodes of the cluster. Changes to /etc/drbd.conf do not apply immediately.
16
By convention the main configuration file contains two include statements. The first one includes the file /etc/drbd.d/global_common.conf, the second one all files with a .res suffix.
20
21 resource r0 {
22 net {
23 protocol C;
24 cram-hmac-alg sha1;
25 shared-secret "FooFunFactory";
26 }
27 disk {
28 resync-rate 10M;
29 }
30 on alice {
31 volume 0 {
32 device minor 1;
33 disk /dev/sda7;
34 meta-disk internal;
35 }
36 address 10.1.1.31:7789;
37 }
38 on bob {
39 volume 0 {
40 device minor 1;
41 disk /dev/sda7;
42 meta-disk internal;
43 }
44 address 10.1.1.32:7789;
45 }
46 }
47
In this example, there is a single DRBD resource (called r0) which uses protocol C for the connection between its devices. The resource contains a single volume which, on host alice, uses /dev/drbd1 as the device for its application and /dev/sda7 as the low-level storage for the data. The IP addresses specify the network interfaces to be used. Any resync process that happens to run should use about 10 MByte/second of IO bandwidth. This resync-rate statement is valid for volume 0, but would also be valid for further volumes; in this example it assigns the full 10 MByte/second to each volume.
57
58 There may be multiple resource sections in a single drbd.conf file. For
59 more examples, please have a look at the DRBD User's Guide[1].
60
FILE FORMAT
The file consists of sections and parameters. A section begins with a keyword, sometimes an additional name, and an opening brace (“{”). A section ends with a closing brace (“}”). The braces enclose the parameters.
66
67 section [name] { parameter value; [...] }
68
A parameter starts with the identifier of the parameter followed by whitespace. Every subsequent character is considered part of the parameter's value. Boolean parameters are a special case and consist only of the identifier. Parameters are terminated by a semicolon (“;”).
73
74 Some parameter values have default units which might be overruled by K,
75 M or G. These units are defined in the usual way (K = 2^10 = 1024, M =
76 1024 K, G = 1024 M).
77
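For example, with the resync-rate parameter (described under Parameters below), a suffix selects the unit:

    resync-rate 10M;   # about 10 MByte/second, as in the introductory example
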
78 Comments may be placed into the configuration file and must begin with
79 a hash sign (“#”). Subsequent characters are ignored until the end of
80 the line.
81
82 Sections
83 skip
84
85 Comments out chunks of text, even spanning more than one line.
86 Characters between the keyword skip and the opening brace (“{”) are
ignored. Everything enclosed by the braces is skipped. This comes in handy if you just want to comment out some 'resource [name] {...}' section: just precede it with 'skip'.
90
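A sketch using a hypothetical resource r3; everything up to the matching closing brace is ignored:

    skip resource r3 {
        device minor 3;
        disk /dev/sdb1;
        meta-disk internal;
        on alice { address 10.1.1.31:7790; }
        on bob   { address 10.1.1.32:7790; }
    }
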
91 global
92
Configures some global parameters. Currently only minor-count, dialog-refresh, disable-ip-verification, usage-count and udev-always-use-vnr are allowed here. You may only have one global section, preferably as the first section.
97
98 common
99
All resources inherit the options set in this section. The common section might have a startup, an options, a handlers, a net and a disk section.
103
104 resource name
105
Configures a DRBD resource. Each resource section needs to have two (or more) on host sections and may have a startup, an options, a handlers, a net and a disk section. It might contain volume sections.
110
111 on host-name
112
113 Carries the necessary configuration parameters for a DRBD device of
114 the enclosing resource. host-name is mandatory and must match the
115 Linux host name (uname -n) of one of the nodes. You may list more
116 than one host name here, in case you want to use the same
117 parameters on several hosts (you'd have to move the IP around
118 usually). Or you may list more than two such sections.
119
120 resource r1 {
121 protocol C;
122 device minor 1;
123 meta-disk internal;
124
125 on alice bob {
126 address 10.2.2.100:7801;
127 disk /dev/mapper/some-san;
128 }
129 on charlie {
130 address 10.2.2.101:7801;
131 disk /dev/mapper/other-san;
132 }
133 on daisy {
134 address 10.2.2.103:7801;
135 disk /dev/mapper/other-san-as-seen-from-daisy;
136 }
137 }
138
139
See also the floating section keyword. Required statements in this section: address and volume. Note that for backward compatibility and convenience it is valid to embed the statements of a single volume directly into the host section.
144
145 volume vnr
146
Defines a volume within a connection. The minor numbers of a replicated volume might be different on different hosts; the volume number (vnr) is what groups them together. Required parameters in this section: device, disk, meta-disk.
151
152 stacked-on-top-of resource
153
For a stacked DRBD setup (3 or 4 nodes), a stacked-on-top-of section is used instead of an on section. Required parameters in this section: device and address.
157
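A sketch of a stacked resource, assuming a lower-level resource named r0 and illustrative host names and addresses:

    resource r0-U {
        net {
            protocol A;
        }
        stacked-on-top-of r0 {
            device minor 10;
            address 192.168.42.1:7789;
        }
        on charlie {
            device minor 10;
            disk /dev/sdc1;
            meta-disk internal;
            address 192.168.42.2:7789;
        }
    }
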
158 floating AF addr:port
159
160 Carries the necessary configuration parameters for a DRBD device of
161 the enclosing resource. This section is very similar to the on
section. The difference from the on section is that the matching of the host sections to machines is done by the IP address instead of the node name. Required parameters in this section: device, disk,
165 meta-disk, all of which may be inherited from the resource section,
166 in which case you may shorten this section down to just the address
167 identifier.
168
169 resource r2 {
170 protocol C;
171 device minor 2;
172 disk /dev/sda7;
173 meta-disk internal;
174
175 # short form, device, disk and meta-disk inherited
176 floating 10.1.1.31:7802;
177
178 # longer form, only device inherited
179 floating 10.1.1.32:7802 {
180 disk /dev/sdb;
181 meta-disk /dev/sdc8;
182 }
183 }
184
185
186 disk
187
This section is used to fine tune DRBD's properties with respect to the low level storage. Please refer to drbdsetup(8) for a detailed description of the parameters. Optional parameters: on-io-error,
191 size, fencing, disk-barrier, disk-flushes, disk-drain, md-flushes,
192 max-bio-bvecs, resync-rate, resync-after, al-extents, al-updates,
193 c-plan-ahead, c-fill-target, c-delay-target, c-max-rate,
194 c-min-rate, disk-timeout, discard-zeroes-if-aligned,
195 rs-discard-granularity, read-balancing.
196
197 net
198
199 This section is used to fine tune DRBD's properties. Please refer
200 to drbdsetup(8) for a detailed description of this section's
201 parameters. Optional parameters: protocol, sndbuf-size,
202 rcvbuf-size, timeout, connect-int, ping-int, ping-timeout,
203 max-buffers, max-epoch-size, ko-count, allow-two-primaries,
204 cram-hmac-alg, shared-secret, after-sb-0pri, after-sb-1pri,
205 after-sb-2pri, data-integrity-alg, no-tcp-cork, on-congestion,
206 congestion-fill, congestion-extents, verify-alg, use-rle,
207 csums-alg, socket-check-timeout.
208
209 startup
210
211 This section is used to fine tune DRBD's properties. Please refer
212 to drbdsetup(8) for a detailed description of this section's
213 parameters. Optional parameters: wfc-timeout, degr-wfc-timeout,
214 outdated-wfc-timeout, wait-after-sb, stacked-timeouts and
215 become-primary-on.
216
217 options
218
219 This section is used to fine tune the behaviour of the resource
220 object. Please refer to drbdsetup(8) for a detailed description of
221 this section's parameters. Optional parameters: cpu-mask, and
222 on-no-data-accessible.
223
224 handlers
225
226 In this section you can define handlers (executables) that are
227 started by the DRBD system in response to certain events. Optional
228 parameters: pri-on-incon-degr, pri-lost-after-sb, pri-lost,
fence-peer (formerly outdate-peer), local-io-error,
230 initial-split-brain, split-brain, before-resync-target,
231 after-resync-target.
232
233 The interface is done via environment variables:
234
235 · DRBD_RESOURCE is the name of the resource
236
237 · DRBD_MINOR is the minor number of the DRBD device, in decimal.
238
239 · DRBD_CONF is the path to the primary configuration file; if you
240 split your configuration into multiple files (e.g. in
241 /etc/drbd.conf.d/), this will not be helpful.
242
· DRBD_PEER_AF, DRBD_PEER_ADDRESS, DRBD_PEERS are the address family (e.g. ipv6), the peer's address and hostnames.
245
246
247 DRBD_PEER is deprecated.
248
249 Please note that not all of these might be set for all handlers,
and that some values might not be usable for a floating
251 definition.
252
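As an illustration, a handlers section might look like this (the notify scripts shown are shipped with many DRBD packages; verify the paths on your installation):

    handlers {
        split-brain    "/usr/lib/drbd/notify-split-brain.sh root";
        local-io-error "/usr/lib/drbd/notify-io-error.sh root";
    }
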
253 Parameters
254 minor-count count
255 count may be a number from 1 to 1048575.
256
Minor-count is a sizing hint for DRBD. It helps to right-size various memory pools. It should be set to the same order of magnitude as the actual number of minors you use. Per default the module loads with 11 more resources than you currently have in your config, but at least 32.
262
263 dialog-refresh time
264 time may be 0 or a positive number.
265
The user dialog redraws the second count every time seconds (or does not redraw at all if time is 0). The default value is 1.
268
269 disable-ip-verification
Use disable-ip-verification if, for some obscure reason, drbdadm cannot or might not use ip or ifconfig to do a sanity check for the IP address. You can disable the IP verification with this option.
273
274 udev-always-use-vnr
275 When udev asks drbdadm for a list of device related symlinks,
276 drbdadm would suggest symlinks with differing naming conventions,
277 depending on whether the resource has explicit volume VNR { }
278 definitions, or only one single volume with the implicit volume
279 number 0:
280
281 # implicit single volume without "volume 0 {}" block
282 DEVICE=drbd<minor>
283 SYMLINK_BY_RES=drbd/by-res/<resource-name>
284
285 # explicit volume definition: volume VNR { }
286 DEVICE=drbd<minor>
287 SYMLINK_BY_RES=drbd/by-res/<resource-name>/VNR
288
289 If you define this parameter in the global section, drbdadm will
always add the .../VNR part, and will not care whether the volume definition was implicit or explicit.

For legacy backward compatibility, this is off by default, but we recommend enabling it.
295
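A global section enabling it might look like this (a sketch; usage-count is described next):

    global {
        usage-count yes;
        udev-always-use-vnr;   # always add the .../VNR part to by-res symlinks
    }
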
296 usage-count val
297 Please participate in DRBD's online usage counter[2]. The most
298 convenient way to do so is to set this option to yes. Valid options
299 are: yes, no and ask.
300
301 protocol prot-id
302 On the TCP/IP link the specified protocol is used. Valid protocol
303 specifiers are A, B, and C.
304
Protocol A: write IO is reported as completed if it has reached local disk and the local TCP send buffer.

Protocol B: write IO is reported as completed if it has reached local disk and the remote buffer cache.

Protocol C: write IO is reported as completed if it has reached both local and remote disk.
313
314 device name minor nr
315
316 The name of the block device node of the resource being described.
317 You must use this device with your application (file system) and
318 you must not use the low level block device which is specified with
319 the disk parameter.
320
One can omit either the name, or the minor keyword together with the minor number. If you omit the name, a default of /dev/drbd<minor> will be used.
323
324 Udev will create additional symlinks in /dev/drbd/by-res and
325 /dev/drbd/by-disk.
326
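Both forms are possible; a sketch of equivalent variants:

    device /dev/drbd1 minor 1;   # explicit device name and minor number
    device minor 1;              # name omitted, defaults to /dev/drbd1
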
327 disk name
328
329 DRBD uses this block device to actually store and retrieve the
330 data. Never access such a device while DRBD is running on top of
331 it. This also holds true for dumpe2fs(8) and similar commands.
332
333 address AF addr:port
334
A resource needs one IP address per device, which is used to wait for incoming connections from the partner device and to reach the partner device, respectively. AF must be one of ipv4, ipv6, ssocks or sdp (for compatibility reasons sci is an alias for ssocks). It may be omitted for IPv4 addresses. The actual IPv6 address that follows the ipv6 keyword must be placed inside brackets: ipv6 [fd01:2345:6789:abcd::1]:7800.
342
343 Each DRBD resource needs a TCP port which is used to connect to the
344 node's partner device. Two different DRBD resources may not use the
345 same addr:port combination on the same node.
346
347 meta-disk internal,
348 meta-disk device,
349 meta-disk device [index]
350
351 Internal means that the last part of the backing device is used to
352 store the meta-data. The size of the meta-data is computed based on
353 the size of the device.
354
355 When a device is specified, either with or without an index, DRBD
356 stores the meta-data on this device. Without index, the size of the
357 meta-data is determined by the size of the data device. This is
usually used with LVM, which allows you to have many variably sized block devices. The meta-data size is 36kB + Backing-Storage-size / 32k, rounded up to the next 4kB boundary. (Rule of thumb: 32kByte per 1GByte of storage, rounded up to the next MB.)
362
363 When an index is specified, each index number refers to a fixed
364 slot of meta-data of 128 MB, which allows a maximum data size of 4
TiB. This way, multiple DRBD devices can share the same meta-data
366 device. For example, if /dev/sde6[0] and /dev/sde6[1] are used,
367 /dev/sde6 must be at least 256 MB big. Because of the hard size
368 limit, use of meta-disk indexes is discouraged.
369
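The three forms side by side, using hypothetical device names:

    meta-disk internal;         # meta-data in the last part of the backing device
    meta-disk /dev/vg0/r0-md;   # dedicated device, size derived from the data device
    meta-disk /dev/sde6[0];     # fixed 128 MB slot 0 on a shared meta-data device
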
370 on-io-error handler
handler is invoked if the lower-level device reports io-errors to the upper layers.
373
374 handler may be pass_on, call-local-io-error or detach.
375
376 pass_on: The node downgrades the disk status to inconsistent, marks
377 the erroneous block as inconsistent in the bitmap and retries the
378 IO on the remote node.
379
380 call-local-io-error: Call the handler script local-io-error.
381
382 detach: The node drops its low level device, and continues in
383 diskless mode.
384
385 fencing fencing_policy
386
387 By fencing we understand preventive measures to avoid situations
388 where both nodes are primary and disconnected (AKA split brain).
389
390 Valid fencing policies are:
391
392 dont-care
393 This is the default policy. No fencing actions are taken.
394
395 resource-only
396 If a node becomes a disconnected primary, it tries to fence the
397 peer's disk. This is done by calling the fence-peer handler.
398 The handler is supposed to reach the other node over
399 alternative communication paths and call 'drbdadm outdate res'
400 there.
401
402 resource-and-stonith
403 If a node becomes a disconnected primary, it freezes all its IO
404 operations and calls its fence-peer handler. The fence-peer
405 handler is supposed to reach the peer over alternative
406 communication paths and call 'drbdadm outdate res' there. In
407 case it cannot reach the peer it should stonith the peer. IO is
408 resumed as soon as the situation is resolved. In case your
409 handler fails, you can resume IO with the resume-io command.
410
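As an illustration, resource-and-stonith is often combined with a cluster-manager based fence-peer handler; the script paths below are those commonly shipped with DRBD's utility packages and should be verified on your system:

    disk {
        fencing resource-and-stonith;
    }
    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
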
411 disk-barrier,
412 disk-flushes,
413 disk-drain
414 DRBD has four implementations to express write-after-write
415 dependencies to its backing storage device. DRBD will use the first
416 method that is supported by the backing storage device and that is
417 not disabled. By default the flush method is used.
418
Since drbd-8.4.2 disk-barrier is disabled by default, because since linux-2.6.36 (or 2.6.32 RHEL6) there is no reliable way to determine whether queuing of IO-barriers works. This is dangerous; only enable it if you are told to do so by someone who knows for sure.
423
424 When selecting the method you should not only base your decision on
425 the measurable performance. In case your backing storage device has
426 a volatile write cache (plain disks, RAID of plain disks) you
427 should use one of the first two. In case your backing storage
428 device has battery-backed write cache you may go with option 3.
429 Option 4 (disable everything, use "none") is dangerous on most IO
430 stacks, may result in write-reordering, and if so, can
431 theoretically be the reason for data corruption, or disturb the
432 DRBD protocol, causing spurious disconnect/reconnect cycles. Do
433 not use no-disk-drain.
434
435 Unfortunately device mapper (LVM) might not support barriers.
436
The letter after "wo:" in /proc/drbd indicates which method is currently in use for a device: b, f, d, n. The implementations are:
439
440 barrier
441 The first requires that the driver of the backing storage
442 device support barriers (called 'tagged command queuing' in
443 SCSI and 'native command queuing' in SATA speak). The use of
this method can be enabled by setting the disk-barrier option to yes.
446
447 flush
448 The second requires that the backing device support disk
flushes (called 'force unit access' in drive vendor speak). The use of this method can be disabled by setting disk-flushes to no.
452
453 drain
454 The third method is simply to let write requests drain before
455 write requests of a new reordering domain are issued. This was
456 the only implementation before 8.0.9.
457
458 none
459 The fourth method is to not express write-after-write
460 dependencies to the backing store at all, by also specifying
461 no-disk-drain. This is dangerous on most IO stacks, may result
462 in write-reordering, and if so, can theoretically be the reason
463 for data corruption, or disturb the DRBD protocol, causing
464 spurious disconnect/reconnect cycles. Do not use
465 no-disk-drain.
466
467 md-flushes
468 Disables the use of disk flushes and barrier BIOs when accessing
469 the meta data device. See the notes on disk-flushes.
470
471 max-bio-bvecs
472 In some special circumstances the device mapper stack manages to
473 pass BIOs to DRBD that violate the constraints that are set forth
474 by DRBD's merge_bvec() function and which have more than one bvec.
475 A known example is: phys-disk -> DRBD -> LVM -> Xen -> misaligned
476 partition (63) -> DomU FS. Then you might see "bio would need to,
477 but cannot, be split:" in the Dom0's kernel log.
478
The best workaround is to properly align the partition within the VM (e.g. start it at sector 1024). This costs 480 KiB of storage. Unfortunately the default of most Linux partitioning tools is to start the first partition at an odd number (63). Therefore most distributions' install helpers for virtual Linux machines will end up with misaligned partitions. The second best workaround is to
485 limit DRBD's max bvecs per BIO (= max-bio-bvecs) to 1, but that
486 might cost performance.
487
488 The default value of max-bio-bvecs is 0, which means that there is
489 no user imposed limitation.
490
491 disk-timeout
492 If the lower-level device on which a DRBD device stores its data
493 does not finish an I/O request within the defined disk-timeout,
494 DRBD treats this as a failure. The lower-level device is detached,
495 and the device's disk state advances to Diskless. If DRBD is
496 connected to one or more peers, the failed request is passed on to
497 one of them.
498
499 This option is dangerous and may lead to kernel panic!
500
501 "Aborting" requests, or force-detaching the disk, is intended for
completely blocked/hung local backing devices which no longer complete requests at all, not even with error completions. In this situation, usually a hard-reset and failover is the only way out.

By "aborting", basically faking a local error-completion, we allow for a more graceful switchover by cleanly migrating services. Still the affected node has to be rebooted "soon".
509
510 By completing these requests, we allow the upper layers to re-use
511 the associated data pages.
512
513 If later the local backing device "recovers", and now DMAs some
514 data from disk into the original request pages, in the best case it
515 will just put random data into unused pages; but typically it will
516 corrupt meanwhile completely unrelated data, causing all sorts of
517 damage.
518
519 Which means delayed successful completion, especially for READ
520 requests, is a reason to panic(). We assume that a delayed *error*
521 completion is OK, though we still will complain noisily about it.
522
523 The default value of disk-timeout is 0, which stands for an
524 infinite timeout. Timeouts are specified in units of 0.1 seconds.
525 This option is available since DRBD 8.3.12.
526
527 discard-zeroes-if-aligned {yes | no}
528
529 There are several aspects to discard/trim/unmap support on linux
530 block devices. Even if discard is supported in general, it may fail
531 silently, or may partially ignore discard requests. Devices also
532 announce whether reading from unmapped blocks returns defined data
533 (usually zeroes), or undefined data (possibly old data, possibly
534 garbage).
535
536 If on different nodes, DRBD is backed by devices with differing
537 discard characteristics, discards may lead to data divergence (old
538 data or garbage left over on one backend, zeroes due to unmapped
539 areas on the other backend). Online verify would now potentially
540 report tons of spurious differences. While probably harmless for
541 most use cases (fstrim on a file system), DRBD cannot have that.
542
543 To play safe, we have to disable discard support, if our local
544 backend (on a Primary) does not support "discard_zeroes_data=true".
545 We also have to translate discards to explicit zero-out on the
receiving side, unless the receiving side (Secondary) supports "discard_zeroes_data=true", thereby allocating areas that were supposed to be unmapped.
549
550 There are some devices (notably the LVM/DM thin provisioning) that
551 are capable of discard, but announce discard_zeroes_data=false. In
552 the case of DM-thin, discards aligned to the chunk size will be
553 unmapped, and reading from unmapped sectors will return zeroes.
554 However, unaligned partial head or tail areas of discard requests
555 will be silently ignored.
556
557 If we now add a helper to explicitly zero-out these unaligned
558 partial areas, while passing on the discard of the aligned full
559 chunks, we effectively achieve discard_zeroes_data=true on such
560 devices.
561
562 Setting discard-zeroes-if-aligned to yes will allow DRBD to use
563 discards, and to announce discard_zeroes_data=true, even on
564 backends that announce discard_zeroes_data=false.
565
566 Setting discard-zeroes-if-aligned to no will cause DRBD to always
567 fall-back to zero-out on the receiving side, and to not even
568 announce discard capabilities on the Primary, if the respective
569 backend announces discard_zeroes_data=false.
570
571 We used to ignore the discard_zeroes_data setting completely. To
572 not break established and expected behaviour, and suddenly cause
573 fstrim on thin-provisioned LVs to run out-of-space instead of
574 freeing up space, the default value is yes.
575
576 This option is available since 8.4.7.
577
578 read-balancing method
579 The supported methods for load balancing of read requests are
580 prefer-local, prefer-remote, round-robin, least-pending,
581 when-congested-remote, 32K-striping, 64K-striping, 128K-striping,
582 256K-striping, 512K-striping and 1M-striping.
583
The default value is prefer-local. This option is available
585 since 8.4.1.
586
587 rs-discard-granularity byte
When rs-discard-granularity is set to a non-zero, positive value
589 then DRBD tries to do a resync operation in requests of this size.
590 In case such a block contains only zero bytes on the sync source
591 node, the sync target node will issue a discard/trim/unmap command
592 for the area.
593
594 The value is constrained by the discard granularity of the backing
block device. In case rs-discard-granularity is not a multiple of the discard granularity of the backing block device, DRBD rounds it up. The feature only becomes active if the backing block device reads
598 back zeroes after a discard command.
599
The default value is 0. This option is available since 8.4.7.
601
602 sndbuf-size size
603 size is the size of the TCP socket send buffer. The default value
604 is 0, i.e. autotune. You can specify smaller or larger values.
605 Larger values are appropriate for reasonable write throughput with
606 protocol A over high latency networks. Values below 32K do not make
607 sense. Since 8.0.13 resp. 8.2.7, setting the size value to 0 means
608 that the kernel should autotune this.
609
610 rcvbuf-size size
611 size is the size of the TCP socket receive buffer. The default
612 value is 0, i.e. autotune. You can specify smaller or larger
613 values. Usually this should be left at its default. Setting the
614 size value to 0 means that the kernel should autotune this.
615
616 timeout time
617
618 If the partner node fails to send an expected response packet
619 within time tenths of a second, the partner node is considered dead
620 and therefore the TCP/IP connection is abandoned. This must be
lower than connect-int and ping-int. The default value is 60 (= 6 seconds); the unit is 0.1 seconds.
623
624 connect-int time
625
626 In case it is not possible to connect to the remote DRBD device
627 immediately, DRBD keeps on trying to connect. With this option you
628 can set the time between two retries. The default value is 10
629 seconds, the unit is 1 second.
630
631 ping-int time
632
633 If the TCP/IP connection linking a DRBD device pair is idle for
634 more than time seconds, DRBD will generate a keep-alive packet to
635 check if its partner is still alive. The default is 10 seconds, the
636 unit is 1 second.
637
638 ping-timeout time
639
The time the peer has to answer a keep-alive packet. In case the peer's reply is not received within this time period, it is considered dead. The default value is 500 ms; the unit is tenths of a second.
644
645 max-buffers number
646
647 Limits the memory usage per DRBD minor device on the receiving
648 side, or for internal buffers during resync or online-verify. Unit
649 is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible
650 setting is hard coded to 32 (=128 KiB). These buffers are used to
651 hold data blocks while they are written to/read from disk. To avoid
652 possible distributed deadlocks on congestion, this setting is used
653 as a throttle threshold rather than a hard limit. Once more than
654 max-buffers pages are in use, further allocation from this pool is
655 throttled. You want to increase max-buffers if you cannot saturate
656 the IO backend on the receiving side.
657
658 ko-count number
659
660 In case the secondary node fails to complete a single write request
661 for count times the timeout, it is expelled from the cluster. (I.e.
662 the primary node will kill and restart the connection.) To disable
663 this feature, you should explicitly set it to 0; defaults may
664 change between versions.
665
666 max-epoch-size number
667
668 The highest number of data blocks between two write barriers. If
669 you set this smaller than 10, you might decrease your performance.
670
671 allow-two-primaries
672
673 With this option set you may assign the primary role to both nodes.
You should only use this option if you use a shared storage file system on top of DRBD. At the time of writing the only ones are: OCFS2 and GFS. If you use this option with any other file system, you are going to crash your nodes and corrupt your data!
678
679 unplug-watermark number
680 This setting has no effect with recent kernels that use explicit
681 on-stack plugging (upstream Linux kernel 2.6.39, distributions may
682 have backported).
683
684 When the number of pending write requests on the standby
685 (secondary) node exceeds the unplug-watermark, we trigger the
686 request processing of our backing storage device. Some storage
687 controllers deliver better performance with small values, others
688 deliver best performance when the value is set to the same value as
689 max-buffers, yet others don't feel much effect at all. Minimum 16,
690 default 128, maximum 131072.
691
692 cram-hmac-alg
693
694 You need to specify the HMAC algorithm to enable peer
695 authentication at all. You are strongly encouraged to use peer
696 authentication. The HMAC algorithm will be used for the challenge
697 response authentication of the peer. You may specify any digest
698 algorithm that is named in /proc/crypto.
699
700 shared-secret
701
702 The shared secret used in peer authentication. May be up to 64
703 characters. Note that peer authentication is disabled as long as no
704 cram-hmac-alg (see above) is specified.
705
706 after-sb-0pri policy
707 possible policies are:
708
709 disconnect
710 No automatic resynchronization, simply disconnect.
711
712 discard-younger-primary
713 Auto sync from the node that was primary before the split-brain
714 situation happened.
715
716 discard-older-primary
Auto sync from the node that became primary second during the split-brain situation.
719
720 discard-zero-changes
721 In case one node did not write anything since the split brain
722 became evident, sync from the node that wrote something to the
723 node that did not write anything. In case none wrote anything
724 this policy uses a random decision to perform a "resync" of 0
725 blocks. In case both have written something this policy
726 disconnects the nodes.
727
728 discard-least-changes
729 Auto sync from the node that touched more blocks during the
730 split brain situation.
731
732 discard-node-NODENAME
733 Auto sync to the named node.
734
735 after-sb-1pri policy
736 possible policies are:
737
738 disconnect
739 No automatic resynchronization, simply disconnect.
740
741 consensus
742 Discard the version of the secondary if the outcome of the
743 after-sb-0pri algorithm would also destroy the current
744 secondary's data. Otherwise disconnect.
745
746 violently-as0p
747 Always take the decision of the after-sb-0pri algorithm, even
748 if that causes an erratic change of the primary's view of the
749 data. This is only useful if you use a one-node FS (i.e. not
750 OCFS2 or GFS) with the allow-two-primaries flag, AND if you
751 really know what you are doing. This is DANGEROUS and MAY CRASH
752 YOUR MACHINE if you have an FS mounted on the primary node.
753
754 discard-secondary
755 Discard the secondary's version.
756
757 call-pri-lost-after-sb
758 Always honor the outcome of the after-sb-0pri algorithm. In
759 case it decides the current secondary has the right data, it
760 calls the "pri-lost-after-sb" handler on the current primary.
761
762 after-sb-2pri policy
763 possible policies are:
764
765 disconnect
766 No automatic resynchronization, simply disconnect.
767
768 violently-as0p
769 Always take the decision of the after-sb-0pri algorithm, even
770 if that causes an erratic change of the primary's view of the
771 data. This is only useful if you use a one-node FS (i.e. not
772 OCFS2 or GFS) with the allow-two-primaries flag, AND if you
773 really know what you are doing. This is DANGEROUS and MAY CRASH
774 YOUR MACHINE if you have an FS mounted on the primary node.
775
776 call-pri-lost-after-sb
777 Call the "pri-lost-after-sb" helper program on one of the
778 machines. This program is expected to reboot the machine, i.e.
779 make it secondary.
780
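A common set of automatic split-brain recovery policies, shown as a sketch of a net section:

    net {
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
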
781 always-asbp
782 Normally the automatic after-split-brain policies are only used if
783 current states of the UUIDs do not indicate the presence of a third
784 node.
785
786 With this option you request that the automatic after-split-brain
787 policies are used as long as the data sets of the nodes are somehow
788 related. This might cause a full sync, if the UUIDs indicate the
789 presence of a third node. (Or double faults led to strange UUID
790 sets.)
791
792 rr-conflict policy
793 This option helps to solve the cases when the outcome of the resync
794 decision is incompatible with the current role assignment in the
795 cluster.
796
797 disconnect
798 No automatic resynchronization, simply disconnect.
799
800 violently
801 Sync to the primary node is allowed, violating the assumption
802 that data on a block device are stable for one of the nodes.
803 Dangerous, do not use.
804
805 call-pri-lost
806 Call the pri-lost-after-sb helper program on one of the
807 machines unless that machine can demote to secondary. The
808 helper program is expected to reboot the machine, which brings
809 the node into a secondary role. Which machine runs the helper
810 program is determined by the after-sb-0pri strategy.
811
812 data-integrity-alg alg
813 DRBD can ensure the data integrity of the user's data on the
814 network by comparing hash values. Normally this is ensured by the
815 16 bit checksums in the headers of TCP/IP packets.
816
817 This option can be set to any of the kernel's data digest
818 algorithms. In a typical kernel configuration you should have at
819 least one of md5, sha1, and crc32c available. By default this is
820 not enabled.
821
822 See also the notes on data integrity.
823
824 tcp-cork
825 DRBD usually uses the TCP socket option TCP_CORK to hint to the
826 network stack when it can expect more data, and when it should
827 flush out what it has in its send queue. It turned out that there
828 is at least one network stack that performs worse when one uses
this hinting method. Therefore we introduced this option. By
830 setting tcp-cork to no you can disable the setting and clearing of
831 the TCP_CORK socket option by DRBD.
832
833 on-congestion congestion_policy,
834 congestion-fill fill_threshold,
835 congestion-extents active_extents_threshold
836 By default DRBD blocks when the available TCP send queue becomes
837 full. That means it will slow down the application that generates
838 the write requests that cause DRBD to send more data down that TCP
839 connection.
840
841 When DRBD is deployed with DRBD-proxy it might be more desirable
842 that DRBD goes into AHEAD/BEHIND mode shortly before the send queue
becomes full. In AHEAD/BEHIND mode DRBD no longer replicates data, but still keeps the connection open.
845
846 The advantage of the AHEAD/BEHIND mode is that the application is
847 not slowed down, even if DRBD-proxy's buffer is not sufficient to
848 buffer all write requests. The downside is that the peer node falls
849 behind, and that a resync will be necessary to bring it back into
850 sync. During that resync the peer node will have an inconsistent
851 disk.
852
Available congestion_policies are block and pull-ahead. The default
854 is block. Fill_threshold might be in the range of 0 to 10GiBytes.
855 The default is 0 which disables the check.
856 Active_extents_threshold has the same limits as al-extents.
857
858 The AHEAD/BEHIND mode and its settings are available since DRBD
859 8.3.10.
860
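A sketch for a DRBD-proxy deployment; the threshold values are illustrative only:

    net {
        on-congestion      pull-ahead;
        congestion-fill    2G;
        congestion-extents 2000;
    }
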
861 wfc-timeout time
862 Wait for connection timeout.
863
864 The init script drbd(8) blocks the boot process until the DRBD
865 resources are connected. When the cluster manager starts later, it
866 does not see a resource with internal split-brain. In case you want
867 to limit the wait time, do it here. Default is 0, which means
868 unlimited. The unit is seconds.
869
870 degr-wfc-timeout time
871
872 Wait for connection timeout, if this node was a degraded cluster.
873 In case a degraded cluster (= cluster with only one node left) is
874 rebooted, this timeout value is used instead of wfc-timeout,
875 because the peer is less likely to show up in time, if it had been
876 dead before. Value 0 means unlimited.
877
878 outdated-wfc-timeout time
879
880 Wait for connection timeout, if the peer was outdated. In case a
881 degraded cluster (= cluster with only one node left) with an
882 outdated peer disk is rebooted, this timeout value is used instead
883 of wfc-timeout, because the peer is not allowed to become primary
884 in the meantime. Value 0 means unlimited.
885
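A startup section combining these timeouts might look like this (values are in seconds and chosen for illustration):

    startup {
        wfc-timeout          120;   # wait at most two minutes for the peer
        degr-wfc-timeout      60;   # shorter wait if this node was degraded
        outdated-wfc-timeout  30;   # shortest wait if the peer is known to be outdated
    }
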
886 wait-after-sb
By setting this option you can make the init script continue to wait even if the device pair had a split-brain situation and therefore refuses to connect.
890
891 become-primary-on node-name
892 Sets on which node the device should be promoted to primary role by
893 the init script. The node-name might either be a host name or the
894 keyword both. When this option is not set the devices stay in
895 secondary role on both nodes. Usually one delegates the role
896 assignment to a cluster manager (e.g. heartbeat).
897
898 stacked-timeouts
899 Usually wfc-timeout and degr-wfc-timeout are ignored for stacked
900 devices, instead twice the amount of connect-int is used for the
901 connection timeouts. With the stacked-timeouts keyword you disable
902 this, and force DRBD to mind the wfc-timeout and degr-wfc-timeout
903 statements. Only do that if the peer of the stacked resource is
904 usually not available or will usually not become primary. By using
905 this option incorrectly, you run the risk of causing unexpected
906 split brain.
907
908 resync-rate rate
909
910 To ensure a smooth operation of the application on top of DRBD, it
911 is possible to limit the bandwidth which may be used by background
912 synchronizations. The default is 250 KB/sec, the default unit is
913 KB/sec. Optional suffixes K, M, G are allowed.
914
915 use-rle
916
917 During resync-handshake, the dirty-bitmaps of the nodes are
918 exchanged and merged (using bit-or), so the nodes will have the
919 same understanding of which blocks are dirty. On large devices, the
920 fine grained dirty-bitmap can become large as well, and the bitmap
921 exchange can take quite some time on low-bandwidth links.
922
923 Because the bitmap typically contains compact areas where all bits
924 are unset (clean) or set (dirty), a simple run-length encoding
925 scheme can considerably reduce the network traffic necessary for
926 the bitmap exchange.
927
928 For backward compatibility reasons, and because on fast links this
929 possibly does not improve transfer time but consumes cpu cycles,
930 this defaults to off.
931
932 socket-check-timeout value
933
934 In setups involving a DRBD-proxy and connections that experience a
lot of buffer-bloat it might be necessary to set ping-timeout to an unusually high value. By default DRBD uses the same value to wait if
937 a newly established TCP-connection is stable. Since the DRBD-proxy
938 is usually located in the same data center such a long wait time
939 may hinder DRBD's connect process.
940
In such setups socket-check-timeout should be set to at least the round trip time between DRBD and DRBD-proxy, i.e. in most cases to 1.
944
945 The default unit is tenths of a second, the default value is 0
946 (which causes DRBD to use the value of ping-timeout instead).
947 Introduced in 8.4.5.
948
949 resync-after res-name
950
951 By default, resynchronization of all devices would run in parallel.
952 By defining a resync-after dependency, the resynchronization of
953 this resource will start only if the resource res-name is already
954 in connected state (i.e., has finished its resynchronization).
955
956 al-extents extents
957
958 DRBD automatically performs hot area detection. With this parameter
959 you control how big the hot area (= active set) can get. Each
960 extent marks 4M of the backing storage (= low-level device). In
961 case a primary node leaves the cluster unexpectedly, the areas
962 covered by the active set must be resynced upon rejoining of the
963 failed node. The data structure is stored in the meta-data area,
964 therefore each change of the active set is a write operation to the
965 meta-data device. A higher number of extents gives longer resync
times but fewer updates to the meta-data. The default number of
967 extents is 1237. (Minimum: 7, Maximum: 65534)
968
969 Note that the effective maximum may be smaller, depending on how
970 you created the device meta data, see also drbdmeta(8). The
971 effective maximum is 919 * (available on-disk activity-log
ring-buffer area/4kB -1); the default 32kB ring-buffer results in a maximum of 6433 (covering more than 25 GiB of data). We recommend keeping this well within the amount your backend storage and replication link are able to resync inside of about 5 minutes.
976
977 al-updates {yes | no}
978
DRBD's activity log transaction writing makes it possible that after the crash of a primary node a partial (bitmap-based) resync is sufficient to bring the node back up to date. Setting
982 al-updates to no might increase normal operation performance but
983 causes DRBD to do a full resync when a crashed primary gets
984 reconnected. The default value is yes.
985
986 verify-alg hash-alg
987 During online verification (as initiated by the verify
988 sub-command), rather than doing a bit-wise comparison, DRBD applies
989 a hash function to the contents of every block being verified, and
990 compares that hash with the peer. This option defines the hash
991 algorithm being used for that purpose. It can be set to any of the
992 kernel's data digest algorithms. In a typical kernel configuration
993 you should have at least one of md5, sha1, and crc32c available. By
994 default this is not enabled; you must set this option explicitly in
995 order to be able to use on-line device verification.
996
997 See also the notes on data integrity.
998
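For example, to enable online verification using SHA-1 (assuming the kernel provides the sha1 digest), set it in the net section and start a run with drbdadm verify <resource>:

    net {
        verify-alg sha1;
    }
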
999 csums-alg hash-alg
1000 A resync process sends all marked data blocks from the source to
1001 the destination node, as long as no csums-alg is given. When one is
1002 specified the resync process exchanges hash values of all marked
1003 blocks first, and sends only those data blocks that have different
1004 hash values.
1005
1006 This setting is useful for DRBD setups with low bandwidth links.
1007 During the restart of a crashed primary node, all blocks covered by
1008 the activity log are marked for resync. But a large part of those
1009 will actually be still in sync, therefore using csums-alg will
1010 lower the required bandwidth in exchange for CPU cycles.
1011
1012 c-plan-ahead plan_time,
1013 c-fill-target fill_target,
1014 c-delay-target delay_target,
1015 c-max-rate max_rate
1016 The dynamic resync speed controller gets enabled with setting
1017 plan_time to a positive value. It aims to fill the buffers along
1018 the data path with either a constant amount of data fill_target, or
1019 aims to have a constant delay time of delay_target along the path.
1020 The controller has an upper bound of max_rate.
1021
The plan_time parameter configures the agility of the controller. Higher values yield slower/weaker responses of the controller to deviations from the target value. It should be at least 5 times the RTT.
1025 For regular data paths a fill_target in the area of 4k to 100k is
1026 appropriate. For a setup that contains drbd-proxy it is advisable
to use delay_target instead. Only when fill_target is set to 0 will the controller use delay_target. 5 times the RTT is a reasonable
1029 starting value. Max_rate should be set to the bandwidth available
1030 between the DRBD-hosts and the machines hosting DRBD-proxy, or to
1031 the available disk-bandwidth.
1032
The default value of plan_time is 0; the default unit is 0.1 seconds. Fill_target has a default value of 0 and a default unit of sectors. Delay_target has a default value of 1 (100ms) and a default unit of 0.1 seconds. Max_rate has a default value of 10240 (100MiB/s) and a default unit of KiB/s.
1037
1038 The dynamic resync speed controller and its settings are available
1039 since DRBD 8.3.9.
1040
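A sketch of a dynamic resync controller configuration in the disk section; the values are illustrative, and a DRBD-proxy setup would typically rely on c-delay-target rather than c-fill-target:

    disk {
        c-plan-ahead   20;     # 2 seconds; use at least 5 times the RTT
        c-fill-target  0;      # 0 selects delay-based control
        c-delay-target 10;     # aim for 1 second of delay along the path
        c-max-rate     100M;   # upper bound for the resync rate
    }
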
1041 c-min-rate min_rate
1042 A node that is primary and sync-source has to schedule application
IO requests and resync IO requests. The min_rate tells DRBD to use only up to min_rate for resync IO and to dedicate all other
1045 available IO bandwidth to application requests.
1046
1047 Note: The value 0 has a special meaning. It disables the limitation
1048 of resync IO completely, which might slow down application IO
1049 considerably. Set it to a value of 1, if you prefer that resync IO
1050 never slows down application IO.
1051
1052 Note: Although the name might suggest that it is a lower bound for
1053 the dynamic resync speed controller, it is not. If the DRBD-proxy
1054 buffer is full, the dynamic resync speed controller is free to
1055 lower the resync speed down to 0, completely independent of the
1056 c-min-rate setting.
1057
Min_rate has a default value of 4096 (4MiB/s) and a default unit of KiB/s.
1059
1060 on-no-data-accessible ond-policy
1061 This setting controls what happens to IO requests on a degraded,
diskless node (i.e. no data store is reachable). The available
1063 policies are io-error and suspend-io.
1064
1065 If ond-policy is set to suspend-io you can either resume IO by
1066 attaching/connecting the last lost data storage, or by the drbdadm
resume-io res command. The latter will of course result in IO errors.
1069
1070 The default is io-error. This setting is available since DRBD
1071 8.3.9.
1072
1073 cpu-mask cpu-mask
1074
1075 Sets the cpu-affinity-mask for DRBD's kernel threads of this
1076 device. The default value of cpu-mask is 0, which means that DRBD's
1077 kernel threads should be spread over all CPUs of the machine. This
1078 value must be given in hexadecimal notation. If it is too big it
1079 will be truncated.
1080
1081 pri-on-incon-degr cmd
1082
1083 This handler is called if the node is primary, degraded and if the
1084 local copy of the data is inconsistent.
1085
1086 pri-lost-after-sb cmd
1087
1088 The node is currently primary, but lost the after-split-brain auto
recovery procedure. As a consequence, it should be abandoned.
1090
1091 pri-lost cmd
1092
1093 The node is currently primary, but DRBD's algorithm thinks that it
1094 should become sync target. As a consequence it should give up its
1095 primary role.
1096
1097 fence-peer cmd
1098
1099 The handler is part of the fencing mechanism. This handler is
1100 called in case the node needs to fence the peer's disk. It should
1101 use other communication paths than DRBD's network link.
1102
1103 local-io-error cmd
1104
1105 DRBD got an IO error from the local IO subsystem.
1106
1107 initial-split-brain cmd
1108
1109 DRBD has connected and detected a split brain situation. This
1110 handler can alert someone in all cases of split brain, not just
1111 those that go unresolved.
1112
1113 split-brain cmd
1114
DRBD detected a split-brain situation which remains unresolved.
1116 Manual recovery is necessary. This handler should alert someone on
1117 duty.
1118
1119 before-resync-target cmd
1120
1121 DRBD calls this handler just before a resync begins on the node
1122 that becomes resync target. It might be used to take a snapshot of
1123 the backing block device.
1124
1125 after-resync-target cmd
1126
1127 DRBD calls this handler just after a resync operation finished on
1128 the node whose disk just became consistent after being inconsistent
1129 for the duration of the resync. It might be used to remove a
1130 snapshot of the backing device that was created by the
1131 before-resync-target handler.
1132
1133 Other Keywords
1134 include file-pattern
1135
1136 Include all files matching the wildcard pattern file-pattern. The
1137 include statement is only allowed on the top level, i.e. it is not
1138 allowed inside any section.
1139
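The conventional /etc/drbd.conf mentioned in the introduction typically consists of exactly two such statements:

    include "drbd.d/global_common.conf";
    include "drbd.d/*.res";
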
NOTES ON DATA INTEGRITY
There are two independent methods in DRBD to ensure the integrity of the mirrored data: the online-verify mechanism and the data-integrity-alg of the network section.
1144
1145 Both mechanisms might deliver false positives if the user of DRBD
1146 modifies the data which gets written to disk while the transfer goes
on. This may happen for swap, for certain append-while-global-sync workloads, or for truncate/rewrite workloads, and does not necessarily pose a problem for the integrity of the data. Usually when the initiator of the data
1150 transfer does this, it already knows that that data block will not be
1151 part of an on disk data structure, or will be resubmitted with correct
1152 data soon enough.
1153
1154 The data-integrity-alg causes the receiving side to log an error about
1155 "Digest integrity check FAILED: Ns +x\n", where N is the sector offset,
1156 and x is the size of the request in bytes. It will then disconnect, and
1157 reconnect, thus causing a quick resync. If the sending side at the same
1158 time detected a modification, it warns about "Digest mismatch, buffer
1159 modified by upper layers during write: Ns +x\n", which shows that this
1160 was a false positive. The sending side may detect these buffer
1161 modifications immediately after the unmodified data has been copied to
1162 the tcp buffers, in which case the receiving side won't notice it.
1163
1164 The most recent (2007) example of systematic corruption was an issue
1165 with the TCP offloading engine and the driver of a certain type of GBit
1166 NIC. The actual corruption happened on the DMA transfer from core
1167 memory to the card. Since the TCP checksum gets calculated on the card,
1168 this type of corruption stays undetected as long as you do not use
1169 either the online verify or the data-integrity-alg.
1170
We suggest using the data-integrity-alg only during a pre-production phase due to its CPU costs. Further, we suggest doing online verify runs regularly, e.g. once a month during a low-load period.
1174
VERSION
This document was revised for version 8.4.0 of the DRBD distribution.
1177
AUTHOR
Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1180 Ellenberg <lars.ellenberg@linbit.com>.
1181
REPORTING BUGS
Report bugs to <drbd-user@lists.linbit.com>.
1184
COPYRIGHT
Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner,
1187 Lars Ellenberg. This is free software; see the source for copying
1188 conditions. There is NO warranty; not even for MERCHANTABILITY or
1189 FITNESS FOR A PARTICULAR PURPOSE.
1190
SEE ALSO
drbd(8), drbddisk(8), drbdsetup(8), drbdmeta(8), drbdadm(8), DRBD
1193 User's Guide[1], DRBD web site[3]
1194
NOTES
1. DRBD User's Guide
1197 http://www.drbd.org/users-guide/
1198
1199 2. DRBD's online usage counter
1200 http://usage.drbd.org
1201
1202 3. DRBD web site
1203 http://www.drbd.org/
1204
1205
1206
DRBD 8.4.0                             6 May 2011                             DRBD.CONF(5)