DRBD.CONF(5)                Configuration Files                DRBD.CONF(5)
2
3
4
NAME
drbd.conf - Configuration file for DRBD's devices
7
INTRODUCTION
The file /etc/drbd.conf is read by drbdadm.
10
The file format was designed to allow a verbatim copy of the file to be
kept on both nodes of the cluster. It is highly recommended to do so in
order to keep your configuration manageable. The file /etc/drbd.conf
should be the same on both nodes of the cluster. Changes to
/etc/drbd.conf do not apply immediately.
16
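A small drbd.conf along these lines might look like the following sketch (the peer host name bob, the IP addresses and the port number are made up for this illustration):

    resource r0 {
      protocol C;
      syncer { rate 10M; }

      on alice {
        device    /dev/drbd1;
        disk      /dev/sda7;
        address   10.1.1.31:7789;
        meta-disk internal;
      }
      on bob {
        device    /dev/drbd1;
        disk      /dev/sda7;
        address   10.1.1.32:7789;
        meta-disk internal;
      }
    }
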
In this example, there is a single DRBD resource (called r0) which uses
protocol C for the connection between its devices. The device which
runs on host alice uses /dev/drbd1 as the device for its application, and
/dev/sda7 as the low-level storage for the data. The IP addresses are used
to specify the networking interfaces to be used. A resync process, if one
is running, should use about 10 MByte/second of IO bandwidth.
23
24 There may be multiple resource sections in a single drbd.conf file. For
25 more examples, please have a look at the DRBD User's Guide[1].
26
FILE FORMAT
The file consists of sections and parameters. A section begins with a
keyword, sometimes an additional name, and an opening brace (“{”). A
section ends with a closing brace (“}”). The braces enclose the
parameters.
32
33 section [name] { parameter value; [...] }
34
A parameter starts with the identifier of the parameter followed by
whitespace. Every subsequent character is considered part of the
parameter's value. A special case is Boolean parameters, which consist
only of the identifier. Parameters are terminated by a semicolon (“;”).
39
Some parameter values have default units, which can be overridden by K,
M or G. These units are defined in the usual way (K = 2^10 = 1024, M =
1024 K, G = 1024 M).
43
44 Comments may be placed into the configuration file and must begin with
45 a hash sign (“#”). Subsequent characters are ignored until the end of
46 the line.
47
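For illustration, the fragment below shows a parameter with a value, a Boolean parameter, a unit suffix and a comment (the values are arbitrary):

    global {
      usage-count ask;            # parameter followed by its value, terminated by ";"
      disable-ip-verification;    # Boolean parameter: the identifier alone
    }
    common {
      syncer { rate 10M; }        # unit suffix M: 10 * 2^20, i.e. about 10 MByte/second
    }
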
48 Sections
49 skip
50
Comments out chunks of text, even spanning more than one line.
Characters between the keyword skip and the opening brace (“{”) are
ignored. Everything enclosed by the braces is skipped. This comes
in handy if you just want to comment out a 'resource [name]
{...}' section: simply precede it with the keyword skip.
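
For example, to temporarily disable a resource definition without deleting it (the resource name r9 is made up):

    skip {
      resource r9 {
        protocol C;
      }
    }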
56
57 global
58
59 Configures some global parameters. Currently only minor-count,
60 dialog-refresh, disable-ip-verification and usage-count are allowed
61 here. You may only have one global section, preferably as the first
62 section.
63
64 common
65
66 All resources inherit the options set in this section. The common
67 section might have a startup, a syncer, a handlers, a net and a
68 disk section.
69
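For example, settings that should apply to every resource can be collected here once (the shared secret shown is just a placeholder):

    common {
      syncer { rate 10M; }
      net {
        cram-hmac-alg sha1;
        shared-secret "FooFunFactory";
      }
    }
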
70 resource name
71
72 Configures a DRBD resource. Each resource section needs to have two
73 (or more) on host sections and may have a startup, a syncer, a
74 handlers, a net and a disk section. Required parameter in this
75 section: protocol.
76
77 on host-name
78
Carries the necessary configuration parameters for a DRBD device of
the enclosing resource. host-name is mandatory and must match the
Linux host name (uname -n) of one of the nodes. You may list more
than one host name here, in case you want to use the same
parameters on several hosts (you would usually have to move the IP
address between the hosts). Or you may list more than two such sections.
85
86 resource r1 {
87 protocol C;
88 device minor 1;
89 meta-disk internal;
90
91 on alice bob {
92 address 10.2.2.100:7801;
93 disk /dev/mapper/some-san;
94 }
95 on charlie {
96 address 10.2.2.101:7801;
97 disk /dev/mapper/other-san;
98 }
99 on daisy {
100 address 10.2.2.103:7801;
101 disk /dev/mapper/other-san-as-seen-from-daisy;
102 }
103 }
104
105
106 See also the floating section keyword. Required parameters in this
107 section: device, disk, address, meta-disk, flexible-meta-disk.
108
109 stacked-on-top-of resource
110
For a stacked DRBD setup (3 or 4 nodes), a stacked-on-top-of section is
used instead of an on section. Required parameters in this section:
device and address.
114
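A three- or four-node setup might stack a resource r0-U on top of the two-node resource r0 along these lines (all names, devices and addresses are illustrative):

    resource r0-U {
      protocol A;

      stacked-on-top-of r0 {
        device     /dev/drbd10;
        address    192.168.42.1:7788;
      }

      on charlie {
        device     /dev/drbd10;
        disk       /dev/hda6;
        address    192.168.42.2:7788;
        meta-disk  internal;
      }
    }
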
115 floating AF addr:port
116
Carries the necessary configuration parameters for a DRBD device of
the enclosing resource. This section is very similar to the on
section. It differs from the on section in that the matching of
host sections to machines is done by IP address instead of
by node name. Required parameters in this section: device, disk,
meta-disk, flexible-meta-disk, all of which may be inherited from
the resource section, in which case you may shorten this section
down to just the address identifier.
125
126 resource r2 {
127 protocol C;
128 device minor 2;
129 disk /dev/sda7;
130 meta-disk internal;
131
132 # short form, device, disk and meta-disk inherited
133 floating 10.1.1.31:7802;
134
135 # longer form, only device inherited
136 floating 10.1.1.32:7802 {
137 disk /dev/sdb;
138 meta-disk /dev/sdc8;
139 }
140 }
141
142
143
144 disk
145
This section is used to fine-tune DRBD's properties with respect to
the low-level storage. Please refer to drbdsetup(8) for a detailed
description of the parameters. Optional parameters: on-io-error,
size, fencing, use-bmbv, no-disk-barrier, no-disk-flushes,
no-disk-drain, no-md-flushes, max-bio-bvecs, disk-timeout.
151
152 net
153
154 This section is used to fine tune DRBD's properties. Please refer
155 to drbdsetup(8) for a detailed description of this section's
156 parameters. Optional parameters: sndbuf-size, rcvbuf-size, timeout,
157 connect-int, ping-int, ping-timeout, max-buffers, max-epoch-size,
158 ko-count, allow-two-primaries, cram-hmac-alg, shared-secret,
159 after-sb-0pri, after-sb-1pri, after-sb-2pri, data-integrity-alg,
160 no-tcp-cork, on-congestion, congestion-fill, congestion-extents
161
162 startup
163
164 This section is used to fine tune DRBD's properties. Please refer
165 to drbdsetup(8) for a detailed description of this section's
166 parameters. Optional parameters: wfc-timeout, degr-wfc-timeout,
167 outdated-wfc-timeout, wait-after-sb, stacked-timeouts and
168 become-primary-on.
169
170 syncer
171
172 This section is used to fine tune the synchronization daemon for
173 the device. Please refer to drbdsetup(8) for a detailed description
174 of this section's parameters. Optional parameters: rate, after,
175 al-extents, use-rle, cpu-mask, verify-alg, csums-alg, c-plan-ahead,
176 c-fill-target, c-delay-target, c-max-rate, c-min-rate and
177 on-no-data-accessible.
178
179 handlers
180
181 In this section you can define handlers (executables) that are
182 started by the DRBD system in response to certain events. Optional
183 parameters: pri-on-incon-degr, pri-lost-after-sb, pri-lost,
fence-peer (formerly outdate-peer), local-io-error,
185 initial-split-brain, split-brain, before-resync-target,
186 after-resync-target.
187
188 The interface is done via environment variables:
189
190 DRBD_RESOURCE
191 is the name of the resource
192
193 DRBD_MINOR
194 is the minor number of the DRBD device, in decimal.
195
196 DRBD_CONF
197 is the path to the primary configuration file; if you split
198 your configuration into multiple files (e.g. in
199 /etc/drbd.conf.d/), this will not be helpful.
200
201 DRBD_PEER_AF, DRBD_PEER_ADDRESS, DRBD_PEERS
202 are the address family (e.g. ipv6), the peer's address and
203 hostnames.
204
205
DRBD_PEER (note the singular form) is deprecated, and superseded
by DRBD_PEERS.
208
Please note that not all of these might be set for all handlers,
and that some values might not be usable for a floating
definition.
212
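A handlers section might, for example, call the helper scripts shipped with typical DRBD packages (the paths are illustrative and may differ on your installation):

    handlers {
      fence-peer     "/usr/lib/drbd/crm-fence-peer.sh";
      split-brain    "/usr/lib/drbd/notify-split-brain.sh root";
      local-io-error "/usr/lib/drbd/notify-io-error.sh root";
    }
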
213 Parameters
214 minor-count count
215 count may be a number from 1 to 255.
216
Use minor-count if you want to define many more resources
later without reloading the DRBD kernel module. By default the
module loads with 11 more resources than you currently have in your
config, but at least 32.
221
222 dialog-refresh time
223 time may be 0 or a positive number.
224
The user dialog redraws the second count every time seconds (or
does not redraw at all if time is 0). The default value is 1.
227
228 disable-ip-verification
Use disable-ip-verification if, for some obscure reason, drbdadm
cannot or should not use ip or ifconfig to do a sanity check on the IP
address. This option disables the IP verification.
232
233 usage-count val
234 Please participate in DRBD's online usage counter[2]. The most
235 convenient way to do so is to set this option to yes. Valid options
236 are: yes, no and ask.
237
238 protocol prot-id
239 On the TCP/IP link the specified protocol is used. Valid protocol
240 specifiers are A, B, and C.
241
Protocol A: write IO is reported as completed if it has reached the
local disk and the local TCP send buffer.

Protocol B: write IO is reported as completed if it has reached the
local disk and the remote buffer cache.

Protocol C: write IO is reported as completed if it has reached
both the local and the remote disk.
250
251 device name minor nr
252
253 The name of the block device node of the resource being described.
254 You must use this device with your application (file system) and
255 you must not use the low level block device which is specified with
256 the disk parameter.
257
One can omit either the device name, or the minor keyword together with
the minor number. If you omit the name, a default of /dev/drbd<minor number>
will be used.
260
261 Udev will create additional symlinks in /dev/drbd/by-res and
262 /dev/drbd/by-disk.
263
264 disk name
265
266 DRBD uses this block device to actually store and retrieve the
267 data. Never access such a device while DRBD is running on top of
268 it. This also holds true for dumpe2fs(8) and similar commands.
269
270 address AF addr:port
271
A resource needs one IP address per device, which is used to wait
for incoming connections from the partner device and to reach the
partner device, respectively. AF must be one of ipv4, ipv6, ssocks or
sdp (for compatibility reasons sci is an alias for ssocks). It may
be omitted for IPv4 addresses. The actual IPv6 address that follows
the ipv6 keyword must be placed inside brackets: ipv6
[fd01:2345:6789:abcd::1]:7800.
279
280 Each DRBD resource needs a TCP port which is used to connect to the
281 node's partner device. Two different DRBD resources may not use the
282 same addr:port combination on the same node.
283
284 meta-disk internal, flexible-meta-disk internal, meta-disk device
285 [index], flexible-meta-disk device
286
287 Internal means that the last part of the backing device is used to
288 store the meta-data. You must not use [index] with internal. Note:
289 Regardless of whether you use the meta-disk or the
290 flexible-meta-disk keyword, it will always be of the size needed
291 for the remaining storage size.
292
293 You can use a single block device to store meta-data of multiple
294 DRBD devices. E.g. use meta-disk /dev/sde6[0]; and meta-disk
295 /dev/sde6[1]; for two different resources. In this case the
296 meta-disk would need to be at least 256 MB in size.
297
With the flexible-meta-disk keyword you specify a block device as
meta-data storage. You usually use this with LVM, which allows you
to have many variable-sized block devices. The required size of the
meta-disk block device is 36kB + Backing-Storage-size / 32k. Round
this number up to the next 4kB boundary and you have the exact
size. Rule of thumb: 32kByte per 1GByte of storage, rounded up to
the next MB.
305
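As a worked example of the sizing formula: for a 1 TiB backing device, 36kB + 1TiB/32k = 36kB + 32MiB, which rounds up to a 33MB meta-data volume. A matching, purely illustrative LVM-based setup could then use:

    flexible-meta-disk /dev/vg0/r0-meta;   # e.g. a 33M logical volume for a 1 TiB backing disk
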
306 on-io-error handler
handler is invoked if the lower-level device reports IO errors to
the upper layers.
309
310 handler may be pass_on, call-local-io-error or detach.
311
312 pass_on: The node downgrades the disk status to inconsistent, marks
313 the erroneous block as inconsistent in the bitmap and retries the
314 IO on the remote node.
315
316 call-local-io-error: Call the handler script local-io-error.
317
318 detach: The node drops its low level device, and continues in
319 diskless mode.
320
321 fencing fencing_policy
322
323 By fencing we understand preventive measures to avoid situations
324 where both nodes are primary and disconnected (AKA split brain).
325
326 Valid fencing policies are:
327
328 dont-care
329 This is the default policy. No fencing actions are taken.
330
331 resource-only
332 If a node becomes a disconnected primary, it tries to fence the
333 peer's disk. This is done by calling the fence-peer handler.
334 The handler is supposed to reach the other node over
335 alternative communication paths and call 'drbdadm outdate res'
336 there.
337
338 resource-and-stonith
339 If a node becomes a disconnected primary, it freezes all its IO
340 operations and calls its fence-peer handler. The fence-peer
341 handler is supposed to reach the peer over alternative
342 communication paths and call 'drbdadm outdate res' there. In
343 case it cannot reach the peer it should stonith the peer. IO is
344 resumed as soon as the situation is resolved. In case your
345 handler fails, you can resume IO with the resume-io command.
346
347 use-bmbv
348 In case the backing storage's driver has a merge_bvec_fn()
349 function, DRBD has to pretend that it can only process IO requests
350 in units not larger than 4KiB. (At the time of writing the only
351 known drivers which have such a function are: md (software raid
352 driver), dm (device mapper - LVM) and DRBD itself).
353
To get the best performance out of DRBD on top of software RAID (or
any other driver with a merge_bvec_fn() function) you might enable
this option, provided you know for sure that the merge_bvec_fn()
function will deliver the same results on all nodes of your
cluster, i.e. the physical disks of the software RAID are of
exactly the same type. Use this option only if you know what you
are doing.
361
362 no-disk-barrier, no-disk-flushes, no-disk-drain
363 DRBD has four implementations to express write-after-write
364 dependencies to its backing storage device. DRBD will use the first
365 method that is supported by the backing storage device and that is
366 not disabled by the user.
367
368 When selecting the method you should not only base your decision on
369 the measurable performance. In case your backing storage device has
370 a volatile write cache (plain disks, RAID of plain disks) you
371 should use one of the first two. In case your backing storage
372 device has battery-backed write cache you may go with option 3.
373 Option 4 (disable everything, use "none") is dangerous on most IO
374 stacks, may result in write-reordering, and if so, can
375 theoretically be the reason for data corruption, or disturb the
376 DRBD protocol, causing spurious disconnect/reconnect cycles. Do
377 not use no-disk-drain.
378
379 Unfortunately device mapper (LVM) might not support barriers.
380
The letter after "wo:" in /proc/drbd indicates which method is
currently in use for a device: b, f, d, n. The implementations are:
383
384 barrier
385 The first requires that the driver of the backing storage
386 device support barriers (called 'tagged command queuing' in
387 SCSI and 'native command queuing' in SATA speak). The use of
388 this method can be disabled by the no-disk-barrier option.
389 Note: Since Linux-2.6.36 (or RHEL's 2.6.32) this method is
390 disabled.
391
392 flush
393 The second requires that the backing device support disk
394 flushes (called 'force unit access' in the drive vendors
395 speak). The use of this method can be disabled using the
396 no-disk-flushes option.
397
398 drain
399 The third method is simply to let write requests drain before
400 write requests of a new reordering domain are issued. This was
401 the only implementation before 8.0.9.
402
403 none
404 The fourth method is to not express write-after-write
405 dependencies to the backing store at all, by also specifying
406 no-disk-drain. This is dangerous on most IO stacks, may result
407 in write-reordering, and if so, can theoretically be the reason
408 for data corruption, or disturb the DRBD protocol, causing
409 spurious disconnect/reconnect cycles. Do not use
410 no-disk-drain.
411
412 no-md-flushes
413 Disables the use of disk flushes and barrier BIOs when accessing
414 the meta data device. See the notes on no-disk-flushes.
415
416 max-bio-bvecs
417 In some special circumstances the device mapper stack manages to
418 pass BIOs to DRBD that violate the constraints that are set forth
419 by DRBD's merge_bvec() function and which have more than one bvec.
420 A known example is: phys-disk -> DRBD -> LVM -> Xen -> misaligned
421 partition (63) -> DomU FS. Then you might see "bio would need to,
422 but cannot, be split:" in the Dom0's kernel log.
423
The best workaround is to properly align the partition within the VM
(e.g. start it at sector 1024). This costs 480 KiB of storage.
Unfortunately the default of most Linux partitioning tools is to
start the first partition at an odd number (63). Therefore most
distributions' install helpers for virtual Linux machines will end
up with misaligned partitions. The second-best workaround is to
limit DRBD's max bvecs per BIO (= max-bio-bvecs) to 1, but that
might cost performance.
432
433 The default value of max-bio-bvecs is 0, which means that there is
434 no user imposed limitation.
435
436 disk-timeout
437 If the driver of the lower_device does not finish an IO request
438 within disk_timeout, DRBD considers the disk as failed. If DRBD is
439 connected to a remote host, it will reissue local pending IO
440 requests to the peer, and ship all new IO requests to the peer
441 only. The disk state advances to diskless, as soon as the backing
442 block device has finished all IO requests.
443
444 The default value of disk-timeout is 0, which means that no timeout
445 is enforced. The default unit is 100ms. This option is available
446 since 8.3.12.
447
448 sndbuf-size size
449 size is the size of the TCP socket send buffer. The default value
450 is 0, i.e. autotune. You can specify smaller or larger values.
451 Larger values are appropriate for reasonable write throughput with
452 protocol A over high latency networks. Values below 32K do not make
sense. Since 8.0.13 and 8.2.7, respectively, setting the size value to 0 means
that the kernel should autotune this.
455
456 rcvbuf-size size
457 size is the size of the TCP socket receive buffer. The default
458 value is 0, i.e. autotune. You can specify smaller or larger
459 values. Usually this should be left at its default. Setting the
460 size value to 0 means that the kernel should autotune this.
461
462 timeout time
463
464 If the partner node fails to send an expected response packet
465 within time tenths of a second, the partner node is considered dead
466 and therefore the TCP/IP connection is abandoned. This must be
lower than connect-int and ping-int. The default value is 60 (= 6
seconds); the unit is 0.1 seconds.
469
470 connect-int time
471
472 In case it is not possible to connect to the remote DRBD device
473 immediately, DRBD keeps on trying to connect. With this option you
474 can set the time between two retries. The default value is 10
475 seconds, the unit is 1 second.
476
477 ping-int time
478
479 If the TCP/IP connection linking a DRBD device pair is idle for
480 more than time seconds, DRBD will generate a keep-alive packet to
481 check if its partner is still alive. The default is 10 seconds, the
482 unit is 1 second.
483
484 ping-timeout time
485
The time the peer has to answer a keep-alive packet. In
case the peer's reply is not received within this time period, it
is considered dead. The default value is 500ms; the default unit
is tenths of a second.
490
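The four timeouts above are related; a consistent, purely illustrative setting inside a net section could be:

    net {
      timeout      60;   #  6 seconds (unit 0.1s); must stay below connect-int and ping-int
      connect-int  10;   # 10 seconds
      ping-int     10;   # 10 seconds
      ping-timeout  5;   # 500 ms (unit 0.1s)
    }
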
491 max-buffers number
492
493 Limits the memory usage per DRBD minor device on the receiving
494 side, or for internal buffers during resync or online-verify. Unit
495 is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible
496 setting is hard coded to 32 (=128 KiB). These buffers are used to
497 hold data blocks while they are written to/read from disk. To avoid
498 possible distributed deadlocks on congestion, this setting is used
499 as a throttle threshold rather than a hard limit. Once more than
500 max-buffers pages are in use, further allocation from this pool is
501 throttled. You want to increase max-buffers if you cannot saturate
502 the IO backend on the receiving side.
503
504 ko-count number
505
506 In case the secondary node fails to complete a single write request
507 for count times the timeout, it is expelled from the cluster. (I.e.
508 the primary node goes into StandAlone mode.) To disable this
509 feature, you should explicitly set it to 0; defaults may change
510 between versions.
511
512 max-epoch-size number
513
514 The highest number of data blocks between two write barriers. If
515 you set this smaller than 10, you might decrease your performance.
516
517 allow-two-primaries
518
With this option set you may assign the primary role to both nodes.
You should only use this option if you use a shared-storage file
system on top of DRBD. At the time of writing the only ones are
OCFS2 and GFS. If you use this option with any other file system,
you are going to crash your nodes and corrupt your data!
524
525 unplug-watermark number
526 This setting has no effect with recent kernels that use explicit
527 on-stack plugging (upstream Linux kernel 2.6.39, distributions may
528 have backported).
529
530 When the number of pending write requests on the standby
531 (secondary) node exceeds the unplug-watermark, we trigger the
532 request processing of our backing storage device. Some storage
533 controllers deliver better performance with small values, others
534 deliver best performance when the value is set to the same value as
535 max-buffers, yet others don't feel much effect at all. Minimum 16,
536 default 128, maximum 131072.
537
538 cram-hmac-alg
539
540 You need to specify the HMAC algorithm to enable peer
541 authentication at all. You are strongly encouraged to use peer
542 authentication. The HMAC algorithm will be used for the challenge
543 response authentication of the peer. You may specify any digest
544 algorithm that is named in /proc/crypto.
545
546 shared-secret
547
548 The shared secret used in peer authentication. May be up to 64
549 characters. Note that peer authentication is disabled as long as no
550 cram-hmac-alg (see above) is specified.
551
552 after-sb-0pri policy
553 possible policies are:
554
555 disconnect
556 No automatic resynchronization, simply disconnect.
557
558 discard-younger-primary
559 Auto sync from the node that was primary before the split-brain
560 situation happened.
561
562 discard-older-primary
Auto sync from the node that became primary second during
the split-brain situation.
565
566 discard-zero-changes
567 In case one node did not write anything since the split brain
568 became evident, sync from the node that wrote something to the
569 node that did not write anything. In case none wrote anything
570 this policy uses a random decision to perform a "resync" of 0
571 blocks. In case both have written something this policy
572 disconnects the nodes.
573
574 discard-least-changes
575 Auto sync from the node that touched more blocks during the
576 split brain situation.
577
578 discard-node-NODENAME
579 Auto sync to the named node.
580
581 after-sb-1pri policy
582 possible policies are:
583
584 disconnect
585 No automatic resynchronization, simply disconnect.
586
587 consensus
588 Discard the version of the secondary if the outcome of the
589 after-sb-0pri algorithm would also destroy the current
590 secondary's data. Otherwise disconnect.
591
592 violently-as0p
593 Always take the decision of the after-sb-0pri algorithm, even
594 if that causes an erratic change of the primary's view of the
595 data. This is only useful if you use a one-node FS (i.e. not
596 OCFS2 or GFS) with the allow-two-primaries flag, AND if you
597 really know what you are doing. This is DANGEROUS and MAY CRASH
598 YOUR MACHINE if you have an FS mounted on the primary node.
599
600 discard-secondary
601 Discard the secondary's version.
602
603 call-pri-lost-after-sb
604 Always honor the outcome of the after-sb-0pri algorithm. In
605 case it decides the current secondary has the right data, it
606 calls the "pri-lost-after-sb" handler on the current primary.
607
608 after-sb-2pri policy
609 possible policies are:
610
611 disconnect
612 No automatic resynchronization, simply disconnect.
613
614 violently-as0p
615 Always take the decision of the after-sb-0pri algorithm, even
616 if that causes an erratic change of the primary's view of the
617 data. This is only useful if you use a one-node FS (i.e. not
618 OCFS2 or GFS) with the allow-two-primaries flag, AND if you
619 really know what you are doing. This is DANGEROUS and MAY CRASH
620 YOUR MACHINE if you have an FS mounted on the primary node.
621
622 call-pri-lost-after-sb
623 Call the "pri-lost-after-sb" helper program on one of the
624 machines. This program is expected to reboot the machine, i.e.
625 make it secondary.
626
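A frequently used combination of automatic recovery policies, shown here only as an illustration (whether automatic recovery is acceptable depends on your data), is:

    net {
      after-sb-0pri discard-zero-changes;
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;
    }
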
627 always-asbp
628 Normally the automatic after-split-brain policies are only used if
629 current states of the UUIDs do not indicate the presence of a third
630 node.
631
632 With this option you request that the automatic after-split-brain
633 policies are used as long as the data sets of the nodes are somehow
634 related. This might cause a full sync, if the UUIDs indicate the
635 presence of a third node. (Or double faults led to strange UUID
636 sets.)
637
638 rr-conflict policy
639 This option helps to solve the cases when the outcome of the resync
640 decision is incompatible with the current role assignment in the
641 cluster.
642
643 disconnect
644 No automatic resynchronization, simply disconnect.
645
646 violently
647 Sync to the primary node is allowed, violating the assumption
648 that data on a block device are stable for one of the nodes.
649 Dangerous, do not use.
650
651 call-pri-lost
652 Call the "pri-lost" helper program on one of the machines. This
653 program is expected to reboot the machine, i.e. make it
654 secondary.
655
656 data-integrity-alg alg
657 DRBD can ensure the data integrity of the user's data on the
658 network by comparing hash values. Normally this is ensured by the
659 16 bit checksums in the headers of TCP/IP packets.
660
661 This option can be set to any of the kernel's data digest
662 algorithms. In a typical kernel configuration you should have at
663 least one of md5, sha1, and crc32c available. By default this is
664 not enabled.
665
666 See also the notes on data integrity.
667
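For example, to enable the check with a cheap digest during a test phase (any digest algorithm listed in /proc/crypto would do):

    net {
      data-integrity-alg crc32c;
    }
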
668 no-tcp-cork
669 DRBD usually uses the TCP socket option TCP_CORK to hint to the
670 network stack when it can expect more data, and when it should
671 flush out what it has in its send queue. It turned out that there
672 is at least one network stack that performs worse when one uses
this hinting method. Therefore we introduced this option, which
674 disables the setting and clearing of the TCP_CORK socket option by
675 DRBD.
676
677 on-congestion congestion_policy, congestion-fill fill_threshold,
678 congestion-extents active_extents_threshold
679 By default DRBD blocks when the available TCP send queue becomes
680 full. That means it will slow down the application that generates
681 the write requests that cause DRBD to send more data down that TCP
682 connection.
683
684 When DRBD is deployed with DRBD-proxy it might be more desirable
685 that DRBD goes into AHEAD/BEHIND mode shortly before the send queue
becomes full. In AHEAD/BEHIND mode DRBD no longer replicates
data, but still keeps the connection open.
688
689 The advantage of the AHEAD/BEHIND mode is that the application is
690 not slowed down, even if DRBD-proxy's buffer is not sufficient to
691 buffer all write requests. The downside is that the peer node falls
692 behind, and that a resync will be necessary to bring it back into
693 sync. During that resync the peer node will have an inconsistent
694 disk.
695
Available congestion policies are block and pull-ahead. The default
is block. fill_threshold may be in the range of 0 to 10 GiBytes;
the default is 0, which disables the check.
active_extents_threshold has the same limits as al-extents.
700
701 The AHEAD/BEHIND mode and its settings are available since DRBD
702 8.3.10.
703
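When used together with DRBD-proxy, a configuration along these lines (values illustrative) lets DRBD pull ahead instead of blocking:

    net {
      on-congestion      pull-ahead;
      congestion-fill    2G;
      congestion-extents 2000;
    }
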
704 wfc-timeout time
705 Wait for connection timeout.
706
707 The init script drbd(8) blocks the boot process until the DRBD
708 resources are connected. When the cluster manager starts later, it
709 does not see a resource with internal split-brain. In case you want
710 to limit the wait time, do it here. Default is 0, which means
711 unlimited. The unit is seconds.
712
713 degr-wfc-timeout time
714
715 Wait for connection timeout, if this node was a degraded cluster.
716 In case a degraded cluster (= cluster with only one node left) is
717 rebooted, this timeout value is used instead of wfc-timeout,
718 because the peer is less likely to show up in time, if it had been
719 dead before. Value 0 means unlimited.
720
721 outdated-wfc-timeout time
722
723 Wait for connection timeout, if the peer was outdated. In case a
724 degraded cluster (= cluster with only one node left) with an
725 outdated peer disk is rebooted, this timeout value is used instead
726 of wfc-timeout, because the peer is not allowed to become primary
727 in the meantime. Value 0 means unlimited.
728
729 wait-after-sb
By setting this option you can make the init script continue to
wait even if the device pair had a split-brain situation and
therefore refuses to connect.
733
734 become-primary-on node-name
735 Sets on which node the device should be promoted to primary role by
736 the init script. The node-name might either be a host name or the
737 keyword both. When this option is not set the devices stay in
738 secondary role on both nodes. Usually one delegates the role
739 assignment to a cluster manager (e.g. heartbeat).
740
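A startup section combining these options might look like this (values and host name are illustrative):

    startup {
      wfc-timeout       120;    # wait at most two minutes for the peer at boot
      degr-wfc-timeout   60;    # shorter wait when rebooting a degraded cluster
      become-primary-on alice;  # only sensible without a cluster manager
    }
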
741 stacked-timeouts
742 Usually wfc-timeout and degr-wfc-timeout are ignored for stacked
743 devices, instead twice the amount of connect-int is used for the
744 connection timeouts. With the stacked-timeouts keyword you disable
745 this, and force DRBD to mind the wfc-timeout and degr-wfc-timeout
746 statements. Only do that if the peer of the stacked resource is
747 usually not available or will usually not become primary. By using
748 this option incorrectly, you run the risk of causing unexpected
749 split brain.
750
751 rate rate
752
753 To ensure a smooth operation of the application on top of DRBD, it
754 is possible to limit the bandwidth which may be used by background
755 synchronizations. The default is 250 KB/sec, the default unit is
756 KB/sec. Optional suffixes K, M, G are allowed.
757
758 use-rle
759
760 During resync-handshake, the dirty-bitmaps of the nodes are
761 exchanged and merged (using bit-or), so the nodes will have the
762 same understanding of which blocks are dirty. On large devices, the
763 fine grained dirty-bitmap can become large as well, and the bitmap
764 exchange can take quite some time on low-bandwidth links.
765
766 Because the bitmap typically contains compact areas where all bits
767 are unset (clean) or set (dirty), a simple run-length encoding
768 scheme can considerably reduce the network traffic necessary for
769 the bitmap exchange.
770
For backward compatibility reasons, and because on fast links this
possibly does not improve transfer time but consumes CPU cycles,
this defaults to off.
774
775 after res-name
776
777 By default, resynchronization of all devices would run in parallel.
778 By defining a sync-after dependency, the resynchronization of this
779 resource will start only if the resource res-name is already in
780 connected state (i.e., has finished its resynchronization).
781
782 al-extents extents
783
784 DRBD automatically performs hot area detection. With this parameter
785 you control how big the hot area (= active set) can get. Each
786 extent marks 4M of the backing storage (= low-level device). In
787 case a primary node leaves the cluster unexpectedly, the areas
788 covered by the active set must be resynced upon rejoining of the
789 failed node. The data structure is stored in the meta-data area,
790 therefore each change of the active set is a write operation to the
791 meta-data device. A higher number of extents gives longer resync
times but fewer updates to the meta-data. The default number of
extents is 127. (Minimum: 7, Maximum: 3843)
794
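A syncer section using these parameters, inside a resource or common section, could look like this (values illustrative):

    syncer {
      rate       33M;    # limit background resync to about 33 MByte/second
      al-extents 257;    # active set of 257 extents, i.e. roughly 1 GByte of hot area
      after      r0;     # start resyncing only after resource r0 has finished
    }
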
795 verify-alg hash-alg
796 During online verification (as initiated by the verify
797 sub-command), rather than doing a bit-wise comparison, DRBD applies
798 a hash function to the contents of every block being verified, and
799 compares that hash with the peer. This option defines the hash
800 algorithm being used for that purpose. It can be set to any of the
801 kernel's data digest algorithms. In a typical kernel configuration
802 you should have at least one of md5, sha1, and crc32c available. By
803 default this is not enabled; you must set this option explicitly in
804 order to be able to use on-line device verification.
805
806 See also the notes on data integrity.
807
808 csums-alg hash-alg
809 A resync process sends all marked data blocks from the source to
810 the destination node, as long as no csums-alg is given. When one is
811 specified the resync process exchanges hash values of all marked
812 blocks first, and sends only those data blocks that have different
813 hash values.
814
815 This setting is useful for DRBD setups with low bandwidth links.
816 During the restart of a crashed primary node, all blocks covered by
817 the activity log are marked for resync. But a large part of those
818 will actually be still in sync, therefore using csums-alg will
819 lower the required bandwidth in exchange for CPU cycles.
820
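To enable online verification and checksum-based resync, one could set, for example:

    syncer {
      verify-alg sha1;   # needed before the verify sub-command can be used
      csums-alg  sha1;   # resync exchanges checksums first, then only differing blocks
    }
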
821 c-plan-ahead plan_time, c-fill-target fill_target, c-delay-target
822 delay_target, c-max-rate max_rate
823 The dynamic resync speed controller gets enabled with setting
824 plan_time to a positive value. It aims to fill the buffers along
825 the data path with either a constant amount of data fill_target, or
826 aims to have a constant delay time of delay_target along the path.
827 The controller has an upper bound of max_rate.
828
The plan_time parameter configures the agility of the controller; higher
values yield slower and weaker responses of the controller to
deviations from the target value. It should be at least 5 times the RTT.
For regular data paths a fill_target in the area of 4k to 100k is
appropriate. For a setup that contains drbd-proxy it is advisable
to use delay_target instead. Only when fill_target is set to 0 will the
controller use delay_target. 5 times the RTT is a reasonable
starting value. max_rate should be set to the bandwidth available
between the DRBD hosts and the machines hosting DRBD-proxy, or to
the available disk bandwidth.
839
The default value of plan_time is 0; the default unit is 0.1
seconds. fill_target has a default of 0 and sectors as its default unit.
delay_target has a default of 1 (100ms) and 0.1 seconds as its default unit.
max_rate has a default of 102400 (100MiB/s) and KiB/s as its default unit.
844
845 The dynamic resync speed controller and its settings are available
846 since DRBD 8.3.9.
847
848 c-min-rate min_rate
849 A node that is primary and sync-source has to schedule application
IO requests and resync IO requests. The min_rate tells DRBD to use
only up to min_rate of bandwidth for resync IO and to dedicate all other
available IO bandwidth to application requests.
853
854 Note: The value 0 has a special meaning. It disables the limitation
855 of resync IO completely, which might slow down application IO
856 considerably. Set it to a value of 1, if you prefer that resync IO
857 never slows down application IO.
858
859 Note: Although the name might suggest that it is a lower bound for
860 the dynamic resync speed controller, it is not. If the DRBD-proxy
861 buffer is full, the dynamic resync speed controller is free to
862 lower the resync speed down to 0, completely independent of the
863 c-min-rate setting.
864
min_rate has a default of 4096 (4MiB/s) and KiB/s as its default unit.
866
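To enable the dynamic resync controller one might, for example, set (values illustrative):

    syncer {
      c-plan-ahead  20;     # 2.0 seconds (unit 0.1s): enables the controller
      c-max-rate   100M;    # upper bound, roughly the available bandwidth
      c-min-rate     4M;    # resync uses at most this much while application IO competes
    }
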
867 on-no-data-accessible ond-policy
This setting controls what happens to IO requests on a degraded,
diskless node (i.e. no data store is reachable). The available
policies are io-error and suspend-io.
871
872 If ond-policy is set to suspend-io you can either resume IO by
873 attaching/connecting the last lost data storage, or by the drbdadm
874 resume-io res command. The latter will result in IO errors of
875 course.
876
877 The default is io-error. This setting is available since DRBD
878 8.3.9.
879
880 cpu-mask cpu-mask
881
882 Sets the cpu-affinity-mask for DRBD's kernel threads of this
883 device. The default value of cpu-mask is 0, which means that DRBD's
884 kernel threads should be spread over all CPUs of the machine. This
885 value must be given in hexadecimal notation. If it is too big it
886 will be truncated.
887
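For example, to restrict DRBD's kernel threads to the first two CPUs (hexadecimal mask with bits 0 and 1 set):

    syncer {
      cpu-mask 3;
    }
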
888 pri-on-incon-degr cmd
889
890 This handler is called if the node is primary, degraded and if the
891 local copy of the data is inconsistent.
892
893 pri-lost-after-sb cmd
894
The node is currently primary, but lost the after-split-brain auto
recovery procedure. As a consequence, it should be abandoned.
897
898 pri-lost cmd
899
900 The node is currently primary, but DRBD's algorithm thinks that it
901 should become sync target. As a consequence it should give up its
902 primary role.
903
904 fence-peer cmd
905
906 The handler is part of the fencing mechanism. This handler is
907 called in case the node needs to fence the peer's disk. It should
908 use other communication paths than DRBD's network link.
909
910 local-io-error cmd
911
912 DRBD got an IO error from the local IO subsystem.
913
914 initial-split-brain cmd
915
916 DRBD has connected and detected a split brain situation. This
917 handler can alert someone in all cases of split brain, not just
918 those that go unresolved.
919
920 split-brain cmd
921
DRBD detected a split-brain situation that remains unresolved.
Manual recovery is necessary. This handler should alert someone on
duty.
925
926 before-resync-target cmd
927
928 DRBD calls this handler just before a resync begins on the node
929 that becomes resync target. It might be used to take a snapshot of
930 the backing block device.
931
932 after-resync-target cmd
933
934 DRBD calls this handler just after a resync operation finished on
935 the node whose disk just became consistent after being inconsistent
936 for the duration of the resync. It might be used to remove a
937 snapshot of the backing device that was created by the
938 before-resync-target handler.
939
940 Other Keywords
941 include file-pattern
942
943 Include all files matching the wildcard pattern file-pattern. The
944 include statement is only allowed on the top level, i.e. it is not
945 allowed inside any section.
946
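A typical top-level /etc/drbd.conf therefore often contains nothing but include statements (the file names below are conventional, not mandatory):

    include "drbd.d/global_common.conf";
    include "drbd.d/*.res";
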
NOTES ON DATA INTEGRITY
There are two independent methods in DRBD to ensure the integrity of
the mirrored data: the online-verify mechanism and the
data-integrity-alg of the network section.

Both mechanisms might deliver false positives if the user of DRBD
modifies the data which gets written to disk while the transfer goes
on. This may happen for swap, for certain append-while-global-sync
workloads, or for truncate/rewrite workloads, and does not necessarily
pose a problem for the integrity of the data. Usually when the initiator
of the data transfer does this, it already knows that that data block
will not be part of an on-disk data structure, or will be resubmitted
with correct data soon enough.
960
The data-integrity-alg causes the receiving side to log an error about
"Digest integrity check FAILED: Ns +x\n", where N is the sector offset
and x is the size of the request in bytes. It will then disconnect, and
964 reconnect, thus causing a quick resync. If the sending side at the same
965 time detected a modification, it warns about "Digest mismatch, buffer
966 modified by upper layers during write: Ns +x\n", which shows that this
967 was a false positive. The sending side may detect these buffer
968 modifications immediately after the unmodified data has been copied to
969 the tcp buffers, in which case the receiving side won't notice it.
970
971 The most recent (2007) example of systematic corruption was an issue
972 with the TCP offloading engine and the driver of a certain type of GBit
973 NIC. The actual corruption happened on the DMA transfer from core
974 memory to the card. Since the TCP checksum gets calculated on the card,
975 this type of corruption stays undetected as long as you do not use
976 either the online verify or the data-integrity-alg.
977
We suggest using the data-integrity-alg only during a pre-production
phase due to its CPU costs. Further, we suggest doing online verify runs
regularly, e.g. once a month during a low-load period.
981
VERSION
This document was revised for version 8.3.2 of the DRBD distribution.
984
AUTHOR
Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
Ellenberg <lars.ellenberg@linbit.com>.
988
REPORTING BUGS
Report bugs to <drbd-user@lists.linbit.com>.
991
COPYRIGHT
Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner,
Lars Ellenberg. This is free software; see the source for copying
conditions. There is NO warranty; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.
997
SEE ALSO
drbd(8), drbddisk(8), drbdsetup(8), drbdadm(8), DRBD User's Guide[1],
DRBD web site[3]
1001
NOTES
1. DRBD User's Guide
1004 http://www.drbd.org/users-guide/
1005
1006 2. DRBD's online usage counter
1007 http://usage.drbd.org
1008
1009 3. DRBD web site
1010 http://www.drbd.org/
1011
1012
1013
DRBD 8.3.2                      5 Dec 2008                     DRBD.CONF(5)