DRBD.CONF(5)                  Configuration Files                 DRBD.CONF(5)

NAME
       drbd.conf - Configuration file for DRBD's devices

INTRODUCTION
       The file /etc/drbd.conf is read by drbdadm.

       The file format was designed to allow a verbatim copy of the file on
       both nodes of the cluster. It is highly recommended to do so in order
       to keep your configuration manageable. The file /etc/drbd.conf should
       be the same on both nodes of the cluster. Changes to /etc/drbd.conf
       do not apply immediately.

       By convention the main config contains two include statements. The
       first one includes the file /etc/drbd.d/global_common.conf, the
       second one all files with a .res suffix.

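       For illustration, the conventional /etc/drbd.conf may then consist of
       nothing but these two statements (the paths can differ between
       installations):

           include "drbd.d/global_common.conf";
           include "drbd.d/*.res";

       A resource definition kept in one of those *.res files could look
       like this:
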
           resource r0 {
                net {
                     protocol C;
                     cram-hmac-alg sha1;
                     shared-secret "FooFunFactory";
                }
                disk {
                     resync-rate 10M;
                }
                on alice {
                     volume 0 {
                          device    minor 1;
                          disk      /dev/sda7;
                          meta-disk internal;
                     }
                     address   10.1.1.31:7789;
                }
                on bob {
                     volume 0 {
                          device    minor 1;
                          disk      /dev/sda7;
                          meta-disk internal;
                     }
                     address   10.1.1.32:7789;
                }
           }

       In this example, there is a single DRBD resource (called r0) which
       uses protocol C for the connection between its devices. It contains a
       single volume which runs on host alice and uses /dev/drbd1 as the
       device for its application and /dev/sda7 as low-level storage for the
       data. The IP addresses are used to specify the networking interfaces
       to be used. A running resync process should use about 10MByte/second
       of IO bandwidth. This resync-rate statement is valid for volume 0,
       but would also be valid for further volumes. In this example it
       assigns the full 10MByte/second to each volume.

       There may be multiple resource sections in a single drbd.conf file.
       For more examples, please have a look at the DRBD User's Guide[1].

FILE FORMAT
       The file consists of sections and parameters. A section begins with a
       keyword, sometimes an additional name, and an opening brace (“{”). A
       section ends with a closing brace (“}”). The braces enclose the
       parameters.

       section [name] { parameter value; [...] }

       A parameter starts with the identifier of the parameter followed by
       whitespace. Every subsequent character is considered part of the
       parameter's value. A special case are Boolean parameters, which
       consist only of the identifier. Parameters are terminated by a
       semicolon (“;”).

       Some parameter values have default units which might be overruled by
       K, M or G. These units are defined in the usual way (K = 2^10 = 1024,
       M = 1024 K, G = 1024 M).

       Comments may be placed into the configuration file and must begin
       with a hash sign (“#”). Subsequent characters are ignored until the
       end of the line.

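       For example, the following parameter line (a sketch; the value is
       arbitrary) combines a unit suffix with a trailing comment:

           resync-rate 10M;   # M = 1024 K: about 10 MiByte/second
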
   Sections
       skip

           Comments out chunks of text, even spanning more than one line.
           Characters between the keyword skip and the opening brace (“{”)
           are ignored. Everything enclosed by the braces is skipped. This
           comes in handy if you just want to comment out some 'resource
           [name] {...}' section: just precede it with 'skip'.

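           For illustration, disabling a (hypothetical) resource r9 without
           deleting its definition; the words between skip and the opening
           brace are ignored:

               skip resource r9 {
                    device    minor 9;
                    disk      /dev/sdz1;
                    meta-disk internal;
               }
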
       global

           Configures some global parameters. Currently only minor-count,
           dialog-refresh, disable-ip-verification and usage-count are
           allowed here. You may only have one global section, preferably as
           the first section.

       common

           All resources inherit the options set in this section. The common
           section might have a startup, an options, a handlers, a net and a
           disk section.

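           For illustration, a common section as it might appear in
           /etc/drbd.d/global_common.conf (the values are examples only):

               common {
                    net {
                         protocol C;
                    }
                    disk {
                         resync-rate 10M;
                    }
               }
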
       resource name

           Configures a DRBD resource. Each resource section needs to have
           two (or more) on host sections and may have a startup, an
           options, a handlers, a net and a disk section. It might contain
           volume sections.

       on host-name

           Carries the necessary configuration parameters for a DRBD device
           of the enclosing resource.  host-name is mandatory and must match
           the Linux host name (uname -n) of one of the nodes. You may list
           more than one host name here, in case you want to use the same
           parameters on several hosts (you'd usually have to move the IP
           address around). Or you may list more than two such sections.

                    resource r1 {
                         protocol C;
                         device minor 1;
                         meta-disk internal;

                         on alice bob {
                              address 10.2.2.100:7801;
                              disk /dev/mapper/some-san;
                         }
                         on charlie {
                              address 10.2.2.101:7801;
                              disk /dev/mapper/other-san;
                         }
                         on daisy {
                              address 10.2.2.103:7801;
                              disk /dev/mapper/other-san-as-seen-from-daisy;
                         }
                    }

           See also the floating section keyword. Required statements in
           this section: address and volume. Note that for backward
           compatibility and convenience it is valid to embed the statements
           of a single volume directly into the host section.

       volume vnr

           Defines a volume within a connection. The minor numbers of a
           replicated volume might be different on different hosts; the
           volume number (vnr) is what groups them together. Required
           parameters in this section: device, disk, meta-disk.

       stacked-on-top-of resource

           For a stacked DRBD setup (3 or 4 nodes), a stacked-on-top-of
           section is used instead of an on section. Required parameters in
           this section: device and address.

       floating AF addr:port

           Carries the necessary configuration parameters for a DRBD device
           of the enclosing resource. This section is very similar to the on
           section. The difference from the on section is that the matching
           of the host sections to machines is done by the IP address
           instead of the node name. Required parameters in this section:
           device, disk, meta-disk, all of which may be inherited from the
           resource section, in which case you may shorten this section down
           to just the address identifier.

                    resource r2 {
                         protocol C;
                         device minor 2;
                         disk      /dev/sda7;
                         meta-disk internal;

                         # short form, device, disk and meta-disk inherited
                         floating 10.1.1.31:7802;

                         # longer form, only device inherited
                         floating 10.1.1.32:7802 {
                              disk /dev/sdb;
                              meta-disk /dev/sdc8;
                         }
                    }

       disk

           This section is used to fine-tune DRBD's properties with respect
           to the low-level storage. Please refer to drbdsetup(8) for a
           detailed description of the parameters. Optional parameters:
           on-io-error, size, fencing, disk-barrier, disk-flushes,
           disk-drain, md-flushes, max-bio-bvecs, resync-rate, resync-after,
           al-extents, al-updates, c-plan-ahead, c-fill-target,
           c-delay-target, c-max-rate, c-min-rate, disk-timeout,
           discard-zeroes-if-aligned, rs-discard-granularity,
           read-balancing.

       net

           This section is used to fine-tune DRBD's properties. Please refer
           to drbdsetup(8) for a detailed description of this section's
           parameters. Optional parameters: protocol, sndbuf-size,
           rcvbuf-size, timeout, connect-int, ping-int, ping-timeout,
           max-buffers, max-epoch-size, ko-count, allow-two-primaries,
           cram-hmac-alg, shared-secret, after-sb-0pri, after-sb-1pri,
           after-sb-2pri, data-integrity-alg, no-tcp-cork, on-congestion,
           congestion-fill, congestion-extents, verify-alg, use-rle,
           csums-alg, socket-check-timeout.

       startup

           This section is used to fine-tune DRBD's properties. Please refer
           to drbdsetup(8) for a detailed description of this section's
           parameters. Optional parameters: wfc-timeout, degr-wfc-timeout,
           outdated-wfc-timeout, wait-after-sb, stacked-timeouts and
           become-primary-on.

       options

           This section is used to fine-tune the behaviour of the resource
           object. Please refer to drbdsetup(8) for a detailed description
           of this section's parameters. Optional parameters: cpu-mask and
           on-no-data-accessible.

       handlers

           In this section you can define handlers (executables) that are
           started by the DRBD system in response to certain events.
           Optional parameters: pri-on-incon-degr, pri-lost-after-sb,
           pri-lost, fence-peer (formerly outdate-peer), local-io-error,
           initial-split-brain, split-brain, before-resync-target,
           after-resync-target.

           The interface is done via environment variables:

           •   DRBD_RESOURCE is the name of the resource.

           •   DRBD_MINOR is the minor number of the DRBD device, in
               decimal.

           •   DRBD_CONF is the path to the primary configuration file; if
               you split your configuration into multiple files (e.g. in
               /etc/drbd.conf.d/), this will not be helpful.

           •   DRBD_PEER_AF, DRBD_PEER_ADDRESS, DRBD_PEERS are the address
               family (e.g.  ipv6), the peer's address and hostnames.

           DRBD_PEER is deprecated.

           Please note that not all of these might be set for all handlers,
           and that some values might not be usable for a floating
           definition.

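           For illustration, a handlers section wired to notification
           scripts (the script paths are examples; substitute whatever is
           installed on your system):

               handlers {
                    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                    local-io-error "/usr/lib/drbd/notify-io-error.sh root";
               }
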
   Parameters
       minor-count count
           count may be a number from 1 to 1048575.

           Minor-count is a sizing hint for DRBD. It helps to right-size
           various memory pools. It should be set in the same order of
           magnitude as the actual number of minors you use. By default the
           module loads with 11 more resources than you currently have in
           your config, but at least 32.

       dialog-refresh time
           time may be 0 or a positive number.

           The user dialog redraws the second count every time seconds (or
           does not redraw at all if time is 0). The default value is 1.

       disable-ip-verification
           Use disable-ip-verification if, for some obscure reason, drbdadm
           cannot use ip or ifconfig to do a sanity check for the IP
           address. With this option you can disable the IP verification.

       udev-always-use-vnr
           When udev asks drbdadm for a list of device related symlinks,
           drbdadm would suggest symlinks with differing naming conventions,
           depending on whether the resource has explicit volume VNR { }
           definitions, or only one single volume with the implicit volume
           number 0:

               # implicit single volume without "volume 0 {}" block
               DEVICE=drbd<minor>
               SYMLINK_BY_RES=drbd/by-res/<resource-name>

               # explicit volume definition: volume VNR { }
               DEVICE=drbd<minor>
               SYMLINK_BY_RES=drbd/by-res/<resource-name>/VNR

           If you define this parameter in the global section, drbdadm will
           always add the .../VNR part, and will not care whether the volume
           definition was implicit or explicit.

           For legacy backward compatibility, this is off by default, but we
           recommend enabling it.

       usage-count val
           Please participate in DRBD's online usage counter[2]. The most
           convenient way to do so is to set this option to yes. Valid
           options are: yes, no and ask.

       protocol prot-id
           On the TCP/IP link the specified protocol is used. Valid protocol
           specifiers are A, B, and C.

           Protocol A: write IO is reported as completed if it has reached
           the local disk and the local TCP send buffer.

           Protocol B: write IO is reported as completed if it has reached
           the local disk and the remote buffer cache.

           Protocol C: write IO is reported as completed if it has reached
           both the local and the remote disk.

       device name minor nr

           The name of the block device node of the resource being
           described. You must use this device with your application (file
           system) and you must not use the low level block device which is
           specified with the disk parameter.

           One can either omit the name, or omit the minor keyword together
           with the minor number. If you omit the name, a default of
           /dev/drbd<minor> will be used.

           Udev will create additional symlinks in /dev/drbd/by-res and
           /dev/drbd/by-disk.

       disk name

           DRBD uses this block device to actually store and retrieve the
           data. Never access such a device while DRBD is running on top of
           it. This also holds true for dumpe2fs(8) and similar commands.

       address AF addr:port

           A resource needs one IP address per device, which is used to wait
           for incoming connections from the partner device and to reach the
           partner device.  AF must be one of ipv4, ipv6, ssocks or sdp (for
           compatibility reasons sci is an alias for ssocks). It may be
           omitted for IPv4 addresses. The actual IPv6 address that follows
           the ipv6 keyword must be placed inside brackets: ipv6
           [fd01:2345:6789:abcd::1]:7800.

           Each DRBD resource needs a TCP port which is used to connect to
           the node's partner device. Two different DRBD resources may not
           use the same addr:port combination on the same node.

       meta-disk internal,
       meta-disk device,
       meta-disk device [index]

           Internal means that the last part of the backing device is used
           to store the meta-data. The size of the meta-data is computed
           based on the size of the device.

           When a device is specified, either with or without an index, DRBD
           stores the meta-data on this device. Without an index, the size
           of the meta-data is determined by the size of the data device.
           This is usually used with LVM, which allows you to have many
           variable sized block devices. The meta-data size is 36kB +
           Backing-Storage-size / 32k, rounded up to the next 4kB boundary.
           (Rule of thumb: 32kByte per 1GByte of storage, rounded up to the
           next MB.)

           When an index is specified, each index number refers to a fixed
           slot of meta-data of 128 MB, which allows a maximum data size of
           4 TiB. This way, multiple DRBD devices can share the same
           meta-data device. For example, if /dev/sde6[0] and /dev/sde6[1]
           are used, /dev/sde6 must be at least 256 MB big. Because of the
           hard size limit, use of meta-disk indexes is discouraged.

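           For illustration, two volumes of one resource sharing an external
           meta-data device via indexes (all device names are examples
           only):

               volume 0 {
                    device    minor 1;
                    disk      /dev/sda7;
                    meta-disk /dev/sde6[0];
               }
               volume 1 {
                    device    minor 2;
                    disk      /dev/sda8;
                    meta-disk /dev/sde6[1];
               }
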
       on-io-error handler
           handler is taken if the lower level device reports io-errors to
           the upper layers.

           handler may be pass_on, call-local-io-error or detach.

           pass_on: The node downgrades the disk status to inconsistent,
           marks the erroneous block as inconsistent in the bitmap and
           retries the IO on the remote node.

           call-local-io-error: Call the handler script local-io-error.

           detach: The node drops its low level device, and continues in
           diskless mode.

       fencing fencing_policy

           By fencing we understand preventive measures to avoid situations
           where both nodes are primary and disconnected (AKA split brain).

           Valid fencing policies are:

           dont-care
               This is the default policy. No fencing actions are taken.

           resource-only
               If a node becomes a disconnected primary, it tries to fence
               the peer's disk. This is done by calling the fence-peer
               handler. The handler is supposed to reach the other node over
               alternative communication paths and call 'drbdadm outdate
               res' there.

           resource-and-stonith
               If a node becomes a disconnected primary, it freezes all its
               IO operations and calls its fence-peer handler. The
               fence-peer handler is supposed to reach the peer over
               alternative communication paths and call 'drbdadm outdate
               res' there. In case it cannot reach the peer, it should
               stonith the peer. IO is resumed as soon as the situation is
               resolved. In case your handler fails, you can resume IO with
               the resume-io command.

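           For illustration, a resource fragment combining a fencing policy
           with a matching fence-peer handler (the script path is an
           example; use one suited to your cluster manager):

               disk {
                    fencing resource-only;
               }
               handlers {
                    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
               }
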
       disk-barrier,
       disk-flushes,
       disk-drain
           DRBD has four implementations to express write-after-write
           dependencies to its backing storage device. DRBD will use the
           first method that is supported by the backing storage device and
           that is not disabled. By default the flush method is used.

           Since drbd-8.4.2 disk-barrier is disabled by default because
           since linux-2.6.36 (or 2.6.32 RHEL6) there is no reliable way to
           determine if queuing of IO-barriers works. Dangerous: only enable
           it if you are told to do so by someone who knows for sure.

           When selecting the method you should not base your decision on
           measurable performance alone. In case your backing storage device
           has a volatile write cache (plain disks, RAID of plain disks) you
           should use one of the first two. In case your backing storage
           device has a battery-backed write cache you may go with option 3
           (see the example after the list below). Option 4 (disable
           everything, use "none") is dangerous on most IO stacks, may
           result in write-reordering, and if so, can theoretically be the
           reason for data corruption, or disturb the DRBD protocol, causing
           spurious disconnect/reconnect cycles. Do not use no-disk-drain.

           Unfortunately device mapper (LVM) might not support barriers.

           The letter after "wo:" in /proc/drbd indicates which method is
           currently in use for a device: b, f, d, n. The implementations
           are:

           barrier
               The first requires that the driver of the backing storage
               device support barriers (called 'tagged command queuing' in
               SCSI and 'native command queuing' in SATA speak). The use of
               this method can be enabled by setting the disk-barrier option
               to yes.

           flush
               The second requires that the backing device support disk
               flushes (called 'force unit access' in the drive vendors'
               speak). The use of this method can be disabled by setting
               disk-flushes to no.

           drain
               The third method is simply to let write requests drain before
               write requests of a new reordering domain are issued. This
               was the only implementation before 8.0.9.

           none
               The fourth method is to not express write-after-write
               dependencies to the backing store at all, by also specifying
               no-disk-drain. This is dangerous on most IO stacks, may
               result in write-reordering, and if so, can theoretically be
               the reason for data corruption, or disturb the DRBD protocol,
               causing spurious disconnect/reconnect cycles. Do not use
               no-disk-drain.

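           For illustration, a disk section selecting the drain method
           (option 3) on a controller with a battery-backed write cache (a
           sketch; verify the cache really is non-volatile before copying
           this):

               disk {
                    disk-barrier no;   # unreliable detection, default off
                    disk-flushes no;   # safe only with non-volatile cache
                    # disk-drain stays enabled, so drain is used
               }
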
       md-flushes
           Disables the use of disk flushes and barrier BIOs when accessing
           the meta data device. See the notes on disk-flushes.

       max-bio-bvecs
           In some special circumstances the device mapper stack manages to
           pass BIOs to DRBD that violate the constraints that are set forth
           by DRBD's merge_bvec() function and which have more than one
           bvec. A known example is: phys-disk -> DRBD -> LVM -> Xen ->
           misaligned partition (63) -> DomU FS. Then you might see "bio
           would need to, but cannot, be split:" in the Dom0's kernel log.

           The best workaround is to properly align the partition within the
           VM (e.g. start it at sector 1024). This costs 480 KiB of storage.
           Unfortunately the default of most Linux partitioning tools is to
           start the first partition at an odd number (63). Therefore most
           distributions' install helpers for virtual linux machines will
           end up with misaligned partitions. The second best workaround is
           to limit DRBD's max bvecs per BIO (= max-bio-bvecs) to 1, but
           that might cost performance.

           The default value of max-bio-bvecs is 0, which means that there
           is no user imposed limitation.

       disk-timeout
           If the lower-level device on which a DRBD device stores its data
           does not finish an I/O request within the defined disk-timeout,
           DRBD treats this as a failure. The lower-level device is
           detached, and the device's disk state advances to Diskless. If
           DRBD is connected to one or more peers, the failed request is
           passed on to one of them.

           This option is dangerous and may lead to kernel panic!

           "Aborting" requests, or force-detaching the disk, is intended for
           completely blocked/hung local backing devices which no longer
           complete requests at all, not even do error completions. In this
           situation, usually a hard-reset and failover is the only way out.

           By "aborting", basically faking a local error-completion, we
           allow for a more graceful switchover by cleanly migrating
           services. Still the affected node has to be rebooted "soon".

           By completing these requests, we allow the upper layers to re-use
           the associated data pages.

           If later the local backing device "recovers", and now DMAs some
           data from disk into the original request pages, in the best case
           it will just put random data into unused pages; but typically it
           will corrupt meanwhile completely unrelated data, causing all
           sorts of damage.

           This means that a delayed successful completion, especially for
           READ requests, is a reason to panic(). We assume that a delayed
           *error* completion is OK, though we still will complain noisily
           about it.

           The default value of disk-timeout is 0, which stands for an
           infinite timeout. Timeouts are specified in units of 0.1 seconds.
           This option is available since DRBD 8.3.12.

       discard-zeroes-if-aligned {yes | no}

           There are several aspects to discard/trim/unmap support on linux
           block devices. Even if discard is supported in general, it may
           fail silently, or may partially ignore discard requests. Devices
           also announce whether reading from unmapped blocks returns
           defined data (usually zeroes), or undefined data (possibly old
           data, possibly garbage).

           If on different nodes, DRBD is backed by devices with differing
           discard characteristics, discards may lead to data divergence
           (old data or garbage left over on one backend, zeroes due to
           unmapped areas on the other backend). Online verify would then
           potentially report tons of spurious differences. While probably
           harmless for most use cases (fstrim on a file system), DRBD
           cannot have that.

           To play safe, we have to disable discard support if our local
           backend (on a Primary) does not support
           "discard_zeroes_data=true". We also have to translate discards to
           explicit zero-out on the receiving side, unless the receiving
           side (Secondary) supports "discard_zeroes_data=true", thereby
           allocating areas that were supposed to be unmapped.

           There are some devices (notably the LVM/DM thin provisioning)
           that are capable of discard, but announce
           discard_zeroes_data=false. In the case of DM-thin, discards
           aligned to the chunk size will be unmapped, and reading from
           unmapped sectors will return zeroes. However, unaligned partial
           head or tail areas of discard requests will be silently ignored.

           If we now add a helper to explicitly zero-out these unaligned
           partial areas, while passing on the discard of the aligned full
           chunks, we effectively achieve discard_zeroes_data=true on such
           devices.

           Setting discard-zeroes-if-aligned to yes will allow DRBD to use
           discards, and to announce discard_zeroes_data=true, even on
           backends that announce discard_zeroes_data=false.

           Setting discard-zeroes-if-aligned to no will cause DRBD to always
           fall back to zero-out on the receiving side, and to not even
           announce discard capabilities on the Primary, if the respective
           backend announces discard_zeroes_data=false.

           We used to ignore the discard_zeroes_data setting completely. To
           not break established and expected behaviour, and suddenly cause
           fstrim on thin-provisioned LVs to run out-of-space instead of
           freeing up space, the default value is yes.

           This option is available since 8.4.7.

       disable-write-same {yes | no}

           Some disks announce WRITE_SAME support to the kernel but fail
           with an I/O error upon actually receiving such a request. This
           mostly happens when using virtualized disks -- notably, this
           behavior has been observed with VMware's virtual disks.

           When disable-write-same is set to yes, WRITE_SAME detection is
           manually overridden and support is disabled.

           The default value of disable-write-same is no. This option is
           available since 8.4.7.

       read-balancing method
           The supported methods for load balancing of read requests are
           prefer-local, prefer-remote, round-robin, least-pending,
           when-congested-remote, 32K-striping, 64K-striping, 128K-striping,
           256K-striping, 512K-striping and 1M-striping.

           The default value of read-balancing is prefer-local. This option
           is available since 8.4.1.

       rs-discard-granularity byte
           When rs-discard-granularity is set to a non-zero, positive value,
           DRBD tries to do a resync operation in requests of this size. In
           case such a block contains only zero bytes on the sync source
           node, the sync target node will issue a discard/trim/unmap
           command for the area.

           The value is constrained by the discard granularity of the
           backing block device. In case rs-discard-granularity is not a
           multiple of the discard granularity of the backing block device,
           DRBD rounds it up. The feature only becomes active if the backing
           block device reads back zeroes after a discard command.

           The default value of rs-discard-granularity is 0. This option is
           available since 8.4.7.

       sndbuf-size size
           size is the size of the TCP socket send buffer. The default value
           is 0, i.e. autotune. You can specify smaller or larger values.
           Larger values are appropriate for reasonable write throughput
           with protocol A over high latency networks. Values below 32K do
           not make sense. Since 8.0.13 resp. 8.2.7, setting the size value
           to 0 means that the kernel should autotune this.

       rcvbuf-size size
           size is the size of the TCP socket receive buffer. The default
           value is 0, i.e. autotune. You can specify smaller or larger
           values. Usually this should be left at its default. Setting the
           size value to 0 means that the kernel should autotune this.

       timeout time

           If the partner node fails to send an expected response packet
           within time tenths of a second, the partner node is considered
           dead and therefore the TCP/IP connection is abandoned. This must
           be lower than connect-int and ping-int. The default value is 60 =
           6 seconds, the unit is 0.1 seconds.

       connect-int time

           In case it is not possible to connect to the remote DRBD device
           immediately, DRBD keeps on trying to connect. With this option
           you can set the time between two retries. The default value is 10
           seconds, the unit is 1 second.

       ping-int time

           If the TCP/IP connection linking a DRBD device pair is idle for
           more than time seconds, DRBD will generate a keep-alive packet to
           check if its partner is still alive. The default is 10 seconds,
           the unit is 1 second.

       ping-timeout time

           The time the peer has to answer a keep-alive packet. In case the
           peer's reply is not received within this time period, it is
           considered dead. The default value is 500ms, the default unit is
           tenths of a second.

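           For illustration, a net section keeping the relationship between
           these timeouts intact (the values are examples; remember that
           timeout is given in tenths of a second and must stay below both
           intervals):

               net {
                    timeout      60;   # 6 seconds, in units of 0.1 s
                    connect-int  10;   # 10 seconds
                    ping-int     10;   # 10 seconds
                    ping-timeout  5;   # 500 ms, in units of 0.1 s
               }
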
       max-buffers number

           Limits the memory usage per DRBD minor device on the receiving
           side, or for internal buffers during resync or online-verify.
           Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum
           possible setting is hard coded to 32 (=128 KiB). These buffers
           are used to hold data blocks while they are written to/read from
           disk. To avoid possible distributed deadlocks on congestion, this
           setting is used as a throttle threshold rather than a hard limit.
           Once more than max-buffers pages are in use, further allocation
           from this pool is throttled. You want to increase max-buffers if
           you cannot saturate the IO backend on the receiving side.

       ko-count number

           In case the secondary node fails to complete a single write
           request for count times the timeout, it is expelled from the
           cluster, i.e. the primary node will kill and restart the
           connection. To disable this feature, you should explicitly set it
           to 0; defaults may change between versions.

       max-epoch-size number

           The highest number of data blocks between two write barriers. If
           you set this smaller than 10, you might decrease your
           performance.

       allow-two-primaries

           With this option set you may assign the primary role to both
           nodes. You should only use this option if you use a shared
           storage file system on top of DRBD. At the time of writing the
           only ones are: OCFS2 and GFS. If you use this option with any
           other file system, you are going to crash your nodes and corrupt
           your data!

       unplug-watermark number
           This setting has no effect with recent kernels that use explicit
           on-stack plugging (upstream Linux kernel 2.6.39, distributions
           may have backported it).

           When the number of pending write requests on the standby
           (secondary) node exceeds the unplug-watermark, we trigger the
           request processing of our backing storage device. Some storage
           controllers deliver better performance with small values, others
           deliver best performance when the value is set to the same value
           as max-buffers, yet others don't feel much effect at all. Minimum
           16, default 128, maximum 131072.

       cram-hmac-alg

           You need to specify the HMAC algorithm to enable peer
           authentication at all. You are strongly encouraged to use peer
           authentication. The HMAC algorithm will be used for the
           challenge-response authentication of the peer. You may specify
           any digest algorithm that is named in /proc/crypto.

       shared-secret

           The shared secret used in peer authentication. May be up to 64
           characters. Note that peer authentication is disabled as long as
           no cram-hmac-alg (see above) is specified.

       after-sb-0pri  policy
           possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           discard-younger-primary
               Auto sync from the node that was primary before the
               split-brain situation happened.

           discard-older-primary
               Auto sync from the node that became primary second during the
               split-brain situation.

           discard-zero-changes
               In case one node did not write anything since the split brain
               became evident, sync from the node that wrote something to
               the node that did not write anything. In case none wrote
               anything, this policy uses a random decision to perform a
               "resync" of 0 blocks. In case both have written something,
               this policy disconnects the nodes.

           discard-least-changes
               Auto sync from the node that touched more blocks during the
               split brain situation.

           discard-node-NODENAME
               Auto sync to the named node.

       after-sb-1pri  policy
           possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           consensus
               Discard the version of the secondary if the outcome of the
               after-sb-0pri algorithm would also destroy the current
               secondary's data. Otherwise disconnect.

           violently-as0p
               Always take the decision of the after-sb-0pri algorithm, even
               if that causes an erratic change of the primary's view of the
               data. This is only useful if you use a one-node FS (i.e. not
               OCFS2 or GFS) with the allow-two-primaries flag, AND if you
               really know what you are doing. This is DANGEROUS and MAY
               CRASH YOUR MACHINE if you have an FS mounted on the primary
               node.

           discard-secondary
               Discard the secondary's version.

           call-pri-lost-after-sb
               Always honor the outcome of the after-sb-0pri algorithm. In
               case it decides the current secondary has the right data, it
               calls the "pri-lost-after-sb" handler on the current primary.

       after-sb-2pri  policy
           possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           violently-as0p
               Always take the decision of the after-sb-0pri algorithm, even
               if that causes an erratic change of the primary's view of the
               data. This is only useful if you use a one-node FS (i.e. not
               OCFS2 or GFS) with the allow-two-primaries flag, AND if you
               really know what you are doing. This is DANGEROUS and MAY
               CRASH YOUR MACHINE if you have an FS mounted on the primary
               node.

           call-pri-lost-after-sb
               Call the "pri-lost-after-sb" helper program on one of the
               machines. This program is expected to reboot the machine,
               i.e. make it secondary.

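           For illustration, a net section enabling conservative automatic
           split-brain recovery with these three policies (one common
           combination; whether automatic recovery is acceptable at all
           depends on your data):

               net {
                    after-sb-0pri discard-zero-changes;
                    after-sb-1pri consensus;
                    after-sb-2pri disconnect;
               }
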
       always-asbp
           Normally the automatic after-split-brain policies are only used
           if current states of the UUIDs do not indicate the presence of a
           third node.

           With this option you request that the automatic after-split-brain
           policies are used as long as the data sets of the nodes are
           somehow related. This might cause a full sync, if the UUIDs
           indicate the presence of a third node. (Or double faults led to
           strange UUID sets.)

       rr-conflict  policy
           This option helps to solve the cases when the outcome of the
           resync decision is incompatible with the current role assignment
           in the cluster.

           disconnect
               No automatic resynchronization, simply disconnect.

           violently
               Sync to the primary node is allowed, violating the assumption
               that data on a block device are stable for one of the nodes.
               Dangerous, do not use.

           call-pri-lost
               Call the pri-lost-after-sb helper program on one of the
               machines unless that machine can demote to secondary. The
               helper program is expected to reboot the machine, which
               brings the node into a secondary role. Which machine runs the
               helper program is determined by the after-sb-0pri strategy.

       data-integrity-alg  alg
           DRBD can ensure the data integrity of the user's data on the
           network by comparing hash values. Normally this is ensured by the
           16 bit checksums in the headers of TCP/IP packets.

           This option can be set to any of the kernel's data digest
           algorithms. In a typical kernel configuration you should have at
           least one of md5, sha1, and crc32c available. By default this is
           not enabled.

           See also the notes on data integrity.

       tcp-cork
           DRBD usually uses the TCP socket option TCP_CORK to hint to the
           network stack when it can expect more data, and when it should
           flush out what it has in its send queue. It turned out that there
           is at least one network stack that performs worse when one uses
           this hinting method. Therefore we introduced this option. By
           setting tcp-cork to no you can disable the setting and clearing
           of the TCP_CORK socket option by DRBD.

       on-congestion congestion_policy,
       congestion-fill fill_threshold,
       congestion-extents active_extents_threshold
           By default DRBD blocks when the available TCP send queue becomes
           full. That means it will slow down the application that generates
           the write requests that cause DRBD to send more data down that
           TCP connection.

           When DRBD is deployed with DRBD-proxy it might be more desirable
           that DRBD goes into AHEAD/BEHIND mode shortly before the send
           queue becomes full. In AHEAD/BEHIND mode DRBD no longer
           replicates data, but still keeps the connection open.

           The advantage of the AHEAD/BEHIND mode is that the application is
           not slowed down, even if DRBD-proxy's buffer is not sufficient to
           buffer all write requests. The downside is that the peer node
           falls behind, and that a resync will be necessary to bring it
           back into sync. During that resync the peer node will have an
           inconsistent disk.

           Available congestion policies are block and pull-ahead. The
           default is block.  fill_threshold may be in the range of 0 to
           10GiBytes. The default is 0, which disables the check.
           active_extents_threshold has the same limits as al-extents.

           The AHEAD/BEHIND mode and its settings are available since DRBD
           8.3.10.

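           For illustration, a net section that switches to AHEAD/BEHIND
           mode before the send queue fills up (the thresholds are examples
           and should be tuned to the proxy's buffer size):

               net {
                    on-congestion      pull-ahead;
                    congestion-fill    2G;
                    congestion-extents 2000;
               }
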
       wfc-timeout time
           Wait for connection timeout.

           The init script drbd(8) blocks the boot process until the DRBD
           resources are connected. When the cluster manager starts later,
           it does not see a resource with internal split-brain. In case you
           want to limit the wait time, do it here. Default is 0, which
           means unlimited. The unit is seconds.

       degr-wfc-timeout time

           Wait for connection timeout, if this node was a degraded cluster.
           In case a degraded cluster (= cluster with only one node left) is
           rebooted, this timeout value is used instead of wfc-timeout,
           because the peer is less likely to show up in time if it had been
           dead before. Value 0 means unlimited.

       outdated-wfc-timeout time

           Wait for connection timeout, if the peer was outdated. In case a
           degraded cluster (= cluster with only one node left) with an
           outdated peer disk is rebooted, this timeout value is used
           instead of wfc-timeout, because the peer is not allowed to become
           primary in the meantime. Value 0 means unlimited.

       wait-after-sb
           By setting this option you can make the init script continue to
           wait even if the device pair had a split brain situation and
           therefore refuses to connect.

       become-primary-on node-name
           Sets on which node the device should be promoted to primary role
           by the init script. The node-name might either be a host name or
           the keyword both. When this option is not set the devices stay in
           secondary role on both nodes. Usually one delegates the role
           assignment to a cluster manager (e.g. heartbeat).

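           For illustration, a startup section bounding the wait at boot
           (the host name is an example; omit become-primary-on when a
           cluster manager handles promotion):

               startup {
                    wfc-timeout       120;   # seconds
                    degr-wfc-timeout   60;   # seconds
                    become-primary-on alice;
               }
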
       stacked-timeouts
           Usually wfc-timeout and degr-wfc-timeout are ignored for stacked
           devices; instead twice the amount of connect-int is used for the
           connection timeouts. With the stacked-timeouts keyword you
           disable this, and force DRBD to mind the wfc-timeout and
           degr-wfc-timeout statements. Only do that if the peer of the
           stacked resource is usually not available or will usually not
           become primary. By using this option incorrectly, you run the
           risk of causing unexpected split brain.

       resync-rate rate

           To ensure a smooth operation of the application on top of DRBD,
           it is possible to limit the bandwidth which may be used by
           background synchronizations. The default is 250 KB/sec, the
           default unit is KB/sec. Optional suffixes K, M, G are allowed.

       use-rle

           During resync-handshake, the dirty-bitmaps of the nodes are
           exchanged and merged (using bit-or), so the nodes will have the
           same understanding of which blocks are dirty. On large devices,
           the fine grained dirty-bitmap can become large as well, and the
           bitmap exchange can take quite some time on low-bandwidth links.

           Because the bitmap typically contains compact areas where all
           bits are unset (clean) or set (dirty), a simple run-length
           encoding scheme can considerably reduce the network traffic
           necessary for the bitmap exchange.

           For backward compatibility reasons, and because on fast links
           this possibly does not improve transfer time but consumes cpu
           cycles, this defaults to off.

       socket-check-timeout value

           In setups involving a DRBD-proxy and connections that experience
           a lot of buffer-bloat it might be necessary to set ping-timeout
           to an unusually high value. By default DRBD uses the same value
           to wait if a newly established TCP-connection is stable. Since
           the DRBD-proxy is usually located in the same data center, such a
           long wait time may hinder DRBD's connect process.

           In such setups socket-check-timeout should be set to at least the
           round trip time between DRBD and DRBD-proxy, i.e. in most cases
           to 1.

           The default unit is tenths of a second, the default value is 0
           (which causes DRBD to use the value of ping-timeout instead).
           Introduced in 8.4.5.

       resync-after res-name

           By default, resynchronization of all devices would run in
           parallel. By defining a resync-after dependency, the
           resynchronization of this resource will start only if the
           resource res-name is already in connected state (i.e., has
           finished its resynchronization).

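           For illustration, a disk section fragment for a resource r1 that
           should wait for resource r0 (the names are examples):

               disk {
                    resync-after r0;
               }
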
       al-extents extents

           DRBD automatically performs hot area detection. With this
           parameter you control how big the hot area (= active set) can
           get. Each extent marks 4M of the backing storage (= low-level
           device). In case a primary node leaves the cluster unexpectedly,
           the areas covered by the active set must be resynced upon
           rejoining of the failed node. The data structure is stored in the
           meta-data area, therefore each change of the active set is a
           write operation to the meta-data device. A higher number of
           extents gives longer resync times but less updates to the
           meta-data. The default number of extents is 1237. (Minimum: 7,
           Maximum: 65534)

           Note that the effective maximum may be smaller, depending on how
           you created the device meta data, see also drbdmeta(8). The
           effective maximum is 919 * (available on-disk activity-log
           ring-buffer area/4kB - 1); the default 32kB ring-buffer yields a
           maximum of 6433 (covering more than 25 GiB of data). We recommend
           to keep this well within the amount your backend storage and
           replication link are able to resync inside of about 5 minutes.

       al-updates {yes | no}

           DRBD's activity log transaction writing makes it possible that
           after the crash of a primary node a partial (bit-map based)
           resync is sufficient to bring the node back up-to-date. Setting
           al-updates to no might increase normal operation performance but
           causes DRBD to do a full resync when a crashed primary gets
           reconnected. The default value is yes.

       verify-alg hash-alg
           During online verification (as initiated by the verify
           sub-command), rather than doing a bit-wise comparison, DRBD
           applies a hash function to the contents of every block being
           verified, and compares that hash with the peer. This option
           defines the hash algorithm being used for that purpose. It can be
           set to any of the kernel's data digest algorithms. In a typical
           kernel configuration you should have at least one of md5, sha1,
           and crc32c available. By default this is not enabled; you must
           set this option explicitly in order to be able to use on-line
           device verification.

           See also the notes on data integrity.

       csums-alg hash-alg
           A resync process sends all marked data blocks from the source to
           the destination node, as long as no csums-alg is given. When one
           is specified, the resync process exchanges hash values of all
           marked blocks first, and sends only those data blocks that have
           different hash values.

           This setting is useful for DRBD setups with low bandwidth links.
           During the restart of a crashed primary node, all blocks covered
           by the activity log are marked for resync. But a large part of
           those will actually still be in sync, therefore using csums-alg
           will lower the required bandwidth in exchange for CPU cycles.

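           For illustration, a net section enabling both checksum-based
           resync and online verification (sha1 is an example; any digest
           algorithm named in /proc/crypto works):

               net {
                    verify-alg sha1;
                    csums-alg  sha1;
               }

           With verify-alg set, an online verify run can then be started
           with drbdadm verify res.
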
       c-plan-ahead plan_time,
       c-fill-target fill_target,
       c-delay-target delay_target,
       c-max-rate max_rate
           The dynamic resync speed controller gets enabled by setting
           plan_time to a positive value. It aims to fill the buffers along
           the data path with either a constant amount of data, fill_target,
           or aims to have a constant delay time of delay_target along the
           path. The controller has an upper bound of max_rate.

           plan_time configures the agility of the controller. Higher values
           yield slower responses of the controller to deviations from the
           target value. It should be at least 5 times RTT. For regular data
           paths a fill_target in the area of 4k to 100k is appropriate. For
           a setup that contains drbd-proxy it is advisable to use
           delay_target instead. Only when fill_target is set to 0 will the
           controller use delay_target. 5 times RTT is a reasonable starting
           value.  max_rate should be set to the bandwidth available between
           the DRBD-hosts and the machines hosting DRBD-proxy, or to the
           available disk bandwidth.

           The default value of plan_time is 0, the default unit is 0.1
           seconds.  fill_target defaults to 0, with sectors as its unit.
           delay_target defaults to 1 (100ms), with 0.1 seconds as its unit.
           max_rate defaults to 102400 (100MiB/s), with KiB/s as its unit.

           The dynamic resync speed controller and its settings are
           available since DRBD 8.3.9.

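           For illustration, a disk section enabling the dynamic controller
           on a proxy-less link (the values are a rough starting point only
           and need tuning against your RTT and hardware):

               disk {
                    c-plan-ahead  20;    # agility: 2 seconds, >= 5x RTT
                    c-fill-target 10k;   # buffer fill target, in sectors
                    c-max-rate    100M;  # upper bound on resync speed
               }
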
       c-min-rate min_rate
           A node that is primary and sync-source has to schedule
           application IO requests and resync IO requests. min_rate tells
           DRBD to use only up to min_rate for resync IO and to dedicate all
           other available IO bandwidth to application requests.

           Note: The value 0 has a special meaning. It disables the
           limitation of resync IO completely, which might slow down
           application IO considerably. Set it to a value of 1, if you
           prefer that resync IO never slows down application IO.

           Note: Although the name might suggest that it is a lower bound
           for the dynamic resync speed controller, it is not. If the
           DRBD-proxy buffer is full, the dynamic resync speed controller is
           free to lower the resync speed down to 0, completely independent
           of the c-min-rate setting.

           The default value of min_rate is 250, in units of KiB/s.

       on-no-data-accessible ond-policy
           This setting controls what happens to IO requests on a degraded,
           diskless node (i.e. no data store is reachable). The available
           policies are io-error and suspend-io.

           If ond-policy is set to suspend-io you can either resume IO by
           attaching/connecting the last lost data storage, or by the
           drbdadm resume-io res command. The latter will of course result
           in IO errors.

           The default is io-error. This setting is available since DRBD
           8.3.9.

       cpu-mask cpu-mask

           Sets the cpu-affinity-mask for DRBD's kernel threads of this
           device. The default value of cpu-mask is 0, which means that
           DRBD's kernel threads should be spread over all CPUs of the
           machine. This value must be given in hexadecimal notation. If it
           is too big it will be truncated.

       pri-on-incon-degr cmd

           This handler is called if the node is primary, degraded and if
           the local copy of the data is inconsistent.

       pri-lost-after-sb cmd

           The node is currently primary, but lost the after-split-brain
           auto recovery procedure. As a consequence, it should be
           abandoned.

       pri-lost cmd

           The node is currently primary, but DRBD's algorithm thinks that
           it should become sync target. As a consequence, it should give up
           its primary role.

       fence-peer cmd

           The handler is part of the fencing mechanism. This handler is
           called in case the node needs to fence the peer's disk. It should
           use communication paths other than DRBD's network link.

       local-io-error cmd

           DRBD got an IO error from the local IO subsystem.

       initial-split-brain cmd

           DRBD has connected and detected a split brain situation. This
           handler can alert someone in all cases of split brain, not just
           those that go unresolved.

       split-brain cmd

           DRBD detected a split brain situation which remained unresolved.
           Manual recovery is necessary. This handler should alert someone
           on duty.

       before-resync-target cmd

           DRBD calls this handler just before a resync begins on the node
           that becomes resync target. It might be used to take a snapshot
           of the backing block device.

       after-resync-target cmd

           DRBD calls this handler just after a resync operation finished on
           the node whose disk just became consistent after being
           inconsistent for the duration of the resync. It might be used to
           remove a snapshot of the backing device that was created by the
           before-resync-target handler.

   Other Keywords
       include file-pattern

           Include all files matching the wildcard pattern file-pattern. The
           include statement is only allowed on the top level, i.e. it is
           not allowed inside any section.

NOTES ON DATA INTEGRITY
       There are two independent methods in DRBD to ensure the integrity of
       the mirrored data: the online-verify mechanism and the
       data-integrity-alg of the network section.

       Both mechanisms might deliver false positives if the user of DRBD
       modifies the data which gets written to disk while the transfer goes
       on. This may happen for swap, for certain append-while-global-sync
       workloads, or for truncate/rewrite workloads, and does not
       necessarily pose a problem for the integrity of the data. Usually
       when the initiator of the data transfer does this, it already knows
       that that data block will not be part of an on-disk data structure,
       or will be resubmitted with correct data soon enough.

       The data-integrity-alg causes the receiving side to log an error
       about "Digest integrity check FAILED: Ns +x\n", where N is the sector
       offset, and x is the size of the request in bytes. It will then
       disconnect and reconnect, thus causing a quick resync. If the sending
       side at the same time detected a modification, it warns about "Digest
       mismatch, buffer modified by upper layers during write: Ns +x\n",
       which shows that this was a false positive. The sending side may
       detect these buffer modifications immediately after the unmodified
       data has been copied to the tcp buffers, in which case the receiving
       side won't notice it.

       The most recent (2007) example of systematic corruption was an issue
       with the TCP offloading engine and the driver of a certain type of
       GBit NIC. The actual corruption happened on the DMA transfer from
       core memory to the card. Since the TCP checksum gets calculated on
       the card, this type of corruption stays undetected as long as you do
       not use either online verify or data-integrity-alg.

       We suggest using data-integrity-alg only during a pre-production
       phase due to its CPU costs. Further, we suggest doing online verify
       runs regularly, e.g. once a month during a low-load period.

VERSION
       This document was revised for version 8.4.0 of the DRBD distribution.

AUTHOR
       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
       Ellenberg <lars.ellenberg@linbit.com>.

REPORTING BUGS
       Report bugs to <drbd-user@lists.linbit.com>.

COPYRIGHT
       Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner,
       Lars Ellenberg. This is free software; see the source for copying
       conditions. There is NO warranty; not even for MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.

SEE ALSO
       drbd(8), drbddisk(8), drbdsetup(8), drbdmeta(8), drbdadm(8), DRBD
       User's Guide[1], DRBD web site[3]

NOTES
        1. DRBD User's Guide
           http://www.drbd.org/users-guide/

        2. DRBD's online usage counter
           http://usage.drbd.org

        3. DRBD web site
           http://www.drbd.org/

DRBD 8.4.0                        6 May 2011                      DRBD.CONF(5)