DRBD.CONF(5)                  Configuration Files                  DRBD.CONF(5)

NAME
drbd.conf - DRBD Configuration Files

9 DRBD implements block devices which replicate their data to all nodes
10 of a cluster. The actual data and associated metadata are usually
11 stored redundantly on "ordinary" block devices on each cluster node.
12
13 Replicated block devices are called /dev/drbdminor by default. They are
14 grouped into resources, with one or more devices per resource.
15 Replication among the devices in a resource takes place in
16 chronological order. With DRBD, we refer to the devices inside a
17 resource as volumes.
18
19 In DRBD 9, a resource can be replicated between two or more cluster
20 nodes. The connections between cluster nodes are point-to-point links,
21 and use TCP or a TCP-like protocol. All nodes must be directly
22 connected.
23
24 DRBD consists of low-level user-space components which interact with
25 the kernel and perform basic operations (drbdsetup, drbdmeta), a
26 high-level user-space component which understands and processes the
27 DRBD configuration and translates it into basic operations of the
28 low-level components (drbdadm), and a kernel component.
29
30 The default DRBD configuration consists of /etc/drbd.conf and of
31 additional files included from there, usually global_common.conf and
32 all *.res files inside /etc/drbd.d/. It has turned out to be useful to
33 define each resource in a separate *.res file.
34
35 The configuration files are designed so that each cluster node can
36 contain an identical copy of the entire cluster configuration. The host
name of each node (uname -n) determines which parts of the
configuration apply. It is highly recommended to keep the cluster
configuration
39 on all nodes in sync by manually copying it to all nodes, or by
40 automating the process with csync2 or a similar tool.
41
EXAMPLE

   global {
      usage-count yes;
      udev-always-use-vnr;
   }
   resource r0 {
      net {
         cram-hmac-alg sha1;
         shared-secret "FooFunFactory";
      }
      volume 0 {
         device    /dev/drbd1;
         disk      /dev/sda7;
         meta-disk internal;
      }
      on alice {
         node-id 0;
         address 10.1.1.31:7000;
      }
      on bob {
         node-id 1;
         address 10.1.1.32:7000;
      }
      connection {
         host alice port 7000;
         host bob   port 7000;
         net {
            protocol C;
         }
      }
   }
73
74 This example defines a resource r0 which contains a single replicated
75 device with volume number 0. The resource is replicated among hosts
76 alice and bob, which have the IPv4 addresses 10.1.1.31 and 10.1.1.32
77 and the node identifiers 0 and 1, respectively. On both hosts, the
78 replicated device is called /dev/drbd1, and the actual data and
79 metadata are stored on the lower-level device /dev/sda7. The connection
80 between the hosts uses protocol C.
81
82 Please refer to the DRBD User's Guide[1] for more examples.
83
85 DRBD configuration files consist of sections, which contain other
86 sections and parameters depending on the section types. Each section
87 consists of one or more keywords, sometimes a section name, an opening
88 brace (“{”), the section's contents, and a closing brace (“}”).
89 Parameters inside a section consist of a keyword, followed by one or
90 more keywords or values, and a semicolon (“;”).
91
92 Some parameter values have a default scale which applies when a plain
93 number is specified (for example Kilo, or 1024 times the numeric
94 value). Such default scales can be overridden by using a suffix (for
95 example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
96 and G = 1024 M are supported.
97
98 Comments start with a hash sign (“#”) and extend to the end of the
99 line. In addition, any section can be prefixed with the keyword skip,
100 which causes the section and any sub-sections to be ignored.
101
102 Additional files can be included with the include file-pattern
103 statement (see glob(7) for the expressions supported in file-pattern).
104 Include statements are only allowed outside of sections.
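
For example, /etc/drbd.conf typically pulls in the other configuration
files, and a section can be commented out with skip (the resource name
below is illustrative):

   include "drbd.d/global_common.conf";
   include "drbd.d/*.res";

   skip resource r1 {
      # everything in this section is ignored
   }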
105
106 The following sections are defined (indentation indicates in which
107 context):
108
109 common
110 [disk]
111 [handlers]
112 [net]
113 [options]
114 [startup]
115 global
116 [require-drbd-module-version-{eq,ne,gt,ge,lt,le}]
117 resource
118 connection
119 path
120 net
121 volume
122 peer-device-options
123 [peer-device-options]
124 connection-mesh
125 net
126 [disk]
127 floating
128 handlers
129 [net]
130 on
131 volume
132 disk
133 [disk]
134 options
135 stacked-on-top-of
136 startup
137
138 Sections in brackets affect other parts of the configuration: inside
139 the common section, they apply to all resources. A disk section inside
140 a resource or on section applies to all volumes of that resource, and a
141 net section inside a resource section applies to all connections of
that resource. This makes it possible to avoid repeating identical
options for each resource, connection, or volume. Options can be overridden in a
144 more specific resource, connection, on, or volume section.
145
The peer-device-options are resync-rate, c-plan-ahead, c-delay-target,
c-fill-target, c-max-rate and c-min-rate. For backward compatibility
they can also be specified in any disk options section. They are
inherited into all relevant connections. If they are given at the
connection level, they are inherited by all volumes on that
connection. A peer-device-options section is started with the disk
keyword, as in the example below.
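
For example, a peer-device option such as resync-rate can be set in a
resource-level disk section and overridden for one connection; the
values are illustrative:

   resource r0 {
      disk {
         resync-rate 10M;     # inherited by all connections and volumes
      }
      connection {
         host alice;
         host bob;
         disk {
            c-max-rate 80M;   # peer-device-options section, opened with "disk"
         }
      }
   }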
153
154 Sections
155 common
156
This section can contain a disk, a handlers, a net, an options, and
a startup section. All resources inherit the parameters in these
sections as their default values.
160
161 connection [name]
162
163 Define a connection between two hosts. This section must contain
164 two host parameters or multiple path sections. The optional name is
165 used to refer to the connection in the system log and in other
166 messages. If no name is specified, the peer's host name is used
167 instead.
168
169 path
170
171 Define a path between two hosts. This section must contain two host
172 parameters.
173
174 connection-mesh
175
176 Define a connection mesh between multiple hosts. This section must
177 contain a hosts parameter, which has the host names as arguments.
178 This section is a shortcut to define many connections which share
179 the same network options.
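
As a sketch, a mesh between three nodes (assuming matching on sections
for alice, bob, and charlie exist in the resource) could look like this:

   resource r0 {
      connection-mesh {
         hosts alice bob charlie;
         net {
            protocol C;
         }
      }
   }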
180
181 disk
182
183 Define parameters for a volume. All parameters in this section are
184 optional.
185
186 floating [address-family] addr:port
187
188 Like the on section, except that instead of the host name a network
189 address is used to determine if it matches a floating section.
190
191 The node-id parameter in this section is required. If the address
192 parameter is not provided, no connections to peers will be created
193 by default. The device, disk, and meta-disk parameters must be
194 defined in, or inherited by, this section.
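
A sketch of a two-node resource using floating sections instead of on
sections, reusing the addresses of the example above:

   resource r0 {
      floating 10.1.1.31:7000 {
         node-id   0;
         device    /dev/drbd1;
         disk      /dev/sda7;
         meta-disk internal;
      }
      floating 10.1.1.32:7000 {
         node-id   1;
         device    /dev/drbd1;
         disk      /dev/sda7;
         meta-disk internal;
      }
   }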
195
196 global
197
198 Define some global parameters. All parameters in this section are
199 optional. Only one global section is allowed in the configuration.
200
201 require-drbd-module-version-{eq,ne,gt,ge,lt,le}
202
This statement contains one of the valid forms and a three-digit
version number (e.g., require-drbd-module-version-eq 9.0.16;). If
the currently loaded DRBD kernel module does not match the
specification, parsing is aborted. The comparison operator names
have the same semantics as in test(1).
208
209 handlers
210
211 Define handlers to be invoked when certain events occur. The kernel
212 passes the resource name in the first command-line argument and
213 sets the following environment variables depending on the event's
214 context:
215
216 · For events related to a particular device: the device's minor
217 number in DRBD_MINOR, the device's volume number in
218 DRBD_VOLUME.
219
220 · For events related to a particular device on a particular peer:
221 the connection endpoints in DRBD_MY_ADDRESS, DRBD_MY_AF,
222 DRBD_PEER_ADDRESS, and DRBD_PEER_AF; the device's local minor
223 number in DRBD_MINOR, and the device's volume number in
224 DRBD_VOLUME.
225
226 · For events related to a particular connection: the connection
227 endpoints in DRBD_MY_ADDRESS, DRBD_MY_AF, DRBD_PEER_ADDRESS,
228 and DRBD_PEER_AF; and, for each device defined for that
229 connection: the device's minor number in
230 DRBD_MINOR_volume-number.
231
232 · For events that identify a device, if a lower-level device is
233 attached, the lower-level device's device name is passed in
234 DRBD_BACKING_DEV (or DRBD_BACKING_DEV_volume-number).
235
236 All parameters in this section are optional. Only a single handler
237 can be defined for each event; if no handler is defined, nothing
238 will happen.
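
As an illustration, the split-brain and out-of-sync events can be wired
to the notification scripts shipped with drbd-utils; the paths and the
mail recipient below are typical but may differ on your system:

   resource r0 {
      handlers {
         split-brain "/usr/lib/drbd/notify-split-brain.sh root";
         out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
      }
   }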
239
240 net
241
242 Define parameters for a connection. All parameters in this section
243 are optional.
244
245 on host-name [...]
246
247 Define the properties of a resource on a particular host or set of
248 hosts. Specifying more than one host name can make sense in a setup
249 with IP address failover, for example. The host-name argument must
250 match the Linux host name (uname -n).
251
252 Usually contains or inherits at least one volume section. The
253 node-id and address parameters must be defined in this section. The
254 device, disk, and meta-disk parameters must be defined in, or
255 inherited by, this section.
256
257 A normal configuration file contains two or more on sections for
258 each resource. Also see the floating section.
259
260 options
261
262 Define parameters for a resource. All parameters in this section
263 are optional.
264
265 resource name
266
267 Define a resource. Usually contains at least two on sections and at
268 least one connection section.
269
270 stacked-on-top-of resource
271
272 Used instead of an on section for configuring a stacked resource
273 with three to four nodes.
274
275 Starting with DRBD 9, stacking is deprecated. It is advised to use
276 resources which are replicated among more than two nodes instead.
277
278 startup
279
280 The parameters in this section determine the behavior of a resource
281 at startup time.
282
283 volume volume-number
284
285 Define a volume within a resource. The volume numbers in the
286 various volume sections of a resource define which devices on which
287 hosts form a replicated device.
288
289 Section connection Parameters
290 host name [address [address-family] address] [port port-number]
291
292 Defines an endpoint for a connection. Each host statement refers to
293 an on section in a resource. If a port number is defined, this
294 endpoint will use the specified port instead of the port defined in
295 the on section. Each connection section must contain exactly two
296 host parameters. Instead of two host parameters the connection may
297 contain multiple path sections.
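
For example, a connection that may use either of two network paths
between the hosts of the introductory example could be sketched as
follows (addresses and ports are illustrative):

   connection {
      path {
         host alice address 10.1.1.31 port 7010;
         host bob   address 10.1.1.32 port 7010;
      }
      path {
         host alice address 192.168.1.31 port 7010;
         host bob   address 192.168.1.32 port 7010;
      }
   }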
298
299 Section path Parameters
300 host name [address [address-family] address] [port port-number]
301
302 Defines an endpoint for a connection. Each host statement refers to
303 an on section in a resource. If a port number is defined, this
304 endpoint will use the specified port instead of the port defined in
305 the on section. Each path section must contain exactly two host
306 parameters.
307
308 Section connection-mesh Parameters
309 hosts name...
310
311 Defines all nodes of a mesh. Each name refers to an on section in a
312 resource. The port that is defined in the on section will be used.
313
314 Section disk Parameters
315 al-extents extents
316
317 DRBD automatically maintains a "hot" or "active" disk area likely
318 to be written to again soon based on the recent write activity. The
319 "active" disk area can be written to immediately, while "inactive"
320 disk areas must be "activated" first, which requires a meta-data
321 write. We also refer to this active disk area as the "activity
322 log".
323
324 The activity log saves meta-data writes, but the whole log must be
325 resynced upon recovery of a failed node. The size of the activity
326 log is a major factor of how long a resync will take and how fast a
327 replicated disk will become consistent after a crash.
328
329 The activity log consists of a number of 4-Megabyte segments; the
330 al-extents parameter determines how many of those segments can be
331 active at the same time. The default value for al-extents is 1237,
332 with a minimum of 7 and a maximum of 65536.
333
Note that the effective maximum may be smaller, depending on how
you created the device metadata; see also drbdmeta(8). The
effective maximum is 919 * (available on-disk activity-log
ring-buffer area/4kB - 1); the default 32kB ring buffer yields a
maximum of 6433 (which covers more than 25 GiB of data). We
recommend keeping this well within the amount your backend storage
and replication link are able to resync within about 5 minutes.
341
342 al-updates {yes | no}
343
344 With this parameter, the activity log can be turned off entirely
345 (see the al-extents parameter). This will speed up writes because
346 fewer meta-data writes will be necessary, but the entire device
needs to be resynchronized upon recovery of a failed primary node.
348 The default value for al-updates is yes.
349
350 disk-barrier,
351 disk-flushes,
352 disk-drain
353 DRBD has three methods of handling the ordering of dependent write
354 requests:
355
356 disk-barrier
357 Use disk barriers to make sure that requests are written to
358 disk in the right order. Barriers ensure that all requests
359 submitted before a barrier make it to the disk before any
360 requests submitted after the barrier. This is implemented using
361 'tagged command queuing' on SCSI devices and 'native command
362 queuing' on SATA devices. Only some devices and device stacks
363 support this method. The device mapper (LVM) only supports
364 barriers in some configurations.
365
366 Note that on systems which do not support disk barriers,
367 enabling this option can lead to data loss or corruption. Until
368 DRBD 8.4.1, disk-barrier was turned on if the I/O stack below
DRBD did support barriers. Kernels since linux-2.6.36 (or
2.6.32 RHEL6) no longer make it possible to detect whether barriers
are supported. Since drbd-8.4.2, this option is off by default and
needs to be enabled explicitly.
373
374 disk-flushes
375 Use disk flushes between dependent write requests, also
376 referred to as 'force unit access' by drive vendors. This
377 forces all data to disk. This option is enabled by default.
378
379 disk-drain
380 Wait for the request queue to "drain" (that is, wait for the
381 requests to finish) before submitting a dependent write
382 request. This method requires that requests are stable on disk
383 when they finish. Before DRBD 8.0.9, this was the only method
384 implemented. This option is enabled by default. Do not disable
385 in production environments.
386
Of these three methods, DRBD will use the first that is enabled
388 and supported by the backing storage device. If all three of these
389 options are turned off, DRBD will submit write requests without
390 bothering about dependencies. Depending on the I/O stack, write
391 requests can be reordered, and they can be submitted in a different
392 order on different cluster nodes. This can result in data loss or
393 corruption. Therefore, turning off all three methods of controlling
394 write ordering is strongly discouraged.
395
396 A general guideline for configuring write ordering is to use disk
397 barriers or disk flushes when using ordinary disks (or an ordinary
398 disk array) with a volatile write cache. On storage without cache
399 or with a battery backed write cache, disk draining can be a
400 reasonable choice.
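
As a sketch, a disk section for storage with a battery-backed write
cache might rely on draining and turn off flushes; whether this is safe
depends entirely on your hardware, so verify before use:

   resource r0 {
      disk {
         disk-flushes no;
         md-flushes   no;
         disk-drain   yes;
      }
   }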
401
402 disk-timeout
403 If the lower-level device on which a DRBD device stores its data
404 does not finish an I/O request within the defined disk-timeout,
405 DRBD treats this as a failure. The lower-level device is detached,
406 and the device's disk state advances to Diskless. If DRBD is
407 connected to one or more peers, the failed request is passed on to
408 one of them.
409
410 This option is dangerous and may lead to kernel panic!
411
412 "Aborting" requests, or force-detaching the disk, is intended for
413 completely blocked/hung local backing devices which do no longer
414 complete requests at all, not even do error completions. In this
415 situation, usually a hard-reset and failover is the only way out.
416
417 By "aborting", basically faking a local error-completion, we allow
418 for a more graceful swichover by cleanly migrating services. Still
419 the affected node has to be rebooted "soon".
420
421 By completing these requests, we allow the upper layers to re-use
422 the associated data pages.
423
424 If later the local backing device "recovers", and now DMAs some
425 data from disk into the original request pages, in the best case it
426 will just put random data into unused pages; but typically it will
427 corrupt meanwhile completely unrelated data, causing all sorts of
428 damage.
429
This means that delayed successful completion, especially of READ
requests, is a reason to panic(). We assume that a delayed *error*
completion is OK, though we will still complain noisily about it.
433
434 The default value of disk-timeout is 0, which stands for an
435 infinite timeout. Timeouts are specified in units of 0.1 seconds.
436 This option is available since DRBD 8.3.12.
437
438 md-flushes
439 Enable disk flushes and disk barriers on the meta-data device. This
440 option is enabled by default. See the disk-flushes parameter.
441
442 on-io-error handler
443
444 Configure how DRBD reacts to I/O errors on a lower-level device.
445 The following policies are defined:
446
447 pass_on
448 Change the disk status to Inconsistent, mark the failed block
449 as inconsistent in the bitmap, and retry the I/O operation on a
450 remote cluster node.
451
452 call-local-io-error
453 Call the local-io-error handler (see the handlers section).
454
455 detach
456 Detach the lower-level device and continue in diskless mode.
457
458
459 read-balancing policy
460 Distribute read requests among cluster nodes as defined by policy.
461 The supported policies are prefer-local (the default),
462 prefer-remote, round-robin, least-pending, when-congested-remote,
463 32K-striping, 64K-striping, 128K-striping, 256K-striping,
464 512K-striping and 1M-striping.
465
466 This option is available since DRBD 8.4.1.
467
468 resync-after res-name/volume
469
470 Define that a device should only resynchronize after the specified
471 other device. By default, no order between devices is defined, and
472 all devices will resynchronize in parallel. Depending on the
473 configuration of the lower-level devices, and the available network
474 and disk bandwidth, this can slow down the overall resync process.
475 This option can be used to form a chain or tree of dependencies
476 among devices.
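
For example, to let all devices of resource r1 resynchronize only after
volume 0 of resource r0 has finished (resource and volume numbers are
illustrative):

   resource r1 {
      disk {
         resync-after r0/0;
      }
   }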
477
478 rs-discard-granularity byte
When rs-discard-granularity is set to a non-zero, positive value,
DRBD tries to do resync operations in requests of this size. If such
a block contains only zero bytes on the sync source node, the sync
target node will issue a discard/trim/unmap command for the area.

The value is constrained by the discard granularity of the backing
block device. If rs-discard-granularity is not a multiple of the
discard granularity of the backing block device, DRBD rounds it up.
The feature only becomes active if the backing block device reads
back zeroes after a discard command.

The default value is 0. This option is available since 8.4.7.
492
493 discard-zeroes-if-aligned {yes | no}
494
495 There are several aspects to discard/trim/unmap support on linux
496 block devices. Even if discard is supported in general, it may fail
497 silently, or may partially ignore discard requests. Devices also
498 announce whether reading from unmapped blocks returns defined data
499 (usually zeroes), or undefined data (possibly old data, possibly
500 garbage).
501
502 If on different nodes, DRBD is backed by devices with differing
503 discard characteristics, discards may lead to data divergence (old
504 data or garbage left over on one backend, zeroes due to unmapped
505 areas on the other backend). Online verify would now potentially
506 report tons of spurious differences. While probably harmless for
507 most use cases (fstrim on a file system), DRBD cannot have that.
508
To play it safe, we have to disable discard support if our local
backend (on a Primary) does not support "discard_zeroes_data=true".
We also have to translate discards to explicit zero-out on the
receiving side, unless the receiving side (Secondary) supports
"discard_zeroes_data=true", thereby allocating areas that were
supposed to be unmapped.
515
516 There are some devices (notably the LVM/DM thin provisioning) that
517 are capable of discard, but announce discard_zeroes_data=false. In
518 the case of DM-thin, discards aligned to the chunk size will be
519 unmapped, and reading from unmapped sectors will return zeroes.
520 However, unaligned partial head or tail areas of discard requests
521 will be silently ignored.
522
523 If we now add a helper to explicitly zero-out these unaligned
524 partial areas, while passing on the discard of the aligned full
525 chunks, we effectively achieve discard_zeroes_data=true on such
526 devices.
527
528 Setting discard-zeroes-if-aligned to yes will allow DRBD to use
529 discards, and to announce discard_zeroes_data=true, even on
530 backends that announce discard_zeroes_data=false.
531
532 Setting discard-zeroes-if-aligned to no will cause DRBD to always
533 fall-back to zero-out on the receiving side, and to not even
534 announce discard capabilities on the Primary, if the respective
535 backend announces discard_zeroes_data=false.
536
537 We used to ignore the discard_zeroes_data setting completely. To
538 not break established and expected behaviour, and suddenly cause
539 fstrim on thin-provisioned LVs to run out-of-space instead of
540 freeing up space, the default value is yes.
541
542 This option is available since 8.4.7.
543
544 Section peer-device-options Parameters
545 Please note that you open the section with the disk keyword.
546
547 c-delay-target delay_target,
548 c-fill-target fill_target,
549 c-max-rate max_rate,
550 c-plan-ahead plan_time
551 Dynamically control the resync speed. This mechanism is enabled by
552 setting the c-plan-ahead parameter to a positive value. The goal is
553 to either fill the buffers along the data path with a defined
554 amount of data if c-fill-target is defined, or to have a defined
555 delay along the path if c-delay-target is defined. The maximum
556 bandwidth is limited by the c-max-rate parameter.
557
558 The c-plan-ahead parameter defines how fast drbd adapts to changes
559 in the resync speed. It should be set to five times the network
560 round-trip time or more. Common values for c-fill-target for
561 "normal" data paths range from 4K to 100K. If drbd-proxy is used,
562 it is advised to use c-delay-target instead of c-fill-target. The
563 c-delay-target parameter is used if the c-fill-target parameter is
564 undefined or set to 0. The c-delay-target parameter should be set
565 to five times the network round-trip time or more. The c-max-rate
566 option should be set to either the bandwidth available between the
567 DRBD-hosts and the machines hosting DRBD-proxy, or to the available
568 disk bandwidth.
569
570 The default values of these parameters are: c-plan-ahead = 20 (in
571 units of 0.1 seconds), c-fill-target = 0 (in units of sectors),
572 c-delay-target = 1 (in units of 0.1 seconds), and c-max-rate =
573 102400 (in units of KiB/s).
574
575 Dynamic resync speed control is available since DRBD 8.3.9.
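
A disk (peer-device-options) section enabling the dynamic resync
controller might look like this; the values are illustrative and should
be tuned to your network and storage:

   resource r0 {
      disk {
         c-plan-ahead   20;
         c-fill-target  100K;
         c-max-rate     100M;
         c-min-rate     4M;
      }
   }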
576
577 c-min-rate min_rate
578 A node which is primary and sync-source has to schedule application
579 I/O requests and resync I/O requests. The c-min-rate parameter
580 limits how much bandwidth is available for resync I/O; the
581 remaining bandwidth is used for application I/O.
582
583 A c-min-rate value of 0 means that there is no limit on the resync
584 I/O bandwidth. This can slow down application I/O significantly.
585 Use a value of 1 (1 KiB/s) for the lowest possible resync rate.
586
587 The default value of c-min-rate is 4096, in units of KiB/s.
588
589 resync-rate rate
590
591 Define how much bandwidth DRBD may use for resynchronizing. DRBD
592 allows "normal" application I/O even during a resync. If the resync
593 takes up too much bandwidth, application I/O can become very slow.
This parameter makes it possible to avoid that. Please note that this
option only works when the dynamic resync controller is disabled.
596
597 Section global Parameters
598 dialog-refresh time
599
600 The DRBD init script can be used to configure and start DRBD
601 devices, which can involve waiting for other cluster nodes. While
602 waiting, the init script shows the remaining waiting time. The
603 dialog-refresh defines the number of seconds between updates of
604 that countdown. The default value is 1; a value of 0 turns off the
605 countdown.
606
607 disable-ip-verification
608 Normally, DRBD verifies that the IP addresses in the configuration
609 match the host names. Use the disable-ip-verification parameter to
610 disable these checks.
611
612 usage-count {yes | no | ask}
As explained on DRBD's Online Usage Counter[2] web page, DRBD
614 includes a mechanism for anonymously counting how many
615 installations are using which versions of DRBD. The results are
616 available on the web page for anyone to see.
617
618 This parameter defines if a cluster node participates in the usage
619 counter; the supported values are yes, no, and ask (ask the user,
620 the default).
621
622 We would like to ask users to participate in the online usage
623 counter as this provides us valuable feedback for steering the
624 development of DRBD.
625
626 udev-always-use-vnr
627 When udev asks drbdadm for a list of device related symlinks,
628 drbdadm would suggest symlinks with differing naming conventions,
629 depending on whether the resource has explicit volume VNR { }
630 definitions, or only one single volume with the implicit volume
631 number 0:
632
633 # implicit single volume without "volume 0 {}" block
634 DEVICE=drbd<minor>
635 SYMLINK_BY_RES=drbd/by-res/<resource-name>
636 SYMLINK_BY_DISK=drbd/by-disk/<backing-disk-name>
637
638 # explicit volume definition: volume VNR { }
639 DEVICE=drbd<minor>
640 SYMLINK_BY_RES=drbd/by-res/<resource-name>/VNR
641 SYMLINK_BY_DISK=drbd/by-disk/<backing-disk-name>
642
If you define this parameter in the global section, drbdadm will
always add the .../VNR part, regardless of whether the volume
definition was implicit or explicit.

For legacy backward compatibility, this is off by default, but we
recommend enabling it.
649
650 Section handlers Parameters
651 after-resync-target cmd
652
Called on a resync target when a node's state changes from
Inconsistent to Consistent as a resync finishes. This handler can
655 be used for removing the snapshot created in the
656 before-resync-target handler.
657
658 before-resync-target cmd
659
660 Called on a resync target before a resync begins. This handler can
661 be used for creating a snapshot of the lower-level device for the
662 duration of the resync: if the resync source becomes unavailable
663 during a resync, reverting to the snapshot can restore a consistent
664 state.
665
666 before-resync-source cmd
667
668 Called on a resync source before a resync begins.
669
670 out-of-sync cmd
671
672 Called on all nodes after a verify finishes and out-of-sync blocks
673 were found. This handler is mainly used for monitoring purposes. An
674 example would be to call a script that sends an alert SMS.
675
676 quorum-lost cmd
677
678 Called on a Primary that lost quorum. This handler is usually used
679 to reboot the node if it is not possible to restart the application
680 that uses the storage on top of DRBD.
681
682 fence-peer cmd
683
684 Called when a node should fence a resource on a particular peer.
685 The handler should not use the same communication path that DRBD
686 uses for talking to the peer.
687
688 unfence-peer cmd
689
690 Called when a node should remove fencing constraints from other
691 nodes.
692
693 initial-split-brain cmd
694
695 Called when DRBD connects to a peer and detects that the peer is in
696 a split-brain state with the local node. This handler is also
697 called for split-brain scenarios which will be resolved
698 automatically.
699
700 local-io-error cmd
701
702 Called when an I/O error occurs on a lower-level device.
703
704 pri-lost cmd
705
706 The local node is currently primary, but DRBD believes that it
707 should become a sync target. The node should give up its primary
708 role.
709
710 pri-lost-after-sb cmd
711
712 The local node is currently primary, but it has lost the
713 after-split-brain auto recovery procedure. The node should be
714 abandoned.
715
716 pri-on-incon-degr cmd
717
718 The local node is primary, and neither the local lower-level device
719 nor a lower-level device on a peer is up to date. (The primary has
720 no device to read from or to write to.)
721
722 split-brain cmd
723
724 DRBD has detected a split-brain situation which could not be
725 resolved automatically. Manual recovery is necessary. This handler
726 can be used to call for administrator attention.
727
728 Section net Parameters
729 after-sb-0pri policy
730 Define how to react if a split-brain scenario is detected and none
731 of the two nodes is in primary role. (We detect split-brain
732 scenarios when two nodes connect; split-brain decisions are always
733 between two nodes.) The defined policies are:
734
735 disconnect
736 No automatic resynchronization; simply disconnect.
737
738 discard-younger-primary,
739 discard-older-primary
740 Resynchronize from the node which became primary first
741 (discard-younger-primary) or last (discard-older-primary). If
742 both nodes became primary independently, the
743 discard-least-changes policy is used.
744
745 discard-zero-changes
746 If only one of the nodes wrote data since the split brain
747 situation was detected, resynchronize from this node to the
748 other. If both nodes wrote data, disconnect.
749
750 discard-least-changes
751 Resynchronize from the node with more modified blocks.
752
753 discard-node-nodename
754 Always resynchronize to the named node.
755
756 after-sb-1pri policy
757 Define how to react if a split-brain scenario is detected, with one
758 node in primary role and one node in secondary role. (We detect
759 split-brain scenarios when two nodes connect, so split-brain
760 decisions are always among two nodes.) The defined policies are:
761
762 disconnect
763 No automatic resynchronization, simply disconnect.
764
765 consensus
766 Discard the data on the secondary node if the after-sb-0pri
767 algorithm would also discard the data on the secondary node.
768 Otherwise, disconnect.
769
770 violently-as0p
771 Always take the decision of the after-sb-0pri algorithm, even
772 if it causes an erratic change of the primary's view of the
773 data. This is only useful if a single-node file system (i.e.,
774 not OCFS2 or GFS) with the allow-two-primaries flag is used.
775 This option can cause the primary node to crash, and should not
776 be used.
777
778 discard-secondary
779 Discard the data on the secondary node.
780
781 call-pri-lost-after-sb
782 Always take the decision of the after-sb-0pri algorithm. If the
783 decision is to discard the data on the primary node, call the
784 pri-lost-after-sb handler on the primary node.
785
786 after-sb-2pri policy
787 Define how to react if a split-brain scenario is detected and both
788 nodes are in primary role. (We detect split-brain scenarios when
789 two nodes connect, so split-brain decisions are always among two
790 nodes.) The defined policies are:
791
792 disconnect
793 No automatic resynchronization, simply disconnect.
794
795 violently-as0p
796 See the violently-as0p policy for after-sb-1pri.
797
798 call-pri-lost-after-sb
799 Call the pri-lost-after-sb helper program on one of the
800 machines unless that machine can demote to secondary. The
801 helper program is expected to reboot the machine, which brings
802 the node into a secondary role. Which machine runs the helper
803 program is determined by the after-sb-0pri strategy.
804
805 allow-two-primaries
806
807 The most common way to configure DRBD devices is to allow only one
808 node to be primary (and thus writable) at a time.
809
810 In some scenarios it is preferable to allow two nodes to be primary
811 at once; a mechanism outside of DRBD then must make sure that
812 writes to the shared, replicated device happen in a coordinated
813 way. This can be done with a shared-storage cluster file system
814 like OCFS2 and GFS, or with virtual machine images and a virtual
815 machine manager that can migrate virtual machines between physical
816 machines.
817
818 The allow-two-primaries parameter tells DRBD to allow two nodes to
819 be primary at the same time. Never enable this option when using a
820 non-distributed file system; otherwise, data corruption and node
821 crashes will result!
822
823 always-asbp
824 Normally the automatic after-split-brain policies are only used if
825 current states of the UUIDs do not indicate the presence of a third
826 node.
827
828 With this option you request that the automatic after-split-brain
829 policies are used as long as the data sets of the nodes are somehow
830 related. This might cause a full sync, if the UUIDs indicate the
831 presence of a third node. (Or double faults led to strange UUID
832 sets.)
833
834 connect-int time
835
836 As soon as a connection between two nodes is configured with
837 drbdsetup connect, DRBD immediately tries to establish the
838 connection. If this fails, DRBD waits for connect-int seconds and
839 then repeats. The default value of connect-int is 10 seconds.
840
841 cram-hmac-alg hash-algorithm
842
843 Configure the hash-based message authentication code (HMAC) or
844 secure hash algorithm to use for peer authentication. The kernel
845 supports a number of different algorithms, some of which may be
846 loadable as kernel modules. See the shash algorithms listed in
847 /proc/crypto. By default, cram-hmac-alg is unset. Peer
848 authentication also requires a shared-secret to be configured.
849
850 csums-alg hash-algorithm
851
852 Normally, when two nodes resynchronize, the sync target requests a
853 piece of out-of-sync data from the sync source, and the sync source
854 sends the data. With many usage patterns, a significant number of
855 those blocks will actually be identical.
856
857 When a csums-alg algorithm is specified, when requesting a piece of
858 out-of-sync data, the sync target also sends along a hash of the
859 data it currently has. The sync source compares this hash with its
860 own version of the data. It sends the sync target the new data if
861 the hashes differ, and tells it that the data are the same
862 otherwise. This reduces the network bandwidth required, at the cost
863 of higher cpu utilization and possibly increased I/O on the sync
864 target.
865
866 The csums-alg can be set to one of the secure hash algorithms
867 supported by the kernel; see the shash algorithms listed in
868 /proc/crypto. By default, csums-alg is unset.
869
870 csums-after-crash-only
871
872 Enabling this option (and csums-alg, above) makes it possible to
use the checksum-based resync only for the first resync after a
primary crash, but not for later "network hiccups".

In most cases, blocks that are marked as need-to-be-resynced are in
fact changed, so calculating checksums, and both reading and
writing the blocks on the resync target, is all effective overhead.
879
880 The advantage of checksum based resync is mostly after primary
881 crash recovery, where the recovery marked larger areas (those
882 covered by the activity log) as need-to-be-resynced, just in case.
883 Introduced in 8.4.5.
884
885 data-integrity-alg alg
886 DRBD normally relies on the data integrity checks built into the
887 TCP/IP protocol, but if a data integrity algorithm is configured,
888 it will additionally use this algorithm to make sure that the data
889 received over the network match what the sender has sent. If a data
890 integrity error is detected, DRBD will close the network connection
891 and reconnect, which will trigger a resync.
892
893 The data-integrity-alg can be set to one of the secure hash
894 algorithms supported by the kernel; see the shash algorithms listed
895 in /proc/crypto. By default, this mechanism is turned off.
896
897 Because of the CPU overhead involved, we recommend not to use this
898 option in production environments. Also see the notes on data
899 integrity below.
900
901 fencing fencing_policy
902
903 Fencing is a preventive measure to avoid situations where both
904 nodes are primary and disconnected. This is also known as a
905 split-brain situation. DRBD supports the following fencing
906 policies:
907
908 dont-care
909 No fencing actions are taken. This is the default policy.
910
911 resource-only
912 If a node becomes a disconnected primary, it tries to fence the
913 peer. This is done by calling the fence-peer handler. The
914 handler is supposed to reach the peer over an alternative
915 communication path and call 'drbdadm outdate minor' there.
916
917 resource-and-stonith
918 If a node becomes a disconnected primary, it freezes all its IO
919 operations and calls its fence-peer handler. The fence-peer
920 handler is supposed to reach the peer over an alternative
921 communication path and call 'drbdadm outdate minor' there. In
922 case it cannot do that, it should stonith the peer. IO is
923 resumed as soon as the situation is resolved. In case the
924 fence-peer handler fails, I/O can be resumed manually with
925 'drbdadm resume-io'.
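
A fencing setup typically combines the fencing policy in the net section
with a fence-peer handler. The handler scripts shown below ship with
drbd-utils for Pacemaker clusters; any script that reaches the peer over
an alternative communication path may be used instead:

   resource r0 {
      net {
         fencing resource-and-stonith;
      }
      handlers {
         fence-peer   "/usr/lib/drbd/crm-fence-peer.9.sh";
         unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
      }
   }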
926
927 ko-count number
928
929 If a secondary node fails to complete a write request in ko-count
930 times the timeout parameter, it is excluded from the cluster. The
931 primary node then sets the connection to this secondary node to
932 Standalone. To disable this feature, you should explicitly set it
933 to 0; defaults may change between versions.
934
935 max-buffers number
936
937 Limits the memory usage per DRBD minor device on the receiving
938 side, or for internal buffers during resync or online-verify. Unit
939 is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible
940 setting is hard coded to 32 (=128 KiB). These buffers are used to
941 hold data blocks while they are written to/read from disk. To avoid
942 possible distributed deadlocks on congestion, this setting is used
943 as a throttle threshold rather than a hard limit. Once more than
944 max-buffers pages are in use, further allocation from this pool is
945 throttled. You want to increase max-buffers if you cannot saturate
946 the IO backend on the receiving side.
947
948 max-epoch-size number
949
950 Define the maximum number of write requests DRBD may issue before
951 issuing a write barrier. The default value is 2048, with a minimum
952 of 1 and a maximum of 20000. Setting this parameter to a value
953 below 10 is likely to decrease performance.
954
955 on-congestion policy,
956 congestion-fill threshold,
957 congestion-extents threshold
958 By default, DRBD blocks when the TCP send queue is full. This
959 prevents applications from generating further write requests until
960 more buffer space becomes available again.
961
962 When DRBD is used together with DRBD-proxy, it can be better to use
963 the pull-ahead on-congestion policy, which can switch DRBD into
964 ahead/behind mode before the send queue is full. DRBD then records
965 the differences between itself and the peer in its bitmap, but it
966 no longer replicates them to the peer. When enough buffer space
967 becomes available again, the node resynchronizes with the peer and
968 switches back to normal replication.
969
970 This has the advantage of not blocking application I/O even when
971 the queues fill up, and the disadvantage that peer nodes can fall
972 behind much further. Also, while resynchronizing, peer nodes will
973 become inconsistent.
974
975 The available congestion policies are block (the default) and
976 pull-ahead. The congestion-fill parameter defines how much data is
977 allowed to be "in flight" in this connection. The default value is
978 0, which disables this mechanism of congestion control, with a
979 maximum of 10 GiBytes. The congestion-extents parameter defines how
980 many bitmap extents may be active before switching into
981 ahead/behind mode, with the same default and limits as the
982 al-extents parameter. The congestion-extents parameter is effective
983 only when set to a value smaller than al-extents.
984
985 Ahead/behind mode is available since DRBD 8.3.10.
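
A net section prepared for congestion on a DRBD-proxy link could, for
instance, look like this (the thresholds are illustrative):

   resource r0 {
      net {
         on-congestion      pull-ahead;
         congestion-fill    2G;
         congestion-extents 1000;
      }
   }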
986
987 ping-int interval
988
989 When the TCP/IP connection to a peer is idle for more than ping-int
990 seconds, DRBD will send a keep-alive packet to make sure that a
991 failed peer or network connection is detected reasonably soon. The
992 default value is 10 seconds, with a minimum of 1 and a maximum of
993 120 seconds. The unit is seconds.
994
995 ping-timeout timeout
996
997 Define the timeout for replies to keep-alive packets. If the peer
998 does not reply within ping-timeout, DRBD will close and try to
999 reestablish the connection. The default value is 0.5 seconds, with
1000 a minimum of 0.1 seconds and a maximum of 3 seconds. The unit is
1001 tenths of a second.
1002
1003 socket-check-timeout timeout
In setups involving a DRBD-proxy and connections that experience a
lot of buffer-bloat, it might be necessary to set ping-timeout to an
unusually high value. By default DRBD uses the same value to wait
until a newly established TCP connection is stable. Since the
DRBD-proxy is usually located in the same data center, such a long
wait time may hinder DRBD's connect process.

In such setups, socket-check-timeout should be set to at least the
round-trip time between DRBD and DRBD-proxy, i.e., in most cases
to 1.

The default unit is tenths of a second; the default value is 0
(which causes DRBD to use the value of ping-timeout instead).
Introduced in 8.4.5.
1018
1019 protocol name
1020 Use the specified protocol on this connection. The supported
1021 protocols are:
1022
1023 A
1024 Writes to the DRBD device complete as soon as they have reached
1025 the local disk and the TCP/IP send buffer.
1026
1027 B
1028 Writes to the DRBD device complete as soon as they have reached
1029 the local disk, and all peers have acknowledged the receipt of
1030 the write requests.
1031
1032 C
1033 Writes to the DRBD device complete as soon as they have reached
1034 the local and all remote disks.
1035
1036
1037 rcvbuf-size size
1038
1039 Configure the size of the TCP/IP receive buffer. A value of 0 (the
1040 default) causes the buffer size to adjust dynamically. This
1041 parameter usually does not need to be set, but it can be set to a
1042 value up to 10 MiB. The default unit is bytes.
1043
1044 rr-conflict policy
1045 This option helps to solve the cases when the outcome of the resync
1046 decision is incompatible with the current role assignment in the
1047 cluster. The defined policies are:
1048
1049 disconnect
1050 No automatic resynchronization, simply disconnect.
1051
1052 retry-connect
Disconnect now, and retry connecting immediately afterwards.
1054
1055 violently
1056 Resync to the primary node is allowed, violating the assumption
1057 that data on a block device are stable for one of the nodes.
1058 Do not use this option, it is dangerous.
1059
1060 call-pri-lost
1061 Call the pri-lost handler on one of the machines. The handler
1062 is expected to reboot the machine, which puts it into secondary
1063 role.
1064
1065 shared-secret secret
1066
1067 Configure the shared secret used for peer authentication. The
1068 secret is a string of up to 64 characters. Peer authentication also
1069 requires the cram-hmac-alg parameter to be set.
1070
1071 sndbuf-size size
1072
1073 Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13 /
1074 8.2.7, a value of 0 (the default) causes the buffer size to adjust
1075 dynamically. Values below 32 KiB are harmful to the throughput on
1076 this connection. Large buffer sizes can be useful especially when
1077 protocol A is used over high-latency networks; the maximum value
1078 supported is 10 MiB.
1079
1080 tcp-cork
1081 By default, DRBD uses the TCP_CORK socket option to prevent the
1082 kernel from sending partial messages; this results in fewer and
1083 bigger packets on the network. Some network stacks can perform
1084 worse with this optimization. On these, the tcp-cork parameter can
1085 be used to turn this optimization off.
1086
1087 timeout time
1088
1089 Define the timeout for replies over the network: if a peer node
1090 does not send an expected reply within the specified timeout, it is
1091 considered dead and the TCP/IP connection is closed. The timeout
1092 value must be lower than connect-int and lower than ping-int. The
1093 default is 6 seconds; the value is specified in tenths of a second.
1094
1095 transport type
1096
With DRBD 9, the network transport used by DRBD is loaded as a
separate module. With this option you can specify which transport
1099 and module to load. At present only two options exist, tcp and
1100 rdma. Please note that currently the RDMA transport module is only
1101 available with a license purchased from LINBIT. Default is tcp.
1102
1103 use-rle
1104
1105 Each replicated device on a cluster node has a separate bitmap for
1106 each of its peer devices. The bitmaps are used for tracking the
1107 differences between the local and peer device: depending on the
1108 cluster state, a disk range can be marked as different from the
1109 peer in the device's bitmap, in the peer device's bitmap, or in
1110 both bitmaps. When two cluster nodes connect, they exchange each
1111 other's bitmaps, and they each compute the union of the local and
1112 peer bitmap to determine the overall differences.
1113
1114 Bitmaps of very large devices are also relatively large, but they
1115 usually compress very well using run-length encoding. This can save
1116 time and bandwidth for the bitmap transfers.
1117
1118 The use-rle parameter determines if run-length encoding should be
1119 used. It is on by default since DRBD 8.4.0.
1120
1121 verify-alg hash-algorithm
1122 Online verification (drbdadm verify) computes and compares
1123 checksums of disk blocks (i.e., hash values) in order to detect if
1124 they differ. The verify-alg parameter determines which algorithm to
1125 use for these checksums. It must be set to one of the secure hash
1126 algorithms supported by the kernel before online verify can be
1127 used; see the shash algorithms listed in /proc/crypto.
1128
We recommend scheduling online verifications regularly during
low-load periods, for example once a month; a sketch follows below.
Also see the notes on data integrity below.
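
A minimal configuration for online verify, together with a hypothetical
cron-driven invocation, could look like this (sha256 is assumed to be
available as a kernel shash algorithm; see /proc/crypto):

   resource r0 {
      net {
         verify-alg sha256;
      }
   }

   # run, for example, once a month during a low-load period:
   # drbdadm verify r0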
1132
1133 allow-remote-read bool-value
1134 Allows or disallows DRBD to read from a peer node.
1135
1136 When the disk of a primary node is detached, DRBD will try to
1137 continue reading and writing from another node in the cluster. For
1138 this purpose, it searches for nodes with up-to-date data, and uses
1139 any found node to resume operations. In some cases it may not be
1140 desirable to read back data from a peer node, because the node
1141 should only be used as a replication target. In this case, the
1142 allow-remote-read parameter can be set to no, which would prohibit
1143 this node from reading data from the peer node.
1144
1145 The allow-remote-read parameter is available since DRBD 9.0.19, and
1146 defaults to yes.
1147
1148 Section on Parameters
1149 address [address-family] address:port
1150
1151 Defines the address family, address, and port of a connection
1152 endpoint.
1153
1154 The address families ipv4, ipv6, ssocks (Dolphin Interconnect
1155 Solutions' "super sockets"), sdp (Infiniband Sockets Direct
1156 Protocol), and sci are supported (sci is an alias for ssocks). If
1157 no address family is specified, ipv4 is assumed. For all address
families except ipv6, the address is specified in IPv4 address
1159 notation (for example, 1.2.3.4). For ipv6, the address is enclosed
1160 in brackets and uses IPv6 address notation (for example,
1161 [fd01:2345:6789:abcd::1]). The port is always specified as a
1162 decimal number from 1 to 65535.
1163
1164 On each host, the port numbers must be unique for each address;
1165 ports cannot be shared.
1166
1167 node-id value
1168
1169 Defines the unique node identifier for a node in the cluster. Node
1170 identifiers are used to identify individual nodes in the network
1171 protocol, and to assign bitmap slots to nodes in the metadata.
1172
Node identifiers can only be reassigned in a cluster when the
1174 cluster is down. It is essential that the node identifiers in the
1175 configuration and in the device metadata are changed consistently
1176 on all hosts. To change the metadata, dump the current state with
1177 drbdmeta dump-md, adjust the bitmap slot assignment, and update the
1178 metadata with drbdmeta restore-md.
1179
1180 The node-id parameter exists since DRBD 9. Its value ranges from 0
1181 to 16; there is no default.
1182
1183 Section options Parameters (Resource Options)
1184 auto-promote bool-value
1185 A resource must be promoted to primary role before any of its
1186 devices can be mounted or opened for writing.
1187
1188 Before DRBD 9, this could only be done explicitly ("drbdadm
1189 primary"). Since DRBD 9, the auto-promote parameter allows to
1190 automatically promote a resource to primary role when one of its
1191 devices is mounted or opened for writing. As soon as all devices
1192 are unmounted or closed with no more remaining users, the role of
1193 the resource changes back to secondary.
1194
1195 Automatic promotion only succeeds if the cluster state allows it
1196 (that is, if an explicit drbdadm primary command would succeed).
1197 Otherwise, mounting or opening the device fails as it already did
1198 before DRBD 9: the mount(2) system call fails with errno set to
1199 EROFS (Read-only file system); the open(2) system call fails with
1200 errno set to EMEDIUMTYPE (wrong medium type).
1201
1202 Irrespective of the auto-promote parameter, if a device is promoted
1203 explicitly (drbdadm primary), it also needs to be demoted
1204 explicitly (drbdadm secondary).
1205
1206 The auto-promote parameter is available since DRBD 9.0.0, and
1207 defaults to yes.
1208
1209 cpu-mask cpu-mask
1210
1211 Set the cpu affinity mask for DRBD kernel threads. The cpu mask is
1212 specified as a hexadecimal number. The default value is 0, which
1213 lets the scheduler decide which kernel threads run on which CPUs.
1214 CPU numbers in cpu-mask which do not exist in the system are
1215 ignored.
1216
1217 on-no-data-accessible policy
1218 Determine how to deal with I/O requests when the requested data is
1219 not available locally or remotely (for example, when all disks have
1220 failed). The defined policies are:
1221
1222 io-error
1223 System calls fail with errno set to EIO.
1224
1225 suspend-io
1226 The resource suspends I/O. I/O can be resumed by (re)attaching
1227 the lower-level device, by connecting to a peer which has
1228 access to the data, or by forcing DRBD to resume I/O with
1229 drbdadm resume-io res. When no data is available, forcing I/O
1230 to resume will result in the same behavior as the io-error
1231 policy.
1232
1233 This setting is available since DRBD 8.3.9; the default policy is
1234 io-error.
1235
1236 peer-ack-window value
1237
1238 On each node and for each device, DRBD maintains a bitmap of the
1239 differences between the local and remote data for each peer device.
1240 For example, in a three-node setup (nodes A, B, C) each with a
1241 single device, every node maintains one bitmap for each of its
1242 peers.
1243
1244 When nodes receive write requests, they know how to update the
1245 bitmaps for the writing node, but not how to update the bitmaps
1246 between themselves. In this example, when a write request
1247 propagates from node A to B and C, nodes B and C know that they
1248 have the same data as node A, but not whether or not they both have
1249 the same data.
1250
1251 As a remedy, the writing node occasionally sends peer-ack packets
1252 to its peers which tell them which state they are in relative to
1253 each other.
1254
1255 The peer-ack-window parameter specifies how much data a primary
1256 node may send before sending a peer-ack packet. A low value causes
1257 increased network traffic; a high value causes less network traffic
1258 but higher memory consumption on secondary nodes and higher resync
1259 times between the secondary nodes after primary node failures.
1260 (Note: peer-ack packets may be sent due to other reasons as well,
1261 e.g. membership changes or expiry of the peer-ack-delay timer.)
1262
1263 The default value for peer-ack-window is 2 MiB, the default unit is
1264 sectors. This option is available since 9.0.0.
1265
1266 peer-ack-delay expiry-time
1267
1268 If after the last finished write request no new write request gets
1269 issued for expiry-time, then a peer-ack packet is sent. If a new
1270 write request is issued before the timer expires, the timer gets
1271 reset to expiry-time. (Note: peer-ack packets may be sent due to
1272 other reasons as well, e.g. membership changes or the
1273 peer-ack-window option.)
1274
1275 This parameter may influence resync behavior on remote nodes. Peer
nodes need to wait until they receive a peer-ack before releasing a
lock on an AL-extent. Resync operations between peers may need to
wait for these locks.
1279
1280 The default value for peer-ack-delay is 100 milliseconds, the
1281 default unit is milliseconds. This option is available since 9.0.0.
1282
1283 quorum value
1284
When activated, a cluster partition requires quorum in order to
modify the replicated data set. That means a node in the cluster
partition can only be promoted to primary if the cluster partition
has quorum. Every node with a disk directly connected to the node
that should be promoted counts. If a primary node needs to execute a
write request, but the cluster partition has lost quorum, it will
freeze IO or reject the write request with an error (depending on
the on-no-quorum setting). Upon losing quorum, a primary always
invokes the quorum-lost handler. The handler is intended for
notification purposes; its return code is ignored.

The option's value can be set to off, majority, all or a numeric
value. If you set it to a numeric value, make sure that the value
is greater than half of your number of nodes. Quorum is a mechanism
to avoid data divergence; it can be used instead of fencing when
there are more than two replicas. It defaults to off.

If all missing nodes are marked as outdated, a partition always has
quorum, no matter how small it is. That is, if you disconnect all
secondary nodes gracefully, a single primary continues to operate.
The moment a single secondary is lost, however, it has to be assumed
that it forms a partition with all the missing outdated nodes. If
the local partition could then be smaller than that other partition,
quorum is lost at that moment.

In case you want to allow permanently diskless nodes to gain quorum,
it is recommended not to use majority or all. It is recommended to
specify an absolute number, since DRBD's heuristic to determine the
complete number of diskful nodes in the cluster is unreliable.
1314
1315 The quorum implementation is available starting with the DRBD
1316 kernel driver version 9.0.7.
1317
1318 quorum-minimum-redundancy value
1319
1320 This option sets the minimal required number of nodes with an
1321 UpToDate disk to allow the partition to gain quorum. This is a
1322 different requirement than the plain quorum option expresses.
1323
1324 The option's value might be set to off, majority, all or a numeric
1325 value. If you set it to a numeric value, make sure that the value
1326 is greater than half of your number of nodes.
1327
In case you want to allow permanently diskless nodes to gain quorum,
it is recommended not to use majority or all. It is recommended to
specify an absolute number, since DRBD's heuristic to determine the
complete number of diskful nodes in the cluster is unreliable.
1332
1333 This option is available starting with the DRBD kernel driver
1334 version 9.0.10.
1335
1336 on-no-quorum {io-error | suspend-io}
1337
By default, DRBD freezes IO on a device that has lost quorum. If
on-no-quorum is set to io-error, it instead completes all IO
operations with an error when quorum is lost.
1341
The on-no-quorum option is available starting with the DRBD kernel
driver version 9.0.8.
1344
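Taken together, a configuration that prefers IO errors over frozen IO
(for example so that a cluster manager can fail over quickly) might be
sketched as follows; the concrete values are illustrative only:

    resource r0 {
        options {
            quorum majority;
            quorum-minimum-redundancy 2;   # require at least two UpToDate disks
            on-no-quorum io-error;         # complete IO with errors instead of freezing
        }
    }
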
1345 Section startup Parameters
1346 The parameters in this section define the behavior of DRBD at system
1347 startup time, in the DRBD init script. They have no effect once the
1348 system is up and running.
1349
1350 degr-wfc-timeout timeout
1351
1352 Define how long to wait until all peers are connected in case the
1353 cluster consisted of a single node only when the system went down.
1354 This parameter is usually set to a value smaller than wfc-timeout.
1355 The assumption here is that peers which were unreachable before a
1356 reboot are less likely to be reachable after the reboot, so waiting
1357 is less likely to help.
1358
1359 The timeout is specified in seconds. The default value is 0, which
1360 stands for an infinite timeout. Also see the wfc-timeout parameter.
1361
1362 outdated-wfc-timeout timeout
1363
1364 Define how long to wait until all peers are connected if all peers
1365 were outdated when the system went down. This parameter is usually
1366 set to a value smaller than wfc-timeout. The assumption here is
1367 that an outdated peer cannot have become primary in the meantime,
1368 so we don't need to wait for it as long as for a node which was
1369 alive before.
1370
1371 The timeout is specified in seconds. The default value is 0, which
1372 stands for an infinite timeout. Also see the wfc-timeout parameter.
1373
1374 stacked-timeouts
1375 On stacked devices, the wfc-timeout and degr-wfc-timeout parameters
1376 in the configuration are usually ignored, and both timeouts are set
1377 to twice the connect-int timeout. The stacked-timeouts parameter
1378 tells DRBD to use the wfc-timeout and degr-wfc-timeout parameters
1379 as defined in the configuration, even on stacked devices. Only use
1380 this parameter if the peer of the stacked resource is usually not
1381 available, or will not become primary. Incorrect use of this
1382 parameter can lead to unexpected split-brain scenarios.
1383
1384 wait-after-sb
1385 This parameter causes DRBD to continue waiting in the init script
1386 even when a split-brain situation has been detected, and the nodes
1387 therefore refuse to connect to each other.
1388
1389 wfc-timeout timeout
1390
1391 Define how long the init script waits until all peers are
1392 connected. This can be useful in combination with a cluster manager
1393 which cannot manage DRBD resources: when the cluster manager
1394 starts, the DRBD resources will already be up and running. With a
1395 more capable cluster manager such as Pacemaker, it makes more sense
1396 to let the cluster manager control DRBD resources. The timeout is
1397 specified in seconds. The default value is 0, which stands for an
1398 infinite timeout. Also see the degr-wfc-timeout parameter.
1399
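As an illustration, a startup section limiting how long the init script
waits could look like the following sketch (the timeouts are arbitrary
example values):

    resource r0 {
        startup {
            wfc-timeout          120;  # wait up to 120 seconds for all peers
            degr-wfc-timeout      60;  # shorter wait if the cluster was degraded at shutdown
            outdated-wfc-timeout  30;  # even shorter if all peers were outdated
        }
    }
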
1400 Section volume Parameters
1401 device /dev/drbdminor-number
1402
1403 Define the device name and minor number of a replicated block
1404 device. This is the device that applications are supposed to
access; in most cases, the device is not accessed directly, but
through a file system created on it. This parameter is required and
the standard device naming convention is assumed.
1408
1409 In addition to this device, udev will create
1410 /dev/drbd/by-res/resource/volume and
1411 /dev/drbd/by-disk/lower-level-device symlinks to the device.
1412
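As a hypothetical illustration (all names made up): for a resource
named web with volume 0 whose lower-level device is /dev/sdb3, udev
would create symlinks along the lines of /dev/drbd/by-res/web/0 and
/dev/drbd/by-disk/sdb3, both pointing to the replicated device.
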
1413 disk {[disk] | none}
1414
1415 Define the lower-level block device that DRBD will use for storing
1416 the actual data. While the replicated drbd device is configured,
1417 the lower-level device must not be used directly. Even read-only
1418 access with tools like dumpe2fs(8) and similar is not allowed. The
1419 keyword none specifies that no lower-level block device is
1420 configured; this also overrides inheritance of the lower-level
1421 device.
1422
1423 meta-disk internal,
1424 meta-disk device,
1425 meta-disk device [index]
1426
1427 Define where the metadata of a replicated block device resides: it
1428 can be internal, meaning that the lower-level device contains both
1429 the data and the metadata, or on a separate device.
1430
1431 When the index form of this parameter is used, multiple replicated
1432 devices can share the same metadata device, each using a separate
1433 index. Each index occupies 128 MiB of data, which corresponds to a
replicated device size of at most 4 TiB with two cluster nodes. We
recommend not sharing metadata devices anymore; instead, use the
LVM volume manager to create metadata devices as needed.
1437
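For illustration only, the (now discouraged) index form for two volumes
sharing one external metadata device could be written roughly as in the
sketch below; all device names are placeholders:

    volume 0 {
        device    /dev/drbd10;
        disk      /dev/sdb1;
        meta-disk /dev/sdc1[0];   # index 0 on the shared metadata device
    }
    volume 1 {
        device    /dev/drbd11;
        disk      /dev/sdb2;
        meta-disk /dev/sdc1[1];   # index 1 on the same metadata device
    }
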
1438 When the index form of this parameter is not used, the size of the
1439 lower-level device determines the size of the metadata. The size
1440 needed is 36 KiB + (size of lower-level device) / 32K * (number of
1441 nodes - 1). If the metadata device is bigger than that, the extra
1442 space is not used.
1443
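As a worked example of this formula: for a 1 TiB lower-level device in
a three-node cluster (two peers), the metadata needs roughly

    36 KiB + (1 TiB / 32 K) * (3 - 1) = 36 KiB + 32 MiB * 2 ≈ 64 MiB

so a metadata device (or logical volume) of about 65 MiB per volume
would be sufficient in this case.
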
1444 This parameter is required if a disk other than none is specified,
1445 and ignored if disk is set to none. A meta-disk parameter without a
1446 disk parameter is not allowed.
1447
NOTES ON DATA INTEGRITY
DRBD supports two different mechanisms for data integrity checking:
first, the data-integrity-alg network parameter allows adding a
checksum to the data sent over the network. Second, the online
verification mechanism (drbdadm verify and the verify-alg
parameter) allows checking for differences in the on-disk data.
1454
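As a sketch of how both mechanisms are enabled in the net section (the
algorithm names must be supported by the kernel's crypto API; crc32c
and sha1 are merely common choices, not requirements):

    resource r0 {
        net {
            data-integrity-alg crc32c;   # checksum every data packet sent over the network
            verify-alg         sha1;     # digest used by drbdadm verify for online verification
        }
    }
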
1455 Both mechanisms can produce false positives if the data is modified
1456 during I/O (i.e., while it is being sent over the network or written to
1457 disk). This does not always indicate a problem: for example, some file
1458 systems and applications do modify data under I/O for certain
1459 operations. Swap space can also undergo changes while under I/O.
1460
1461 Network data integrity checking tries to identify data modification
1462 during I/O by verifying the checksums on the sender side after sending
1463 the data. If it detects a mismatch, it logs an error. The receiver also
1464 logs an error when it detects a mismatch. Thus, an error logged only on
1465 the receiver side indicates an error on the network, and an error
1466 logged on both sides indicates data modification under I/O.
1467
1468 The most recent example of systematic data corruption was identified as
1469 a bug in the TCP offloading engine and driver of a certain type of GBit
1470 NIC in 2007: the data corruption happened on the DMA transfer from core
memory to the card. Because the TCP checksums were calculated on
the card, the TCP/IP protocol checksums did not reveal this
problem.
1473
VERSION
This document was revised for version 9.0.0 of the DRBD distribution.
1476
AUTHOR
Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1479 Ellenberg <lars.ellenberg@linbit.com>.
1480
REPORTING BUGS
Report bugs to <drbd-user@lists.linbit.com>.
1483
COPYRIGHT
Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1486 Lars Ellenberg. This is free software; see the source for copying
1487 conditions. There is NO warranty; not even for MERCHANTABILITY or
1488 FITNESS FOR A PARTICULAR PURPOSE.
1489
SEE ALSO
drbd(8), drbdsetup(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1492 Site[3]
1493
NOTES
1. DRBD User's Guide
1496 http://www.drbd.org/users-guide/
1497
2. Online Usage Counter
   http://usage.drbd.org
1502
1503 3. DRBD Web Site
1504 http://www.drbd.org/
1505
1506
1507
DRBD 9.0.x                     17 January 2018                   DRBD.CONF(5)