DRBD.CONF(5)                  Configuration Files                 DRBD.CONF(5)


NAME

       drbd.conf - DRBD Configuration Files

INTRODUCTION

       DRBD implements block devices which replicate their data to all nodes
       of a cluster. The actual data and associated metadata are usually
       stored redundantly on "ordinary" block devices on each cluster node.

       Replicated block devices are named /dev/drbd<minor> by default. They
       are grouped into resources, with one or more devices per resource.
       Replication among the devices in a resource takes place in
       chronological order. With DRBD, we refer to the devices inside a
       resource as volumes.

       In DRBD 9, a resource can be replicated between two or more cluster
       nodes. The connections between cluster nodes are point-to-point links,
       and use TCP or a TCP-like protocol. All nodes must be directly
       connected.

       DRBD consists of low-level user-space components which interact with
       the kernel and perform basic operations (drbdsetup, drbdmeta), a
       high-level user-space component which understands and processes the
       DRBD configuration and translates it into basic operations of the
       low-level components (drbdadm), and a kernel component.

       The default DRBD configuration consists of /etc/drbd.conf and of
       additional files included from there, usually global_common.conf and
       all *.res files inside /etc/drbd.d/. It has turned out to be useful to
       define each resource in a separate *.res file.

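       For illustration, the stock /etc/drbd.conf shipped with drbd-utils
       contains nothing but include statements along these lines (your
       distribution may differ):

           # /etc/drbd.conf
           include "drbd.d/global_common.conf";
           include "drbd.d/*.res";
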
       The configuration files are designed so that each cluster node can
       contain an identical copy of the entire cluster configuration. The
       host name of each node (uname -n) determines which parts of the
       configuration apply. It is highly recommended to keep the cluster
       configuration on all nodes in sync, either by copying it to all nodes
       manually or by automating the process with csync2 or a similar tool.

EXAMPLE CONFIGURATION FILE

           global {
                usage-count yes;
                udev-always-use-vnr;
           }
           resource r0 {
                 net {
                      cram-hmac-alg sha1;
                      shared-secret "FooFunFactory";
                 }
                 volume 0 {
                      device    /dev/drbd1;
                      disk      /dev/sda7;
                      meta-disk internal;
                 }
                 on alice {
                      node-id   0;
                      address   10.1.1.31:7000;
                 }
                 on bob {
                      node-id   1;
                      address   10.1.1.32:7000;
                 }
                 connection {
                      host      alice  port 7000;
                      host      bob    port 7000;
                      net {
                          protocol C;
                      }
                 }
           }

       This example defines a resource r0 which contains a single replicated
       device with volume number 0. The resource is replicated among hosts
       alice and bob, which have the IPv4 addresses 10.1.1.31 and 10.1.1.32
       and the node identifiers 0 and 1, respectively. On both hosts, the
       replicated device is called /dev/drbd1, and the actual data and
       metadata are stored on the lower-level device /dev/sda7. The
       connection between the hosts uses protocol C.

       Please refer to the DRBD User's Guide[1] for more examples.

FILE FORMAT

       DRBD configuration files consist of sections, which contain other
       sections and parameters depending on the section types. Each section
       consists of one or more keywords, sometimes a section name, an opening
       brace (“{”), the section's contents, and a closing brace (“}”).
       Parameters inside a section consist of a keyword, followed by one or
       more keywords or values, and a semicolon (“;”).

       Some parameter values have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.

       Comments start with a hash sign (“#”) and extend to the end of the
       line. In addition, any section can be prefixed with the keyword skip,
       which causes the section and any sub-sections to be ignored.

       Additional files can be included with the include file-pattern
       statement (see glob(7) for the expressions supported in file-pattern).
       Include statements are only allowed outside of sections.

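       For example (resource name and file pattern are illustrative), a
       resource can be taken out of the configuration temporarily with skip,
       and further files can be pulled in with include:

           skip resource r9 {
                # parsed, but ignored together with all sub-sections
                volume 0 {
                     device /dev/drbd9;
                }
           }
           include "drbd.d/extra/*.res";   # only valid outside of sections
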
       The following sections are defined (indentation indicates in which
       context):

           common
              [disk]
              [handlers]
              [net]
              [options]
              [startup]
           global
           [require-drbd-module-version-{eq,ne,gt,ge,lt,le}]
           resource
              connection
                 multiple path | 2 host
                 [net]
                 [volume]
                    [peer-device-options]
                 [peer-device-options]
              connection-mesh
                 [net]
              [disk]
              floating
              handlers
              [net]
              on
                 volume
                    disk
                    [disk]
              options
              stacked-on-top-of
              startup

       Sections in brackets affect other parts of the configuration: inside
       the common section, they apply to all resources. A disk section inside
       a resource or on section applies to all volumes of that resource, and
       a net section inside a resource section applies to all connections of
       that resource. This makes it possible to avoid repeating identical
       options for each resource, connection, or volume. Options can be
       overridden in a more specific resource, connection, on, or volume
       section.

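       For illustration (an abridged sketch; on sections, node IDs, and
       volumes are omitted), the following resource sets a net option for all
       of its connections and overrides it for one of them:

           resource r0 {
                net {
                     protocol C;       # default for all connections of r0
                }
                connection {
                     host alice;
                     host bob;
                }
                connection {
                     host alice;
                     host charlie;
                     net {
                          protocol A;  # overrides the resource-level value
                     }
                }
           }
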
       peer-device-options are resync-rate, c-plan-ahead, c-delay-target,
       c-fill-target, c-max-rate and c-min-rate. For backward compatibility,
       they can be specified in any disk options section as well. They are
       inherited into all relevant connections. If they are given at the
       connection level, they are inherited by all volumes on that
       connection. A peer-device-options section is started with the disk
       keyword.

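       For example (the rate is illustrative), a resync rate that applies
       only to the volumes of one connection can be set as follows:

           connection {
                host alice;
                host bob;
                disk {                  # opens a peer-device-options section
                     resync-rate 33M;
                }
           }
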
   Sections
       common

           This section can contain a disk, a handlers, a net, an options,
           and a startup section. All resources inherit the parameters in
           these sections as their default values.

       connection [name]

           Define a connection between two hosts. This section must contain
           two host parameters or multiple path sections. The optional name
           is used to refer to the connection in the system log and in other
           messages. If no name is specified, the peer's host name is used
           instead.

       path

           Define a path between two hosts. This section must contain two
           host parameters.

       connection-mesh

           Define a connection mesh between multiple hosts. This section must
           contain a hosts parameter, which has the host names as arguments.
           This section is a shortcut to define many connections which share
           the same network options.

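           A minimal sketch (host names are illustrative):

               connection-mesh {
                    hosts alice bob charlie;
                    net {
                         protocol C;
                    }
               }
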
       disk

           Define parameters for a volume. All parameters in this section are
           optional.

       floating [address-family] addr:port

           Like the on section, except that instead of the host name, a
           network address is used to determine whether it matches a floating
           section.

           The node-id parameter in this section is required. If the address
           parameter is not provided, no connections to peers will be created
           by default. The device, disk, and meta-disk parameters must be
           defined in, or inherited by, this section.

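           A sketch (addresses and devices are illustrative); the matching
           node is identified by its address rather than by its host name:

               resource r1 {
                    volume 0 {
                         device    /dev/drbd2;
                         disk      /dev/sda8;
                         meta-disk internal;
                    }
                    floating 10.1.1.31:7000 {
                         node-id   0;
                    }
                    floating 10.1.1.32:7000 {
                         node-id   1;
                    }
               }
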
       global

           Define some global parameters. All parameters in this section are
           optional. Only one global section is allowed in the configuration.

       require-drbd-module-version-{eq,ne,gt,ge,lt,le}

           This statement contains one of the valid forms and a three-digit
           version number (e.g., require-drbd-module-version-eq 9.0.16;). If
           the currently loaded DRBD kernel module does not match the
           specification, parsing is aborted. Comparison operator names have
           the same semantics as in test(1).

       handlers

           Define handlers to be invoked when certain events occur. The
           kernel passes the resource name in the first command-line argument
           and sets the following environment variables depending on the
           event's context:

           ·   For events related to a particular device: the device's minor
               number in DRBD_MINOR, the device's volume number in
               DRBD_VOLUME.

           ·   For events related to a particular device on a particular
               peer: the connection endpoints in DRBD_MY_ADDRESS, DRBD_MY_AF,
               DRBD_PEER_ADDRESS, and DRBD_PEER_AF; the device's local minor
               number in DRBD_MINOR, and the device's volume number in
               DRBD_VOLUME.

           ·   For events related to a particular connection: the connection
               endpoints in DRBD_MY_ADDRESS, DRBD_MY_AF, DRBD_PEER_ADDRESS,
               and DRBD_PEER_AF; and, for each device defined for that
               connection: the device's minor number in
               DRBD_MINOR_volume-number.

           ·   For events that identify a device, if a lower-level device is
               attached, the lower-level device's device name is passed in
               DRBD_BACKING_DEV (or DRBD_BACKING_DEV_volume-number).

           All parameters in this section are optional. Only a single handler
           can be defined for each event; if no handler is defined, nothing
           will happen.

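           For example (the helper script and its argument are illustrative;
           drbd-utils ships a notify-split-brain.sh script of this kind):

               handlers {
                    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
               }
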
       net

           Define parameters for a connection. All parameters in this section
           are optional.

       on host-name [...]

           Define the properties of a resource on a particular host or set of
           hosts. Specifying more than one host name can make sense in a
           setup with IP address failover, for example. The host-name
           argument must match the Linux host name (uname -n).

           Usually contains or inherits at least one volume section. The
           node-id and address parameters must be defined in this section.
           The device, disk, and meta-disk parameters must be defined in, or
           inherited by, this section.

           A normal configuration file contains two or more on sections for
           each resource. Also see the floating section.

       options

           Define parameters for a resource. All parameters in this section
           are optional.

       resource name

           Define a resource. Usually contains at least two on sections and
           at least one connection section.

       stacked-on-top-of resource

           Used instead of an on section for configuring a stacked resource
           with three to four nodes.

           Starting with DRBD 9, stacking is deprecated. It is advised to use
           resources which are replicated among more than two nodes instead.

       startup

           The parameters in this section determine the behavior of a
           resource at startup time.

       volume volume-number

           Define a volume within a resource. The volume numbers in the
           various volume sections of a resource define which devices on
           which hosts form a replicated device.

   Section connection Parameters
       host name [address [address-family] address] [port port-number]

           Defines an endpoint for a connection. Each host statement refers
           to an on section in a resource. If a port number is defined, this
           endpoint will use the specified port instead of the port defined
           in the on section. Each connection section must contain exactly
           two host parameters. Instead of two host parameters, the
           connection may contain multiple path sections.

   Section path Parameters
       host name [address [address-family] address] [port port-number]

           Defines an endpoint for a connection. Each host statement refers
           to an on section in a resource. If a port number is defined, this
           endpoint will use the specified port instead of the port defined
           in the on section. Each path section must contain exactly two host
           parameters.

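           For example (addresses are illustrative), two redundant network
           paths between the same pair of hosts can be defined as follows:

               connection {
                    path {
                         host alice address 10.1.1.31 port 7000;
                         host bob   address 10.1.1.32 port 7000;
                    }
                    path {
                         host alice address 192.168.1.31 port 7000;
                         host bob   address 192.168.1.32 port 7000;
                    }
               }
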
   Section connection-mesh Parameters
       hosts name...

           Defines all nodes of a mesh. Each name refers to an on section in
           a resource. The port that is defined in the on section will be
           used.

   Section disk Parameters
       al-extents extents

           DRBD automatically maintains a "hot" or "active" disk area likely
           to be written to again soon based on the recent write activity.
           The "active" disk area can be written to immediately, while
           "inactive" disk areas must be "activated" first, which requires a
           meta-data write. We also refer to this active disk area as the
           "activity log".

           The activity log saves meta-data writes, but the whole log must be
           resynced upon recovery of a failed node. The size of the activity
           log is a major factor in how long a resync will take and how fast
           a replicated disk will become consistent after a crash.

           The activity log consists of a number of 4-Megabyte segments; the
           al-extents parameter determines how many of those segments can be
           active at the same time. The default value for al-extents is 1237,
           with a minimum of 7 and a maximum of 65536.

           Note that the effective maximum may be smaller, depending on how
           you created the device meta data; see also drbdmeta(8). The
           effective maximum is 919 * (available on-disk activity-log
           ring-buffer area / 4 KiB - 1); the default 32 KiB ring buffer
           yields an effective maximum of 6433 (which covers more than 25 GiB
           of data). We recommend keeping this well within the amount your
           backend storage and replication link are able to resync within
           about five minutes.

       al-updates {yes | no}

           With this parameter, the activity log can be turned off entirely
           (see the al-extents parameter). This will speed up writes because
           fewer meta-data writes will be necessary, but the entire device
           needs to be resynchronized upon recovery of a failed primary node.
           The default value for al-updates is yes.

       disk-barrier,
       disk-flushes,
       disk-drain
           DRBD has three methods of handling the ordering of dependent write
           requests:

           disk-barrier
               Use disk barriers to make sure that requests are written to
               disk in the right order. Barriers ensure that all requests
               submitted before a barrier make it to the disk before any
               requests submitted after the barrier. This is implemented
               using 'tagged command queuing' on SCSI devices and 'native
               command queuing' on SATA devices. Only some devices and device
               stacks support this method. The device mapper (LVM) only
               supports barriers in some configurations.

               Note that on systems which do not support disk barriers,
               enabling this option can lead to data loss or corruption.
               Until DRBD 8.4.1, disk-barrier was turned on if the I/O stack
               below DRBD supported barriers. Kernels since linux-2.6.36 (or
               2.6.32 RHEL6) no longer make it possible to detect whether
               barriers are supported. Since drbd-8.4.2, this option is off
               by default and needs to be enabled explicitly.

           disk-flushes
               Use disk flushes between dependent write requests, also
               referred to as 'force unit access' by drive vendors. This
               forces all data to disk. This option is enabled by default.

           disk-drain
               Wait for the request queue to "drain" (that is, wait for the
               requests to finish) before submitting a dependent write
               request. This method requires that requests are stable on disk
               when they finish. Before DRBD 8.0.9, this was the only method
               implemented. This option is enabled by default. Do not disable
               it in production environments.

           From these three methods, DRBD will use the first that is enabled
           and supported by the backing storage device. If all three of these
           options are turned off, DRBD will submit write requests without
           bothering about dependencies. Depending on the I/O stack, write
           requests can be reordered, and they can be submitted in a
           different order on different cluster nodes. This can result in
           data loss or corruption. Therefore, turning off all three methods
           of controlling write ordering is strongly discouraged.

           A general guideline for configuring write ordering is to use disk
           barriers or disk flushes when using ordinary disks (or an ordinary
           disk array) with a volatile write cache. On storage without a
           cache or with a battery-backed write cache, disk draining can be a
           reasonable choice.

       disk-timeout
           If the lower-level device on which a DRBD device stores its data
           does not finish an I/O request within the defined disk-timeout,
           DRBD treats this as a failure. The lower-level device is detached,
           and the device's disk state advances to Diskless. If DRBD is
           connected to one or more peers, the failed request is passed on to
           one of them.

           This option is dangerous and may lead to kernel panic!

           "Aborting" requests, or force-detaching the disk, is intended for
           completely blocked/hung local backing devices which no longer
           complete requests at all, not even with error completions. In this
           situation, usually a hard-reset and failover is the only way out.

           By "aborting", basically faking a local error completion, we allow
           for a more graceful switchover by cleanly migrating services.
           Still, the affected node has to be rebooted "soon".

           By completing these requests, we allow the upper layers to re-use
           the associated data pages.

           If later the local backing device "recovers", and now DMAs some
           data from disk into the original request pages, in the best case
           it will just put random data into unused pages; but typically it
           will corrupt meanwhile completely unrelated data, causing all
           sorts of damage.

           This means that delayed successful completion, especially for READ
           requests, is a reason to panic(). We assume that a delayed *error*
           completion is OK, though we will still complain noisily about it.

           The default value of disk-timeout is 0, which stands for an
           infinite timeout. Timeouts are specified in units of 0.1 seconds.
           This option is available since DRBD 8.3.12.

       md-flushes
           Enable disk flushes and disk barriers on the meta-data device.
           This option is enabled by default. See the disk-flushes parameter.

       on-io-error handler

           Configure how DRBD reacts to I/O errors on a lower-level device.
           The following policies are defined:

           pass_on
               Change the disk status to Inconsistent, mark the failed block
               as inconsistent in the bitmap, and retry the I/O operation on
               a remote cluster node.

           call-local-io-error
               Call the local-io-error handler (see the handlers section).

           detach
               Detach the lower-level device and continue in diskless mode.

       read-balancing policy
           Distribute read requests among cluster nodes as defined by policy.
           The supported policies are prefer-local (the default),
           prefer-remote, round-robin, least-pending, when-congested-remote,
           32K-striping, 64K-striping, 128K-striping, 256K-striping,
           512K-striping and 1M-striping.

           This option is available since DRBD 8.4.1.

       resync-after res-name/volume

           Define that a device should only resynchronize after the specified
           other device. By default, no order between devices is defined, and
           all devices will resynchronize in parallel. Depending on the
           configuration of the lower-level devices, and the available
           network and disk bandwidth, this can slow down the overall resync
           process. This option can be used to form a chain or tree of
           dependencies among devices.

       rs-discard-granularity byte
           When rs-discard-granularity is set to a non-zero, positive value,
           DRBD tries to perform resync operations in requests of this size.
           If such a block contains only zero bytes on the sync source node,
           the sync target node will issue a discard/trim/unmap command for
           the area.

           The value is constrained by the discard granularity of the backing
           block device. If rs-discard-granularity is not a multiple of the
           discard granularity of the backing block device, DRBD rounds it
           up. The feature only becomes active if the backing block device
           reads back zeroes after a discard command.

           The default value is 0. This option is available since 8.4.7.

       discard-zeroes-if-aligned {yes | no}

           There are several aspects to discard/trim/unmap support on Linux
           block devices. Even if discard is supported in general, it may
           fail silently, or may partially ignore discard requests. Devices
           also announce whether reading from unmapped blocks returns defined
           data (usually zeroes), or undefined data (possibly old data,
           possibly garbage).

           If on different nodes DRBD is backed by devices with differing
           discard characteristics, discards may lead to data divergence (old
           data or garbage left over on one backend, zeroes due to unmapped
           areas on the other backend). Online verify would then potentially
           report tons of spurious differences. While probably harmless for
           most use cases (fstrim on a file system), DRBD cannot have that.

           To play it safe, we have to disable discard support if our local
           backend (on a Primary) does not support "discard_zeroes_data=true".
           We also have to translate discards to explicit zero-out on the
           receiving side, unless the receiving side (Secondary) supports
           "discard_zeroes_data=true", thereby allocating areas that were
           supposed to be unmapped.

           There are some devices (notably LVM/DM thin provisioning) that are
           capable of discard, but announce discard_zeroes_data=false. In the
           case of DM-thin, discards aligned to the chunk size will be
           unmapped, and reading from unmapped sectors will return zeroes.
           However, unaligned partial head or tail areas of discard requests
           will be silently ignored.

           If we now add a helper to explicitly zero out these unaligned
           partial areas, while passing on the discard of the aligned full
           chunks, we effectively achieve discard_zeroes_data=true on such
           devices.

           Setting discard-zeroes-if-aligned to yes will allow DRBD to use
           discards, and to announce discard_zeroes_data=true, even on
           backends that announce discard_zeroes_data=false.

           Setting discard-zeroes-if-aligned to no will cause DRBD to always
           fall back to zero-out on the receiving side, and to not even
           announce discard capabilities on the Primary, if the respective
           backend announces discard_zeroes_data=false.

           We used to ignore the discard_zeroes_data setting completely. To
           not break established and expected behaviour, and to avoid
           suddenly causing fstrim on thin-provisioned LVs to run
           out-of-space instead of freeing up space, the default value is
           yes.

           This option is available since 8.4.7.

   Section peer-device-options Parameters
       Please note that you open the section with the disk keyword.

       c-delay-target delay_target,
       c-fill-target fill_target,
       c-max-rate max_rate,
       c-plan-ahead plan_time
           Dynamically control the resync speed. The following modes are
           available:

           ·   Dynamic control with fill target (default). Enabled when
               c-plan-ahead is non-zero and c-fill-target is non-zero. The
               goal is to fill the buffers along the data path with a defined
               amount of data. This mode is recommended when DRBD-proxy is
               used. Configured with c-plan-ahead, c-fill-target and
               c-max-rate.

           ·   Dynamic control with delay target. Enabled when c-plan-ahead
               is non-zero (default) and c-fill-target is zero. The goal is
               to have a defined delay along the path. Configured with
               c-plan-ahead, c-delay-target and c-max-rate.

           ·   Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD
               will try to perform resync I/O at a fixed rate. Configured
               with resync-rate.

           The c-plan-ahead parameter defines how fast DRBD adapts to changes
           in the resync speed. It should be set to five times the network
           round-trip time or more. The default value of c-plan-ahead is 20,
           in units of 0.1 seconds.

           The c-fill-target parameter defines how much resync data DRBD
           should aim to have in-flight at all times. Common values for
           "normal" data paths range from 4K to 100K. The default value of
           c-fill-target is 100, in units of sectors.

           The c-delay-target parameter defines the delay in the resync path
           that DRBD should aim for. This should be set to five times the
           network round-trip time or more. The default value of
           c-delay-target is 10, in units of 0.1 seconds.

           The c-max-rate parameter limits the maximum bandwidth used by
           dynamically controlled resyncs. Setting this to zero removes the
           limitation (since DRBD 9.0.28). It should be set to either the
           bandwidth available between the DRBD hosts and the machines
           hosting DRBD-proxy, or to the available disk bandwidth. The
           default value of c-max-rate is 102400, in units of KiB/s.

           Dynamic resync speed control is available since DRBD 8.3.9.

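           A sketch of a fill-target-based configuration (all values are
           illustrative and depend on your hardware and network):

               disk {
                    c-plan-ahead  20;    # 2 seconds
                    c-fill-target 1M;
                    c-max-rate    100M;
               }
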
       c-min-rate min_rate
           A node which is primary and sync-source has to schedule
           application I/O requests and resync I/O requests. The c-min-rate
           parameter limits how much bandwidth is available for resync I/O;
           the remaining bandwidth is used for application I/O.

           A c-min-rate value of 0 means that there is no limit on the resync
           I/O bandwidth. This can slow down application I/O significantly.
           Use a value of 1 (1 KiB/s) for the lowest possible resync rate.

           The default value of c-min-rate is 250, in units of KiB/s.

       resync-rate rate

           Define how much bandwidth DRBD may use for resynchronizing. DRBD
           allows "normal" application I/O even during a resync. If the
           resync takes up too much bandwidth, application I/O can become
           very slow. This parameter makes it possible to avoid that. Please
           note that this option only works when the dynamic resync
           controller is disabled.

   Section global Parameters
       dialog-refresh time

           The DRBD init script can be used to configure and start DRBD
           devices, which can involve waiting for other cluster nodes. While
           waiting, the init script shows the remaining waiting time. The
           dialog-refresh parameter defines the number of seconds between
           updates of that countdown. The default value is 1; a value of 0
           turns off the countdown.

       disable-ip-verification
           Normally, DRBD verifies that the IP addresses in the configuration
           match the host names. Use the disable-ip-verification parameter to
           disable these checks.

       usage-count {yes | no | ask}
           As explained on DRBD's Online Usage Counter[2] web page, DRBD
           includes a mechanism for anonymously counting how many
           installations are using which versions of DRBD. The results are
           available on the web page for anyone to see.

           This parameter defines if a cluster node participates in the usage
           counter; the supported values are yes, no, and ask (ask the user,
           the default).

           We would like to ask users to participate in the online usage
           counter as this provides us valuable feedback for steering the
           development of DRBD.

       udev-always-use-vnr
           When udev asks drbdadm for a list of device-related symlinks,
           drbdadm suggests symlinks with differing naming conventions,
           depending on whether the resource has explicit volume VNR { }
           definitions, or only one single volume with the implicit volume
           number 0:

               # implicit single volume without "volume 0 {}" block
               DEVICE=drbd<minor>
               SYMLINK_BY_RES=drbd/by-res/<resource-name>
               SYMLINK_BY_DISK=drbd/by-disk/<backing-disk-name>

               # explicit volume definition: volume VNR { }
               DEVICE=drbd<minor>
               SYMLINK_BY_RES=drbd/by-res/<resource-name>/VNR
               SYMLINK_BY_DISK=drbd/by-disk/<backing-disk-name>

           If you define this parameter in the global section, drbdadm will
           always add the .../VNR part, regardless of whether the volume
           definition was implicit or explicit.

           For legacy backward compatibility, this is off by default, but we
           recommend enabling it.

   Section handlers Parameters
       after-resync-target cmd

           Called on a resync target when its node state changes from
           Inconsistent to Consistent as a resync finishes. This handler can
           be used for removing the snapshot created in the
           before-resync-target handler.

       before-resync-target cmd

           Called on a resync target before a resync begins. This handler can
           be used for creating a snapshot of the lower-level device for the
           duration of the resync: if the resync source becomes unavailable
           during a resync, reverting to the snapshot can restore a
           consistent state.

       before-resync-source cmd

           Called on a resync source before a resync begins.

       out-of-sync cmd

           Called on all nodes after a verify finishes and out-of-sync blocks
           were found. This handler is mainly used for monitoring purposes.
           An example would be to call a script that sends an alert SMS.

       quorum-lost cmd

           Called on a Primary that lost quorum. This handler is usually used
           to reboot the node if it is not possible to restart the
           application that uses the storage on top of DRBD.

       fence-peer cmd

           Called when a node should fence a resource on a particular peer.
           The handler should not use the same communication path that DRBD
           uses for talking to the peer.

       unfence-peer cmd

           Called when a node should remove fencing constraints from other
           nodes.

       initial-split-brain cmd

           Called when DRBD connects to a peer and detects that the peer is
           in a split-brain state with the local node. This handler is also
           called for split-brain scenarios which will be resolved
           automatically.

       local-io-error cmd

           Called when an I/O error occurs on a lower-level device.

       pri-lost cmd

           The local node is currently primary, but DRBD believes that it
           should become a sync target. The node should give up its primary
           role.

       pri-lost-after-sb cmd

           The local node is currently primary, but it has lost the
           after-split-brain auto recovery procedure. The node should be
           abandoned.

       pri-on-incon-degr cmd

           The local node is primary, and neither the local lower-level
           device nor a lower-level device on a peer is up to date. (The
           primary has no device to read from or to write to.)

       split-brain cmd

           DRBD has detected a split-brain situation which could not be
           resolved automatically. Manual recovery is necessary. This handler
           can be used to call for administrator attention.

       disconnected cmd

           A connection to a peer went down. The handler can learn about the
           reason for the disconnect from the DRBD_CSTATE environment
           variable.

   Section net Parameters
       after-sb-0pri policy
           Define how to react if a split-brain scenario is detected and none
           of the two nodes is in primary role. (We detect split-brain
           scenarios when two nodes connect; split-brain decisions are always
           between two nodes.) The defined policies are:

           disconnect
               No automatic resynchronization; simply disconnect.

           discard-younger-primary,
           discard-older-primary
               Resynchronize from the node which became primary first
               (discard-younger-primary) or last (discard-older-primary). If
               both nodes became primary independently, the
               discard-least-changes policy is used.

           discard-zero-changes
               If only one of the nodes wrote data since the split brain
               situation was detected, resynchronize from this node to the
               other. If both nodes wrote data, disconnect.

           discard-least-changes
               Resynchronize from the node with more modified blocks.

           discard-node-nodename
               Always resynchronize to the named node.

       after-sb-1pri policy
           Define how to react if a split-brain scenario is detected, with
           one node in primary role and one node in secondary role. (We
           detect split-brain scenarios when two nodes connect, so
           split-brain decisions are always between two nodes.) The defined
           policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           consensus
               Discard the data on the secondary node if the after-sb-0pri
               algorithm would also discard the data on the secondary node.
               Otherwise, disconnect.

           violently-as0p
               Always take the decision of the after-sb-0pri algorithm, even
               if it causes an erratic change of the primary's view of the
               data. This is only useful if a single-node file system (i.e.,
               not OCFS2 or GFS) with the allow-two-primaries flag is used.
               This option can cause the primary node to crash, and should
               not be used.

           discard-secondary
               Discard the data on the secondary node.

           call-pri-lost-after-sb
               Always take the decision of the after-sb-0pri algorithm. If
               the decision is to discard the data on the primary node, call
               the pri-lost-after-sb handler on the primary node.

       after-sb-2pri policy
           Define how to react if a split-brain scenario is detected and both
           nodes are in primary role. (We detect split-brain scenarios when
           two nodes connect, so split-brain decisions are always between two
           nodes.) The defined policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           violently-as0p
               See the violently-as0p policy for after-sb-1pri.

           call-pri-lost-after-sb
               Call the pri-lost-after-sb helper program on one of the
               machines unless that machine can demote to secondary. The
               helper program is expected to reboot the machine, which brings
               the node into a secondary role. Which machine runs the helper
               program is determined by the after-sb-0pri strategy.

       allow-two-primaries

           The most common way to configure DRBD devices is to allow only one
           node to be primary (and thus writable) at a time.

           In some scenarios it is preferable to allow two nodes to be
           primary at once; a mechanism outside of DRBD then must make sure
           that writes to the shared, replicated device happen in a
           coordinated way. This can be done with a shared-storage cluster
           file system like OCFS2 and GFS, or with virtual machine images and
           a virtual machine manager that can migrate virtual machines
           between physical machines.

           The allow-two-primaries parameter tells DRBD to allow two nodes to
           be primary at the same time. Never enable this option when using a
           non-distributed file system; otherwise, data corruption and node
           crashes will result!

       always-asbp
           Normally the automatic after-split-brain policies are only used if
           current states of the UUIDs do not indicate the presence of a
           third node.

           With this option you request that the automatic after-split-brain
           policies are used as long as the data sets of the nodes are
           somehow related. This might cause a full sync, if the UUIDs
           indicate the presence of a third node. (Or double faults led to
           strange UUID sets.)

       connect-int time

           As soon as a connection between two nodes is configured with
           drbdsetup connect, DRBD immediately tries to establish the
           connection. If this fails, DRBD waits for connect-int seconds and
           then repeats. The default value of connect-int is 10 seconds.

       cram-hmac-alg hash-algorithm

           Configure the hash-based message authentication code (HMAC) or
           secure hash algorithm to use for peer authentication. The kernel
           supports a number of different algorithms, some of which may be
           loadable as kernel modules. See the shash algorithms listed in
           /proc/crypto. By default, cram-hmac-alg is unset. Peer
           authentication also requires a shared-secret to be configured.

       csums-alg hash-algorithm

           Normally, when two nodes resynchronize, the sync target requests a
           piece of out-of-sync data from the sync source, and the sync
           source sends the data. With many usage patterns, a significant
           number of those blocks will actually be identical.

           When a csums-alg algorithm is specified, when requesting a piece
           of out-of-sync data, the sync target also sends along a hash of
           the data it currently has. The sync source compares this hash with
           its own version of the data. It sends the sync target the new data
           if the hashes differ, and tells it that the data are the same
           otherwise. This reduces the network bandwidth required, at the
           cost of higher cpu utilization and possibly increased I/O on the
           sync target.

           The csums-alg can be set to one of the secure hash algorithms
           supported by the kernel; see the shash algorithms listed in
           /proc/crypto. By default, csums-alg is unset.

       csums-after-crash-only

           Enabling this option (and csums-alg, above) makes it possible to
           use the checksum-based resync only for the first resync after a
           primary crash, but not for later "network hiccups".

           In most cases, blocks that are marked as need-to-be-resynced are
           in fact changed, so calculating checksums, and both reading and
           writing the blocks on the resync target, is all effective
           overhead.

           The advantage of checksum-based resync is mostly after primary
           crash recovery, where the recovery marked larger areas (those
           covered by the activity log) as need-to-be-resynced, just in case.
           Introduced in 8.4.5.

       data-integrity-alg alg
           DRBD normally relies on the data integrity checks built into the
           TCP/IP protocol, but if a data integrity algorithm is configured,
           it will additionally use this algorithm to make sure that the data
           received over the network match what the sender has sent. If a
           data integrity error is detected, DRBD will close the network
           connection and reconnect, which will trigger a resync.

           The data-integrity-alg can be set to one of the secure hash
           algorithms supported by the kernel; see the shash algorithms
           listed in /proc/crypto. By default, this mechanism is turned off.

           Because of the CPU overhead involved, we recommend not to use this
           option in production environments. Also see the notes on data
           integrity below.

       fencing fencing_policy

           Fencing is a preventive measure to avoid situations where both
           nodes are primary and disconnected. This is also known as a
           split-brain situation. DRBD supports the following fencing
           policies:

           dont-care
               No fencing actions are taken. This is the default policy.

           resource-only
               If a node becomes a disconnected primary, it tries to fence
               the peer. This is done by calling the fence-peer handler. The
               handler is supposed to reach the peer over an alternative
               communication path and call 'drbdadm outdate minor' there.

           resource-and-stonith
               If a node becomes a disconnected primary, it freezes all its
               IO operations and calls its fence-peer handler. The fence-peer
               handler is supposed to reach the peer over an alternative
               communication path and call 'drbdadm outdate minor' there. In
               case it cannot do that, it should stonith the peer. IO is
               resumed as soon as the situation is resolved. In case the
               fence-peer handler fails, I/O can be resumed manually with
               'drbdadm resume-io'.

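           For example (the handler paths are illustrative; drbd-utils ships
           crm-fence-peer scripts for Pacemaker clusters):

               net {
                    fencing resource-and-stonith;
               }
               handlers {
                    fence-peer   "/usr/lib/drbd/crm-fence-peer.9.sh";
                    unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
               }
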
       ko-count number

           If a secondary node fails to complete a write request in ko-count
           times the timeout parameter, it is excluded from the cluster. The
           primary node then sets the connection to this secondary node to
           Standalone. To disable this feature, you should explicitly set it
           to 0; defaults may change between versions.

       max-buffers number

           Limits the memory usage per DRBD minor device on the receiving
           side, or for internal buffers during resync or online-verify. Unit
           is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible
           setting is hard coded to 32 (=128 KiB). These buffers are used to
           hold data blocks while they are written to/read from disk. To
           avoid possible distributed deadlocks on congestion, this setting
           is used as a throttle threshold rather than a hard limit. Once
           more than max-buffers pages are in use, further allocation from
           this pool is throttled. You want to increase max-buffers if you
           cannot saturate the IO backend on the receiving side.

       max-epoch-size number

           Define the maximum number of write requests DRBD may issue before
           issuing a write barrier. The default value is 2048, with a minimum
           of 1 and a maximum of 20000. Setting this parameter to a value
           below 10 is likely to decrease performance.

       on-congestion policy,
       congestion-fill threshold,
       congestion-extents threshold
           By default, DRBD blocks when the TCP send queue is full. This
           prevents applications from generating further write requests until
           more buffer space becomes available again.

           When DRBD is used together with DRBD-proxy, it can be better to
           use the pull-ahead on-congestion policy, which can switch DRBD
           into ahead/behind mode before the send queue is full. DRBD then
           records the differences between itself and the peer in its bitmap,
           but it no longer replicates them to the peer. When enough buffer
           space becomes available again, the node resynchronizes with the
           peer and switches back to normal replication.

           This has the advantage of not blocking application I/O even when
           the queues fill up, and the disadvantage that peer nodes can fall
           behind much further. Also, while resynchronizing, peer nodes will
           become inconsistent.

           The available congestion policies are block (the default) and
           pull-ahead. The congestion-fill parameter defines how much data is
           allowed to be "in flight" in this connection. The default value is
           0, which disables this mechanism of congestion control, with a
           maximum of 10 GiBytes. The congestion-extents parameter defines
           how many bitmap extents may be active before switching into
           ahead/behind mode, with the same default and limits as the
           al-extents parameter. The congestion-extents parameter is
           effective only when set to a value smaller than al-extents.

           Ahead/behind mode is available since DRBD 8.3.10.

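           A sketch for a DRBD-proxy setup (thresholds are illustrative):

               net {
                    on-congestion      pull-ahead;
                    congestion-fill    400M;
                    congestion-extents 1000;
               }
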
       ping-int interval

           When the TCP/IP connection to a peer is idle for more than
           ping-int seconds, DRBD will send a keep-alive packet to make sure
           that a failed peer or network connection is detected reasonably
           soon. The default value is 10 seconds, with a minimum of 1 and a
           maximum of 120 seconds. The unit is seconds.

       ping-timeout timeout

           Define the timeout for replies to keep-alive packets. If the peer
           does not reply within ping-timeout, DRBD will close and try to
           reestablish the connection. The default value is 0.5 seconds, with
           a minimum of 0.1 seconds and a maximum of 3 seconds. The unit is
           tenths of a second.

       socket-check-timeout timeout
           In setups involving a DRBD-proxy and connections that experience a
           lot of buffer-bloat, it might be necessary to set ping-timeout to
           an unusually high value. By default, DRBD uses the same value to
           wait for a newly established TCP connection to become stable.
           Since the DRBD-proxy is usually located in the same data center,
           such a long wait time may hinder DRBD's connect process.

           In such setups, socket-check-timeout should be set to at least the
           round-trip time between DRBD and DRBD-proxy, i.e., in most cases
           to 1.

           The default unit is tenths of a second; the default value is 0
           (which causes DRBD to use the value of ping-timeout instead).
           Introduced in 8.4.5.

       protocol name
           Use the specified protocol on this connection. The supported
           protocols are:

           A
               Writes to the DRBD device complete as soon as they have
               reached the local disk and the TCP/IP send buffer.

           B
               Writes to the DRBD device complete as soon as they have
               reached the local disk, and all peers have acknowledged the
               receipt of the write requests.

           C
               Writes to the DRBD device complete as soon as they have
               reached the local and all remote disks.

       rcvbuf-size size

           Configure the size of the TCP/IP receive buffer. A value of 0 (the
           default) causes the buffer size to adjust dynamically. This
           parameter usually does not need to be set, but it can be set to a
           value up to 10 MiB. The default unit is bytes.

       rr-conflict policy
           This option helps to solve the cases when the outcome of the
           resync decision is incompatible with the current role assignment
           in the cluster. The defined policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           retry-connect
               Disconnect now, and retry to connect immediately afterwards.

           violently
               Resync to the primary node is allowed, violating the
               assumption that data on a block device are stable for one of
               the nodes. Do not use this option, it is dangerous.

           call-pri-lost
               Call the pri-lost handler on one of the machines. The handler
               is expected to reboot the machine, which puts it into
               secondary role.

       shared-secret secret

           Configure the shared secret used for peer authentication. The
           secret is a string of up to 64 characters. Peer authentication
           also requires the cram-hmac-alg parameter to be set.

       sndbuf-size size

           Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13 /
           8.2.7, a value of 0 (the default) causes the buffer size to adjust
           dynamically. Values below 32 KiB are harmful to the throughput on
           this connection. Large buffer sizes can be useful especially when
           protocol A is used over high-latency networks; the maximum value
           supported is 10 MiB.

       tcp-cork
           By default, DRBD uses the TCP_CORK socket option to prevent the
           kernel from sending partial messages; this results in fewer and
           bigger packets on the network. Some network stacks can perform
           worse with this optimization. On these, the tcp-cork parameter can
           be used to turn this optimization off.

       timeout time

           Define the timeout for replies over the network: if a peer node
           does not send an expected reply within the specified timeout, it
           is considered dead and the TCP/IP connection is closed. The
           timeout value must be lower than connect-int and lower than
           ping-int. The default is 6 seconds; the value is specified in
           tenths of a second.

       transport type

           With DRBD 9, the network transport used by DRBD is loaded as a
           separate module. With this option you can specify which transport
           and module to load. At present, only two options exist, tcp and
           rdma. Please note that currently the RDMA transport module is only
           available with a license purchased from LINBIT. The default is
           tcp.

       use-rle

           Each replicated device on a cluster node has a separate bitmap for
           each of its peer devices. The bitmaps are used for tracking the
           differences between the local and peer device: depending on the
           cluster state, a disk range can be marked as different from the
           peer in the device's bitmap, in the peer device's bitmap, or in
           both bitmaps. When two cluster nodes connect, they exchange each
           other's bitmaps, and they each compute the union of the local and
           peer bitmap to determine the overall differences.

           Bitmaps of very large devices are also relatively large, but they
           usually compress very well using run-length encoding. This can
           save time and bandwidth for the bitmap transfers.

           The use-rle parameter determines if run-length encoding should be
           used. It is on by default since DRBD 8.4.0.

1144       verify-alg hash-algorithm
1145           Online verification (drbdadm verify) computes and compares
1146           checksums of disk blocks (i.e., hash values) in order to detect if
1147           they differ. The verify-alg parameter determines which algorithm to
1148           use for these checksums. It must be set to one of the secure hash
1149           algorithms supported by the kernel before online verify can be
1150           used; see the shash algorithms listed in /proc/crypto.
1151
1152           We recommend to schedule online verifications regularly during
1153           low-load periods, for example once a month. Also see the notes on
1154           data integrity below.
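
           For example, to enable online verification with one of the
           kernel's shash algorithms (sha256 here, assuming it appears in
           /proc/crypto):

               net {
                    verify-alg sha256;
               }

           A verification run on a resource can then be started with drbdadm
           verify, for example from a monthly cron job.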

       allow-remote-read bool-value
           Allows or disallows DRBD to read from a peer node.

           When the disk of a primary node is detached, DRBD will try to
           continue reading and writing from another node in the cluster. For
           this purpose, it searches for nodes with up-to-date data, and uses
           any found node to resume operations. In some cases it may not be
           desirable to read back data from a peer node, because the node
           should only be used as a replication target. In this case, the
           allow-remote-read parameter can be set to no, which prohibits the
           node from reading data from its peers.

           The allow-remote-read parameter is available since DRBD 9.0.19,
           and defaults to yes.
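
           For example, on a node that serves purely as a replication target
           and must never read back data from its peers:

               net {
                    allow-remote-read no;
               }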

   Section on Parameters
       address [address-family] address:port

           Defines the address family, address, and port of a connection
           endpoint.

           The address families ipv4, ipv6, ssocks (Dolphin Interconnect
           Solutions' "super sockets"), sdp (Infiniband Sockets Direct
           Protocol), and sci are supported (sci is an alias for ssocks). If
           no address family is specified, ipv4 is assumed. For all address
           families except ipv6, the address is specified in IPv4 address
           notation (for example, 1.2.3.4). For ipv6, the address is enclosed
           in brackets and uses IPv6 address notation (for example,
           [fd01:2345:6789:abcd::1]). The port is always specified as a
           decimal number from 1 to 65535.

           On each host, the port numbers must be unique for each address;
           ports cannot be shared.

       node-id value

           Defines the unique node identifier for a node in the cluster. Node
           identifiers are used to identify individual nodes in the network
           protocol, and to assign bitmap slots to nodes in the metadata.

           Node identifiers can only be reassigned in a cluster when the
           cluster is down. It is essential that the node identifiers in the
           configuration and in the device metadata are changed consistently
           on all hosts. To change the metadata, dump the current state with
           drbdmeta dump-md, adjust the bitmap slot assignment, and update
           the metadata with drbdmeta restore-md.

           The node-id parameter exists since DRBD 9. Its value ranges from 0
           to 16; there is no default.
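
           A minimal on section combining an IPv6 endpoint with a node
           identifier might look like this (host name, address, and port are
           illustrative):

               on charlie {
                    node-id   2;
                    address   ipv6 [fd01:2345:6789:abcd::1]:7789;
               }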

   Section options Parameters (Resource Options)
       auto-promote bool-value
           A resource must be promoted to primary role before any of its
           devices can be mounted or opened for writing.

           Before DRBD 9, this could only be done explicitly ("drbdadm
           primary"). Since DRBD 9, the auto-promote parameter allows a
           resource to be promoted to primary role automatically when one of
           its devices is mounted or opened for writing. As soon as all
           devices are unmounted or closed with no more remaining users, the
           role of the resource changes back to secondary.

           Automatic promotion only succeeds if the cluster state allows it
           (that is, if an explicit drbdadm primary command would succeed).
           Otherwise, mounting or opening the device fails as it already did
           before DRBD 9: the mount(2) system call fails with errno set to
           EROFS (Read-only file system); the open(2) system call fails with
           errno set to EMEDIUMTYPE (wrong medium type).

           Irrespective of the auto-promote parameter, if a device is
           promoted explicitly (drbdadm primary), it also needs to be demoted
           explicitly (drbdadm secondary).

           The auto-promote parameter is available since DRBD 9.0.0, and
           defaults to yes.
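
           For example, when a cluster manager such as Pacemaker is in charge
           of promoting and demoting resources, automatic promotion can be
           turned off in the resource options:

               options {
                    auto-promote no;
               }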

       cpu-mask cpu-mask

           Set the CPU affinity mask for DRBD kernel threads. The CPU mask is
           specified as a hexadecimal number. The default value is 0, which
           lets the scheduler decide which kernel threads run on which CPUs.
           CPU numbers in cpu-mask which do not exist in the system are
           ignored.
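
           For example, the hexadecimal mask 3 (binary 11) restricts the DRBD
           kernel threads to CPUs 0 and 1:

               options {
                    cpu-mask 3;
               }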

       on-no-data-accessible policy
           Determine how to deal with I/O requests when the requested data is
           not available locally or remotely (for example, when all disks
           have failed). The defined policies are:

           io-error
               System calls fail with errno set to EIO.

           suspend-io
               The resource suspends I/O. I/O can be resumed by (re)attaching
               the lower-level device, by connecting to a peer which has
               access to the data, or by forcing DRBD to resume I/O with
               drbdadm resume-io res. When no data is available, forcing I/O
               to resume will result in the same behavior as the io-error
               policy.

           This setting is available since DRBD 8.3.9; the default policy is
           io-error.

       peer-ack-window value

           On each node and for each device, DRBD maintains a bitmap of the
           differences between the local and remote data for each peer
           device. For example, in a three-node setup (nodes A, B, C) each
           with a single device, every node maintains one bitmap for each of
           its peers.

           When nodes receive write requests, they know how to update the
           bitmaps for the writing node, but not how to update the bitmaps
           between themselves. In this example, when a write request
           propagates from node A to B and C, nodes B and C know that they
           have the same data as node A, but not whether or not they both
           have the same data.

           As a remedy, the writing node occasionally sends peer-ack packets
           to its peers which tell them which state they are in relative to
           each other.

           The peer-ack-window parameter specifies how much data a primary
           node may send before sending a peer-ack packet. A low value causes
           increased network traffic; a high value causes less network
           traffic but higher memory consumption on secondary nodes and
           higher resync times between the secondary nodes after primary node
           failures. (Note: peer-ack packets may be sent due to other reasons
           as well, e.g. membership changes or expiry of the peer-ack-delay
           timer.)

           The default value for peer-ack-window is 2 MiB; the default unit
           is sectors. This option is available since DRBD 9.0.0.

       peer-ack-delay expiry-time

           If after the last finished write request no new write request gets
           issued for expiry-time, then a peer-ack packet is sent. If a new
           write request is issued before the timer expires, the timer gets
           reset to expiry-time. (Note: peer-ack packets may be sent due to
           other reasons as well, e.g. membership changes or the
           peer-ack-window option.)

           This parameter may influence resync behavior on remote nodes. Peer
           nodes need to wait until they receive a peer-ack before releasing
           a lock on an AL-extent. Resync operations between peers may need
           to wait for these locks.

           The default value for peer-ack-delay is 100 milliseconds; the
           default unit is milliseconds. This option is available since DRBD
           9.0.0.
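
           For example, to send peer-ack packets twice as often as the
           default window allows, and after at most 50 milliseconds of write
           inactivity (the values shown are illustrative, not tuning advice):

               options {
                    peer-ack-window 1M;
                    peer-ack-delay  50;
               }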

       quorum value

           When activated, a cluster partition requires quorum in order to
           modify the replicated data set. That means a node in the cluster
           partition can only be promoted to primary if the cluster partition
           has quorum. Every node with a disk directly connected to the node
           that should be promoted counts. If a primary node needs to execute
           a write request but the cluster partition has lost quorum, it will
           freeze I/O or reject the write request with an error (depending on
           the on-no-quorum setting). Upon losing quorum, a primary always
           invokes the quorum-lost handler. The handler is intended for
           notification purposes; its return code is ignored.

           The option's value can be set to off, majority, all or a numeric
           value. If you set it to a numeric value, make sure that the value
           is greater than half of your number of nodes. Quorum is a
           mechanism to avoid data divergence; it can be used instead of
           fencing when there are more than two replicas. It defaults to off.

           If all missing nodes are marked as outdated, a partition always
           has quorum, no matter how small it is. I.e., if you disconnect all
           secondary nodes gracefully, a single primary continues to operate.
           As soon as a single secondary is lost ungracefully, it has to be
           assumed that it forms a partition with all the missing outdated
           nodes. Since that partition might be larger than the local one,
           quorum is lost at this moment.

           If you want to allow permanently diskless nodes to gain quorum, it
           is recommended not to use majority or all. It is recommended to
           specify an absolute number, since DRBD's heuristic to determine
           the complete number of diskful nodes in the cluster is unreliable.

           The quorum implementation is available starting with the DRBD
           kernel driver version 9.0.7.

       quorum-minimum-redundancy value

           This option sets the minimal required number of nodes with an
           UpToDate disk to allow the partition to gain quorum. This is a
           different requirement than the plain quorum option expresses.

           The option's value can be set to off, majority, all or a numeric
           value. If you set it to a numeric value, make sure that the value
           is greater than half of your number of nodes.

           If you want to allow permanently diskless nodes to gain quorum, it
           is recommended not to use majority or all. It is recommended to
           specify an absolute number, since DRBD's heuristic to determine
           the complete number of diskful nodes in the cluster is unreliable.

           This option is available starting with the DRBD kernel driver
           version 9.0.10.

       on-no-quorum {io-error | suspend-io}

           By default, DRBD freezes I/O on a device that has lost quorum. If
           on-no-quorum is set to io-error instead, all I/O operations are
           completed with an error when quorum is lost.

           The on-no-quorum option is available starting with the DRBD kernel
           driver version 9.0.8.
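
           For example, a three-node cluster that requires a majority
           partition for writes and fails I/O with an error when quorum is
           lost could carry the following resource options:

               options {
                    quorum       majority;
                    on-no-quorum io-error;
               }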

   Section startup Parameters
       The parameters in this section define the behavior of DRBD at system
       startup time, in the DRBD init script. They have no effect once the
       system is up and running.

       degr-wfc-timeout timeout

           Define how long to wait until all peers are connected in case the
           cluster consisted of a single node only when the system went down.
           This parameter is usually set to a value smaller than wfc-timeout.
           The assumption here is that peers which were unreachable before a
           reboot are less likely to be reachable after the reboot, so
           waiting is less likely to help.

           The timeout is specified in seconds. The default value is 0, which
           stands for an infinite timeout. Also see the wfc-timeout
           parameter.

       outdated-wfc-timeout timeout

           Define how long to wait until all peers are connected if all peers
           were outdated when the system went down. This parameter is usually
           set to a value smaller than wfc-timeout. The assumption here is
           that an outdated peer cannot have become primary in the meantime,
           so we don't need to wait for it as long as for a node which was
           alive before.

           The timeout is specified in seconds. The default value is 0, which
           stands for an infinite timeout. Also see the wfc-timeout
           parameter.

       stacked-timeouts
           On stacked devices, the wfc-timeout and degr-wfc-timeout
           parameters in the configuration are usually ignored, and both
           timeouts are set to twice the connect-int timeout. The
           stacked-timeouts parameter tells DRBD to use the wfc-timeout and
           degr-wfc-timeout parameters as defined in the configuration, even
           on stacked devices. Only use this parameter if the peer of the
           stacked resource is usually not available, or will not become
           primary. Incorrect use of this parameter can lead to unexpected
           split-brain scenarios.

       wait-after-sb
           This parameter causes DRBD to continue waiting in the init script
           even when a split-brain situation has been detected, and the nodes
           therefore refuse to connect to each other.

       wfc-timeout timeout

           Define how long the init script waits until all peers are
           connected. This can be useful in combination with a cluster
           manager which cannot manage DRBD resources: when the cluster
           manager starts, the DRBD resources will already be up and running.
           With a more capable cluster manager such as Pacemaker, it makes
           more sense to let the cluster manager control DRBD resources. The
           timeout is specified in seconds. The default value is 0, which
           stands for an infinite timeout. Also see the degr-wfc-timeout
           parameter.
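
           For example, an init script could wait up to two minutes for all
           peers, but only one minute if the cluster was already degraded
           before the reboot (the values shown are illustrative):

               startup {
                    wfc-timeout      120;
                    degr-wfc-timeout 60;
               }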

   Section volume Parameters
       device /dev/drbdminor-number

           Define the device name and minor number of a replicated block
           device. This is the device that applications are supposed to
           access; in most cases, the device is not used directly, but as a
           file system. This parameter is required and the standard device
           naming convention is assumed.

           In addition to this device, udev will create
           /dev/drbd/by-res/resource/volume and
           /dev/drbd/by-disk/lower-level-device symlinks to the device.

       disk {[disk] | none}

           Define the lower-level block device that DRBD will use for storing
           the actual data. While the replicated drbd device is configured,
           the lower-level device must not be used directly. Even read-only
           access with tools like dumpe2fs(8) and similar is not allowed. The
           keyword none specifies that no lower-level block device is
           configured; this also overrides inheritance of the lower-level
           device.

       meta-disk internal,
       meta-disk device,
       meta-disk device [index]

           Define where the metadata of a replicated block device resides: it
           can be internal, meaning that the lower-level device contains both
           the data and the metadata, or on a separate device.

           When the index form of this parameter is used, multiple replicated
           devices can share the same metadata device, each using a separate
           index. Each index occupies 128 MiB of data, which corresponds to a
           replicated device size of at most 4 TiB with two cluster nodes. We
           no longer recommend sharing metadata devices; instead, use the LVM
           volume manager to create metadata devices as needed.

           When the index form of this parameter is not used, the size of the
           lower-level device determines the size of the metadata. The size
           needed is 36 KiB + (size of lower-level device) / 32K * (number of
           nodes - 1). If the metadata device is bigger than that, the extra
           space is not used.

           This parameter is required if a disk other than none is specified,
           and ignored if disk is set to none. A meta-disk parameter without
           a disk parameter is not allowed.
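
           As a worked example of the size formula: a 1 TiB lower-level
           device replicated between two nodes needs 36 KiB + 1 TiB / 32K *
           (2 - 1) = 36 KiB + 32 MiB of metadata. A volume definition with a
           dedicated external metadata device (device names here are
           illustrative) might look like this:

               volume 1 {
                    device    /dev/drbd2;
                    disk      /dev/sdb1;
                    meta-disk /dev/sdc1;
               }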


NOTES ON DATA INTEGRITY

       DRBD supports two different mechanisms for data integrity checking:
       first, the data-integrity-alg network parameter allows adding a
       checksum to the data sent over the network. Second, the online
       verification mechanism (drbdadm verify and the verify-alg parameter)
       allows checking for differences in the on-disk data.

       Both mechanisms can produce false positives if the data is modified
       during I/O (i.e., while it is being sent over the network or written
       to disk). This does not always indicate a problem: for example, some
       file systems and applications do modify data under I/O for certain
       operations. Swap space can also undergo changes while under I/O.

       Network data integrity checking tries to identify data modification
       during I/O by verifying the checksums on the sender side after sending
       the data. If it detects a mismatch, it logs an error. The receiver
       also logs an error when it detects a mismatch. Thus, an error logged
       only on the receiver side indicates an error on the network, and an
       error logged on both sides indicates data modification under I/O.

       The most recent example of systematic data corruption was identified
       as a bug in the TCP offloading engine and driver of a certain type of
       GBit NIC in 2007: the data corruption happened on the DMA transfer
       from core memory to the card. Because the TCP checksums were
       calculated on the card, the TCP/IP protocol checksums did not reveal
       this problem.


VERSION

       This document was revised for version 9.0.0 of the DRBD distribution.


AUTHOR

       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
       Ellenberg <lars.ellenberg@linbit.com>.


REPORTING BUGS

       Report bugs to <drbd-user@lists.linbit.com>.


COPYRIGHT

       Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
       Lars Ellenberg. This is free software; see the source for copying
       conditions. There is NO warranty; not even for MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.


SEE ALSO

       drbd(8), drbdsetup(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
       Site[3]


NOTES

        1. DRBD User's Guide
           http://www.drbd.org/users-guide/

        2. Online Usage Counter
           http://usage.drbd.org

        3. DRBD Web Site
           http://www.drbd.org/


DRBD 9.0.x                      17 January 2018                   DRBD.CONF(5)