DRBDSETUP(8)                 System Administration                DRBDSETUP(8)


NAME

       drbdsetup - Configure the DRBD kernel module


SYNOPSIS

       drbdsetup command {argument...} [option...]


DESCRIPTION

       The drbdsetup utility serves to configure the DRBD kernel module and to
       show its current configuration. Users usually interact with the drbdadm
       utility, which provides a more high-level interface to DRBD than
       drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
       drbdsetup.)

       Some option arguments have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.
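
       For example (resource name, peer node id, and volume number are
       illustrative), the following two commands are equivalent, because the
       c-max-rate option defaults to a scale of Kilo:

           drbdsetup peer-device-options r0 1 0 --c-max-rate 102400
           drbdsetup peer-device-options r0 1 0 --c-max-rate 100M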

COMMANDS

       drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
       drbdsetup disk-options minor
           The attach command attaches a lower-level device to an existing
           replicated device. The disk-options command changes the disk
           options of an attached lower-level device. In either case, the
           replicated device must have been created with drbdsetup new-minor.

           Both commands refer to the replicated device by its minor number.
           lower_dev is the name of the lower-level device.  meta_data_dev is
           the name of the device containing the metadata, and may be the same
           as lower_dev.  meta_data_index is either a numeric metadata index,
           or the keyword internal for internal metadata, or the keyword
           flexible for variable-size external metadata. (See the example
           invocation at the end of this option list.) Available options:

           --al-extents extents
               DRBD automatically maintains a "hot" or "active" disk area
               likely to be written to again soon based on the recent write
               activity. The "active" disk area can be written to immediately,
               while "inactive" disk areas must be "activated" first, which
               requires a meta-data write. We also refer to this active disk
               area as the "activity log".

               The activity log saves meta-data writes, but the whole log must
               be resynced upon recovery of a failed node. The size of the
               activity log is a major factor of how long a resync will take
               and how fast a replicated disk will become consistent after a
               crash.

               The activity log consists of a number of 4-Megabyte segments;
               the al-extents parameter determines how many of those segments
               can be active at the same time. The default value for
               al-extents is 1237, with a minimum of 7 and a maximum of 65536.

               Note that the effective maximum may be smaller, depending on
               how you created the device meta data; see also drbdmeta(8). The
               effective maximum is 919 * (available on-disk activity-log
               ring-buffer area / 4 kB - 1); the default 32 kB ring buffer
               allows a maximum of 6433 (which covers more than 25 GiB of
               data). We recommend keeping this well within the amount your
               backend storage and replication link are able to resync inside
               of about five minutes.

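               For example (minor number and value illustrative), the activity
               log of an attached device could be enlarged with:

                   drbdsetup disk-options 0 --al-extents 3389
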
           --al-updates {yes | no}
               With this parameter, the activity log can be turned off
               entirely (see the al-extents parameter). This will speed up
               writes because fewer meta-data writes will be necessary, but
               the entire device needs to be resynchronized upon recovery of a
               failed primary node. The default value for al-updates is yes.

           --disk-barrier,
           --disk-flushes,
           --disk-drain
               DRBD has three methods of handling the ordering of dependent
               write requests:

               disk-barrier
                   Use disk barriers to make sure that requests are written to
                   disk in the right order. Barriers ensure that all requests
                   submitted before a barrier make it to the disk before any
                   requests submitted after the barrier. This is implemented
                   using 'tagged command queuing' on SCSI devices and 'native
                   command queuing' on SATA devices. Only some devices and
                   device stacks support this method. The device mapper (LVM)
                   only supports barriers in some configurations.

                   Note that on systems which do not support disk barriers,
                   enabling this option can lead to data loss or corruption.
                   Until DRBD 8.4.1, disk-barrier was turned on if the I/O
                   stack below DRBD did support barriers. Kernels since
                   linux-2.6.36 (or 2.6.32 RHEL6) no longer make it possible
                   to detect if barriers are supported. Since drbd-8.4.2, this
                   option is off by default and needs to be enabled
                   explicitly.

               disk-flushes
                   Use disk flushes between dependent write requests, also
                   referred to as 'force unit access' by drive vendors. This
                   forces all data to disk. This option is enabled by default.

               disk-drain
                   Wait for the request queue to "drain" (that is, wait for
                   the requests to finish) before submitting a dependent write
                   request. This method requires that requests are stable on
                   disk when they finish. Before DRBD 8.0.9, this was the only
                   method implemented. This option is enabled by default. Do
                   not disable in production environments.

               From these three methods, drbd will use the first that is
               enabled and supported by the backing storage device. If all
               three of these options are turned off, DRBD will submit write
               requests without bothering about dependencies. Depending on the
               I/O stack, write requests can be reordered, and they can be
               submitted in a different order on different cluster nodes. This
               can result in data loss or corruption. Therefore, turning off
               all three methods of controlling write ordering is strongly
               discouraged.

               A general guideline for configuring write ordering is to use
               disk barriers or disk flushes when using ordinary disks (or an
               ordinary disk array) with a volatile write cache. On storage
               without cache or with a battery backed write cache, disk
               draining can be a reasonable choice.

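               As a sketch (minor number illustrative, and assuming drbdsetup
               accepts the --option=value form), barriers and flushes could be
               turned off on storage with a battery-backed write cache,
               leaving disk draining in effect:

                   drbdsetup disk-options 0 --disk-barrier=no --disk-flushes=no
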
           --disk-timeout
               If the lower-level device on which a DRBD device stores its
               data does not finish an I/O request within the defined
               disk-timeout, DRBD treats this as a failure. The lower-level
               device is detached, and the device's disk state advances to
               Diskless. If DRBD is connected to one or more peers, the failed
               request is passed on to one of them.

               This option is dangerous and may lead to kernel panic!

               "Aborting" requests, or force-detaching the disk, is intended
               for completely blocked/hung local backing devices which no
               longer complete requests at all, not even with error
               completions. In this situation, usually a hard-reset and
               failover is the only way out.

               By "aborting", basically faking a local error completion, we
               allow for a more graceful switchover by cleanly migrating
               services. Still, the affected node has to be rebooted "soon".

               By completing these requests, we allow the upper layers to
               re-use the associated data pages.

               If later the local backing device "recovers", and now DMAs some
               data from disk into the original request pages, in the best
               case it will just put random data into unused pages; but
               typically it will corrupt meanwhile completely unrelated data,
               causing all sorts of damage.

               This means that delayed successful completion, especially for
               READ requests, is a reason to panic(). We assume that a delayed
               *error* completion is OK, though we still will complain noisily
               about it.

               The default value of disk-timeout is 0, which stands for an
               infinite timeout. Timeouts are specified in units of 0.1
               seconds. This option is available since DRBD 8.3.12.

           --md-flushes
               Enable disk flushes and disk barriers on the meta-data device.
               This option is enabled by default. See the disk-flushes
               parameter.

           --on-io-error handler
               Configure how DRBD reacts to I/O errors on a lower-level
               device. The following policies are defined:

               pass_on
                   Change the disk status to Inconsistent, mark the failed
                   block as inconsistent in the bitmap, and retry the I/O
                   operation on a remote cluster node.

               call-local-io-error
                   Call the local-io-error handler (see the handlers section).

               detach
                   Detach the lower-level device and continue in diskless
                   mode.

           --read-balancing policy
               Distribute read requests among cluster nodes as defined by
               policy. The supported policies are prefer-local (the default),
               prefer-remote, round-robin, least-pending,
               when-congested-remote, 32K-striping, 64K-striping,
               128K-striping, 256K-striping, 512K-striping and 1M-striping.

               This option is available since DRBD 8.4.1.

           --resync-after minor
               Define that a device should only resynchronize after the
               specified other device. By default, no order between devices is
               defined, and all devices will resynchronize in parallel.
               Depending on the configuration of the lower-level devices, and
               the available network and disk bandwidth, this can slow down
               the overall resync process. This option can be used to form a
               chain or tree of dependencies among devices.

           --size size
               Specify the size of the lower-level device explicitly instead
               of determining it automatically. The device size must be
               determined once and is remembered for the lifetime of the
               device. In order to determine it automatically, all the
               lower-level devices on all nodes must be attached, and all
               nodes must be connected. If the size is specified explicitly,
               this is not necessary. The size value is assumed to be in units
               of sectors (512 bytes) by default.

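               For example (device names hypothetical), a suffix overrides the
               default sector scale, so a 100 GiB size could be forced at
               attach time with:

                   drbdsetup attach 0 /dev/sda7 /dev/sda7 internal --size 100G
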
           --discard-zeroes-if-aligned {yes | no}
               There are several aspects to discard/trim/unmap support on
               linux block devices. Even if discard is supported in general,
               it may fail silently, or may partially ignore discard requests.
               Devices also announce whether reading from unmapped blocks
               returns defined data (usually zeroes), or undefined data
               (possibly old data, possibly garbage).

               If DRBD is backed by devices with differing discard
               characteristics on different nodes, discards may lead to data
               divergence (old data or garbage left over on one backend,
               zeroes due to unmapped areas on the other backend). Online
               verify would then potentially report tons of spurious
               differences. While probably harmless for most use cases (fstrim
               on a file system), DRBD cannot have that.

               To play safe, we have to disable discard support if our local
               backend (on a Primary) does not support
               "discard_zeroes_data=true". We also have to translate discards
               to explicit zero-out on the receiving side, unless the
               receiving side (Secondary) supports "discard_zeroes_data=true",
               thereby allocating areas that were supposed to be unmapped.

               There are some devices (notably LVM/DM thin provisioning) that
               are capable of discard, but announce discard_zeroes_data=false.
               In the case of DM-thin, discards aligned to the chunk size will
               be unmapped, and reading from unmapped sectors will return
               zeroes. However, unaligned partial head or tail areas of
               discard requests will be silently ignored.

               If we now add a helper to explicitly zero-out these unaligned
               partial areas, while passing on the discard of the aligned full
               chunks, we effectively achieve discard_zeroes_data=true on such
               devices.

               Setting discard-zeroes-if-aligned to yes will allow DRBD to use
               discards, and to announce discard_zeroes_data=true, even on
               backends that announce discard_zeroes_data=false.

               Setting discard-zeroes-if-aligned to no will cause DRBD to
               always fall back to zero-out on the receiving side, and to not
               even announce discard capabilities on the Primary, if the
               respective backend announces discard_zeroes_data=false.

               We used to ignore the discard_zeroes_data setting completely.
               To not break established and expected behaviour, and suddenly
               cause fstrim on thin-provisioned LVs to run out-of-space
               instead of freeing up space, the default value is yes.

               This option is available since 8.4.7.

           --rs-discard-granularity byte
               When rs-discard-granularity is set to a non-zero, positive
               value, DRBD tries to do resync operations in requests of this
               size. If such a block contains only zero bytes on the sync
               source node, the sync target node will issue a
               discard/trim/unmap command for the area.

               The value is constrained by the discard granularity of the
               backing block device. If rs-discard-granularity is not a
               multiple of the discard granularity of the backing block
               device, DRBD rounds it up. The feature only gets active if the
               backing block device reads back zeroes after a discard command.

               The default value is 0. This option is available since 8.4.7.

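           For example (minor number and device names hypothetical), a
           lower-level device with internal metadata could be attached with:

               drbdsetup attach 0 /dev/sda7 /dev/sda7 internal
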
       drbdsetup peer-device-options resource peer_node_id volume
           These are options that affect the peer's device.

           --c-delay-target delay_target,
           --c-fill-target fill_target,
           --c-max-rate max_rate,
           --c-plan-ahead plan_time
               Dynamically control the resync speed. This mechanism is enabled
               by setting the c-plan-ahead parameter to a positive value. The
               goal is to either fill the buffers along the data path with a
               defined amount of data if c-fill-target is defined, or to have
               a defined delay along the path if c-delay-target is defined.
               The maximum bandwidth is limited by the c-max-rate parameter.

               The c-plan-ahead parameter defines how fast drbd adapts to
               changes in the resync speed. It should be set to five times the
               network round-trip time or more. Common values for
               c-fill-target for "normal" data paths range from 4K to 100K. If
               drbd-proxy is used, it is advised to use c-delay-target instead
               of c-fill-target. The c-delay-target parameter is used if the
               c-fill-target parameter is undefined or set to 0. The
               c-delay-target parameter should be set to five times the
               network round-trip time or more. The c-max-rate option should
               be set to either the bandwidth available between the DRBD-hosts
               and the machines hosting DRBD-proxy, or to the available disk
               bandwidth.

               The default values of these parameters are: c-plan-ahead = 20
               (in units of 0.1 seconds), c-fill-target = 0 (in units of
               sectors), c-delay-target = 1 (in units of 0.1 seconds), and
               c-max-rate = 102400 (in units of KiB/s).

               Dynamic resync speed control is available since DRBD 8.3.9.

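               As a sketch (resource name, peer node id, volume number, and
               values illustrative), the dynamic controller could be tuned
               with:

                   drbdsetup peer-device-options r0 1 0 --c-plan-ahead 20 \
                       --c-fill-target 100K --c-max-rate 100M
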
           --c-min-rate min_rate
               A node which is primary and sync-source has to schedule
               application I/O requests and resync I/O requests. The
               c-min-rate parameter limits how much bandwidth is available for
               resync I/O; the remaining bandwidth is used for application
               I/O.

               A c-min-rate value of 0 means that there is no limit on the
               resync I/O bandwidth. This can slow down application I/O
               significantly. Use a value of 1 (1 KiB/s) for the lowest
               possible resync rate.

               The default value of c-min-rate is 4096, in units of KiB/s.

           --resync-rate rate
               Define how much bandwidth DRBD may use for resynchronizing.
               DRBD allows "normal" application I/O even during a resync. If
               the resync takes up too much bandwidth, application I/O can
               become very slow. This parameter allows you to avoid that.
               Please note that this option only works when the dynamic resync
               controller is disabled.

       drbdsetup check-resize minor
           Remember the current size of the lower-level device of the
           specified replicated device. Used by drbdadm. The size information
           is stored in the file /var/lib/drbd/drbd-minor-minor.lkbd.

       drbdsetup new-peer resource peer_node_id,
       drbdsetup net-options resource peer_node_id
           The new-peer command creates a connection within a resource. The
           resource must have been created with drbdsetup new-resource. The
           net-options command changes the network options of an existing
           connection. Before a connection can be activated with the connect
           command, at least one path needs to be added with the new-path
           command. Available options:

           --after-sb-0pri policy
               Define how to react if a split-brain scenario is detected and
               none of the two nodes is in primary role. (We detect
               split-brain scenarios when two nodes connect; split-brain
               decisions are always between two nodes.) The defined policies
               are:

               disconnect
                   No automatic resynchronization; simply disconnect.

               discard-younger-primary,
               discard-older-primary
                   Resynchronize from the node which became primary first
                   (discard-younger-primary) or last (discard-older-primary).
                   If both nodes became primary independently, the
                   discard-least-changes policy is used.

               discard-zero-changes
                   If only one of the nodes wrote data since the split brain
                   situation was detected, resynchronize from this node to the
                   other. If both nodes wrote data, disconnect.

               discard-least-changes
                   Resynchronize from the node with more modified blocks.

               discard-node-nodename
                   Always resynchronize to the named node.

           --after-sb-1pri policy
               Define how to react if a split-brain scenario is detected, with
               one node in primary role and one node in secondary role. (We
               detect split-brain scenarios when two nodes connect, so
               split-brain decisions are always between two nodes.) The
               defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               consensus
                   Discard the data on the secondary node if the after-sb-0pri
                   algorithm would also discard the data on the secondary
                   node. Otherwise, disconnect.

               violently-as0p
                   Always take the decision of the after-sb-0pri algorithm,
                   even if it causes an erratic change of the primary's view
                   of the data. This is only useful if a single-node file
                   system (i.e., not OCFS2 or GFS) with the
                   allow-two-primaries flag is used. This option can cause the
                   primary node to crash, and should not be used.

               discard-secondary
                   Discard the data on the secondary node.

               call-pri-lost-after-sb
                   Always take the decision of the after-sb-0pri algorithm. If
                   the decision is to discard the data on the primary node,
                   call the pri-lost-after-sb handler on the primary node.

           --after-sb-2pri policy
               Define how to react if a split-brain scenario is detected and
               both nodes are in primary role. (We detect split-brain
               scenarios when two nodes connect, so split-brain decisions are
               always between two nodes.) The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               violently-as0p
                   See the violently-as0p policy for after-sb-1pri.

               call-pri-lost-after-sb
                   Call the pri-lost-after-sb helper program on one of the
                   machines unless that machine can demote to secondary. The
                   helper program is expected to reboot the machine, which
                   brings the node into a secondary role. Which machine runs
                   the helper program is determined by the after-sb-0pri
                   strategy.

           --allow-two-primaries
               The most common way to configure DRBD devices is to allow only
               one node to be primary (and thus writable) at a time.

               In some scenarios it is preferable to allow two nodes to be
               primary at once; a mechanism outside of DRBD then must make
               sure that writes to the shared, replicated device happen in a
               coordinated way. This can be done with a shared-storage cluster
               file system like OCFS2 and GFS, or with virtual machine images
               and a virtual machine manager that can migrate virtual machines
               between physical machines.

               The allow-two-primaries parameter tells DRBD to allow two nodes
               to be primary at the same time. Never enable this option when
               using a non-distributed file system; otherwise, data corruption
               and node crashes will result!

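               As a sketch (resource name and peer node id illustrative, and
               assuming the --option=value form is accepted), dual-primary
               operation could be enabled with:

                   drbdsetup net-options r0 1 --allow-two-primaries=yes
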
           --always-asbp
               Normally the automatic after-split-brain policies are only used
               if current states of the UUIDs do not indicate the presence of
               a third node.

               With this option you request that the automatic
               after-split-brain policies are used as long as the data sets of
               the nodes are somehow related. This might cause a full sync, if
               the UUIDs indicate the presence of a third node. (Or double
               faults led to strange UUID sets.)

           --connect-int time
               As soon as a connection between two nodes is configured with
               drbdsetup connect, DRBD immediately tries to establish the
               connection. If this fails, DRBD waits for connect-int seconds
               and then repeats. The default value of connect-int is 10
               seconds.

           --cram-hmac-alg hash-algorithm
               Configure the hash-based message authentication code (HMAC) or
               secure hash algorithm to use for peer authentication. The
               kernel supports a number of different algorithms, some of which
               may be loadable as kernel modules. See the shash algorithms
               listed in /proc/crypto. By default, cram-hmac-alg is unset.
               Peer authentication also requires a shared-secret to be
               configured.

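               For example (resource name, peer node id, and secret
               illustrative; sha1 must be available in /proc/crypto), peer
               authentication could be configured with:

                   drbdsetup net-options r0 1 --cram-hmac-alg sha1 \
                       --shared-secret FooFunFactory
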
           --csums-alg hash-algorithm
               Normally, when two nodes resynchronize, the sync target
               requests a piece of out-of-sync data from the sync source, and
               the sync source sends the data. With many usage patterns, a
               significant number of those blocks will actually be identical.

               When a csums-alg algorithm is specified, when requesting a
               piece of out-of-sync data, the sync target also sends along a
               hash of the data it currently has. The sync source compares
               this hash with its own version of the data. It sends the sync
               target the new data if the hashes differ, and tells it that the
               data are the same otherwise. This reduces the network bandwidth
               required, at the cost of higher cpu utilization and possibly
               increased I/O on the sync target.

               The csums-alg can be set to one of the secure hash algorithms
               supported by the kernel; see the shash algorithms listed in
               /proc/crypto. By default, csums-alg is unset.

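               For example (names illustrative; the algorithm must appear in
               /proc/crypto), checksum-based resync could be enabled with:

                   drbdsetup net-options r0 1 --csums-alg md5
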
           --csums-after-crash-only
               Enabling this option (and csums-alg, above) makes it possible
               to use the checksum-based resync only for the first resync
               after a primary crash, but not for later "network hiccups".

               In most cases, blocks that are marked as need-to-be-resynced
               are in fact changed, so calculating checksums, and both reading
               and writing the blocks on the resync target, is all effective
               overhead.

               The advantage of checksum-based resync is mostly after primary
               crash recovery, where the recovery marked larger areas (those
               covered by the activity log) as need-to-be-resynced, just in
               case. Introduced in 8.4.5.

           --data-integrity-alg alg
               DRBD normally relies on the data integrity checks built into
               the TCP/IP protocol, but if a data integrity algorithm is
               configured, it will additionally use this algorithm to make
               sure that the data received over the network match what the
               sender has sent. If a data integrity error is detected, DRBD
               will close the network connection and reconnect, which will
               trigger a resync.

               The data-integrity-alg can be set to one of the secure hash
               algorithms supported by the kernel; see the shash algorithms
               listed in /proc/crypto. By default, this mechanism is turned
               off.

               Because of the CPU overhead involved, we recommend not to use
               this option in production environments. Also see the notes on
               data integrity below.

           --fencing fencing_policy
               Fencing is a preventive measure to avoid situations where both
               nodes are primary and disconnected. This is also known as a
               split-brain situation. DRBD supports the following fencing
               policies:

               dont-care
                   No fencing actions are taken. This is the default policy.

               resource-only
                   If a node becomes a disconnected primary, it tries to fence
                   the peer. This is done by calling the fence-peer handler.
                   The handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there.

               resource-and-stonith
                   If a node becomes a disconnected primary, it freezes all
                   its IO operations and calls its fence-peer handler. The
                   fence-peer handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there. In case it cannot do that, it should stonith
                   the peer. IO is resumed as soon as the situation is
                   resolved. In case the fence-peer handler fails, I/O can be
                   resumed manually with 'drbdadm resume-io'.

           --ko-count number
               If a secondary node fails to complete a write request in
               ko-count times the timeout parameter, it is excluded from the
               cluster. The primary node then sets the connection to this
               secondary node to Standalone. To disable this feature, you
               should explicitly set it to 0; defaults may change between
               versions.

           --max-buffers number
               Limits the memory usage per DRBD minor device on the receiving
               side, or for internal buffers during resync or online-verify.
               Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum
               possible setting is hard coded to 32 (=128 KiB). These buffers
               are used to hold data blocks while they are written to/read
               from disk. To avoid possible distributed deadlocks on
               congestion, this setting is used as a throttle threshold rather
               than a hard limit. Once more than max-buffers pages are in use,
               further allocation from this pool is throttled. You want to
               increase max-buffers if you cannot saturate the IO backend on
               the receiving side.

           --max-epoch-size number
               Define the maximum number of write requests DRBD may issue
               before issuing a write barrier. The default value is 2048, with
               a minimum of 1 and a maximum of 20000. Setting this parameter
               to a value below 10 is likely to decrease performance.

           --on-congestion policy,
           --congestion-fill threshold,
           --congestion-extents threshold
               By default, DRBD blocks when the TCP send queue is full. This
               prevents applications from generating further write requests
               until more buffer space becomes available again.

               When DRBD is used together with DRBD-proxy, it can be better to
               use the pull-ahead on-congestion policy, which can switch DRBD
               into ahead/behind mode before the send queue is full. DRBD then
               records the differences between itself and the peer in its
               bitmap, but it no longer replicates them to the peer. When
               enough buffer space becomes available again, the node
               resynchronizes with the peer and switches back to normal
               replication.

               This has the advantage of not blocking application I/O even
               when the queues fill up, and the disadvantage that peer nodes
               can fall behind much further. Also, while resynchronizing, peer
               nodes will become inconsistent.

               The available congestion policies are block (the default) and
               pull-ahead. The congestion-fill parameter defines how much data
               is allowed to be "in flight" in this connection. The default
               value is 0, which disables this mechanism of congestion
               control, with a maximum of 10 GiBytes. The congestion-extents
               parameter defines how many bitmap extents may be active before
               switching into ahead/behind mode, with the same default and
               limits as the al-extents parameter. The congestion-extents
               parameter is effective only when set to a value smaller than
               al-extents.

               Ahead/behind mode is available since DRBD 8.3.10.

           --ping-int interval
               When the TCP/IP connection to a peer is idle for more than
               ping-int seconds, DRBD will send a keep-alive packet to make
               sure that a failed peer or network connection is detected
               reasonably soon. The default value is 10 seconds, with a
               minimum of 1 and a maximum of 120 seconds. The unit is seconds.

           --ping-timeout timeout
               Define the timeout for replies to keep-alive packets. If the
               peer does not reply within ping-timeout, DRBD will close and
               try to reestablish the connection. The default value is 0.5
               seconds, with a minimum of 0.1 seconds and a maximum of 3
               seconds. The unit is tenths of a second.

           --socket-check-timeout timeout
               In setups involving a DRBD-proxy and connections that
               experience a lot of buffer-bloat, it might be necessary to set
               ping-timeout to an unusually high value. By default, DRBD uses
               the same value to wait if a newly established TCP connection is
               stable. Since the DRBD-proxy is usually located in the same
               data center, such a long wait time may hinder DRBD's connect
               process.

               In such setups, socket-check-timeout should be set to at least
               the round-trip time between DRBD and DRBD-proxy, i.e. in most
               cases to 1.

               The default unit is tenths of a second; the default value is 0
               (which causes DRBD to use the value of ping-timeout instead).
               Introduced in 8.4.5.

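               As a sketch (names and values illustrative), a buffer-bloated
               proxy setup might use a high ping-timeout but a short socket
               check:

                   drbdsetup net-options r0 1 --ping-timeout 30 \
                       --socket-check-timeout 1
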
           --protocol name
               Use the specified protocol on this connection. The supported
               protocols are:

               A
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk and the TCP/IP send buffer.

               B
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk, and all peers have acknowledged the
                   receipt of the write requests.

               C
                   Writes to the DRBD device complete as soon as they have
                   reached the local and all remote disks.

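               For example (resource name and peer node id illustrative),
               fully synchronous replication could be requested when creating
               the connection:

                   drbdsetup new-peer r0 1 --protocol C
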
           --rcvbuf-size size
               Configure the size of the TCP/IP receive buffer. A value of 0
               (the default) causes the buffer size to adjust dynamically.
               This parameter usually does not need to be set, but it can be
               set to a value up to 10 MiB. The default unit is bytes.

           --rr-conflict policy
               This option helps to solve the cases when the outcome of the
               resync decision is incompatible with the current role
               assignment in the cluster. The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               retry-connect
                   Disconnect now, and retry to connect immediately
                   afterwards.

               violently
                   Resync to the primary node is allowed, violating the
                   assumption that data on a block device are stable for one
                   of the nodes.  Do not use this option, it is dangerous.

               call-pri-lost
                   Call the pri-lost handler on one of the machines. The
                   handler is expected to reboot the machine, which puts it
                   into secondary role.

           --shared-secret secret
               Configure the shared secret used for peer authentication. The
               secret is a string of up to 64 characters. Peer authentication
               also requires the cram-hmac-alg parameter to be set.

           --sndbuf-size size
               Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
               / 8.2.7, a value of 0 (the default) causes the buffer size to
               adjust dynamically. Values below 32 KiB are harmful to the
               throughput on this connection. Large buffer sizes can be useful
               especially when protocol A is used over high-latency networks;
               the maximum value supported is 10 MiB.

           --tcp-cork
               By default, DRBD uses the TCP_CORK socket option to prevent the
               kernel from sending partial messages; this results in fewer and
               bigger packets on the network. Some network stacks can perform
               worse with this optimization. On these, the tcp-cork parameter
               can be used to turn this optimization off.

           --timeout time
               Define the timeout for replies over the network: if a peer node
               does not send an expected reply within the specified timeout,
               it is considered dead and the TCP/IP connection is closed. The
               timeout value must be lower than connect-int and lower than
               ping-int. The default is 6 seconds; the value is specified in
               tenths of a second.

           --use-rle
               Each replicated device on a cluster node has a separate bitmap
               for each of its peer devices. The bitmaps are used for tracking
               the differences between the local and peer device: depending on
               the cluster state, a disk range can be marked as different from
               the peer in the device's bitmap, in the peer device's bitmap,
               or in both bitmaps. When two cluster nodes connect, they
               exchange each other's bitmaps, and they each compute the union
               of the local and peer bitmap to determine the overall
               differences.

               Bitmaps of very large devices are also relatively large, but
               they usually compress very well using run-length encoding. This
               can save time and bandwidth for the bitmap transfers.

               The use-rle parameter determines if run-length encoding should
               be used. It is on by default since DRBD 8.4.0.

           --verify-alg hash-algorithm
               Online verification (drbdadm verify) computes and compares
               checksums of disk blocks (i.e., hash values) in order to detect
               if they differ. The verify-alg parameter determines which
               algorithm to use for these checksums. It must be set to one of
               the secure hash algorithms supported by the kernel before
               online verify can be used; see the shash algorithms listed in
               /proc/crypto.

               We recommend scheduling online verifications regularly during
               low-load periods, for example once a month. Also see the notes
               on data integrity below.

       drbdsetup new-path resource peer_node_id local-addr remote-addr
           The new-path command creates a path within a connection. The
           connection must have been created with drbdsetup new-peer.
           local-addr and remote-addr refer to the local and remote protocol,
           network address, and port in the format
           [address-family:]address[:port]. The address families ipv4, ipv6,
           ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
           (Infiniband Sockets Direct Protocol), and sci are supported (sci is
           an alias for ssocks). If no address family is specified, ipv4 is
           assumed. For all address families except ipv6, the address uses
           IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
           is enclosed in brackets and uses IPv6 address notation (for
           example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.

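           For example (addresses and port illustrative), an IPv4 path could
           be created with:

               drbdsetup new-path r0 1 192.168.31.1:7789 192.168.31.2:7789
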
       drbdsetup connect resource peer_node_id
           The connect command activates a connection. That means that the
           DRBD driver will bind and listen on all local addresses of the
           connection's paths. It will begin to try to establish one or more
           paths of the connection. Available options:

           --tentative
               Only determine if a connection to the peer can be established
               and if a resync is necessary (and in which direction) without
               actually establishing the connection or starting the resync.
               Check the system log to see what DRBD would do without the
               --tentative option.

           --discard-my-data
               Discard the local data and resynchronize with the peer that has
               the most up-to-date data. Use this option to manually recover
               from a split-brain situation.

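           For example (resource name and peer node id illustrative), on the
           node whose data should be discarded, split-brain recovery could be
           started with:

               drbdsetup connect r0 1 --discard-my-data
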
       drbdsetup del-peer resource peer_node_id
           The del-peer command removes a connection from a resource.

       drbdsetup del-path resource peer_node_id local-addr remote-addr
           The del-path command removes a path from a connection. Please note
           that it fails if the path is necessary to keep a connected
           connection intact. In order to remove all paths, disconnect the
           connection first.

       drbdsetup cstate resource peer_node_id
           Show the current state of a connection. The connection is
           identified by the node-id of the peer; see the drbdsetup connect
           command.

       drbdsetup del-minor minor
           Remove a replicated device. No lower-level device may be attached;
           see drbdsetup detach.

       drbdsetup del-resource resource
           Remove a resource. All volumes and connections must be removed
           first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
           drbdsetup down can be used to remove a resource together with all
           its volumes and connections.

       drbdsetup detach minor
           Detach the lower-level device of a replicated device. Available
           options:

           --force
               Force the detach and return immediately. This puts the
               lower-level device into failed state until all pending I/O has
               completed, and then detaches the device. Any I/O not yet
               submitted to the lower-level device (for example, because I/O
               on the device was suspended) is assumed to have failed.

       drbdsetup disconnect resource peer_node_id
           Remove a connection to a peer host. The connection is identified by
           the node-id of the peer; see the drbdsetup connect command.

       drbdsetup down {resource | all}
           Take a resource down by removing all volumes, connections, and the
           resource itself.

       drbdsetup dstate minor
           Show the current disk state of a lower-level device.

       drbdsetup events2 {resource | all}
           Show the current state of all configured DRBD objects, followed by
           all changes to the state.

           The output format is meant to be human as well as machine
           readable. The line starts with a word that indicates the kind of
           event: exists for an existing object; create, destroy, and change
           if an object is created, destroyed, or changed; or call or
           response if an event handler is called or it returns. The second
           word indicates the object the event applies to: resource, device,
           connection, peer-device, helper, or a dash (-) to indicate that
           the current state has been dumped completely.

           The remaining words identify the object and describe the state
           that the object is in. Available options:

           --now
               Terminate after reporting the current state. The default is to
               continuously listen and report state changes.

           --poll
               This option is ignored unless --now is also given. In
               combination with --now, it prints the current state once and
               then reads on stdin. If an 'n' is read, this triggers another
               run. Newlines are ignored. Every other input terminates the
               command.

           --statistics
               Include statistics in the output.

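           For example, a one-shot dump of all objects including statistics
           could be requested with:

               drbdsetup events2 all --now --statistics
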
       drbdsetup get-gi resource peer_node_id volume
           Show the data generation identifiers for a device on a particular
           connection. The device is identified by its volume number. The
           connection is identified by its endpoints; see the drbdsetup
           connect command.

           The output consists of the current UUID, bitmap UUID, and the first
           two history UUIDs, followed by a set of flags. The current UUID and
           history UUIDs are device specific; the bitmap UUID and flags are
           peer device specific. This command only shows the first two history
           UUIDs. Internally, DRBD maintains one history UUID for each
           possible peer device.

       drbdsetup invalidate minor
           Replace the local data of a device with that of a peer. All the
           local data will be marked out-of-sync, and a resync with the
           specified peer device will be initiated.

       drbdsetup invalidate-remote resource peer_node_id volume
           Replace a peer device's data of a resource with the local data. The
           peer device's data will be marked out-of-sync, and a resync from
           the local node to the specified peer will be initiated.

       drbdsetup new-current-uuid minor
           Generate a new current UUID and rotate all other UUID values. This
           has at least two use cases, namely to skip the initial sync, and to
           reduce network bandwidth when starting in a single node
           configuration and then later (re-)integrating a remote site.

           Available option:

           --clear-bitmap
               Clears the sync bitmap in addition to generating a new current
               UUID.

           This can be used to skip the initial sync, if you want to start
           from scratch. This use case only works on "Just Created" meta
           data. Necessary steps:

            1. On both nodes, initialize meta data and configure the device.

               drbdadm create-md --force res/volume-number

            2. They need to do the initial handshake, so they know their
               sizes.

               drbdadm up res

            3. They are now Connected Secondary/Secondary
               Inconsistent/Inconsistent. Generate a new current-uuid and
               clear the dirty bitmap.

               drbdadm --clear-bitmap new-current-uuid res

            4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
               Make one side primary and create a file system.

               drbdadm primary res

               mkfs -t fs-type $(drbdadm sh-dev res)

           One obvious side-effect is that the replica is full of old garbage
           (unless you made them identical using other means), so any
           online-verify is expected to find any number of out-of-sync blocks.

           You must not use this on pre-existing data!  Even though it may
           appear to work at first glance, once you switch to the other node,
           your data is toast, as it never got replicated. So do not leave out
           the mkfs (or equivalent).

           This can also be used to shorten the initial resync of a cluster
           where the second node is added after the first node has gone into
           production, by means of disk shipping. This use case works on
           disconnected devices only; the device may be in primary or
           secondary role.

           The necessary steps on the current active server are:

            1. drbdsetup new-current-uuid --clear-bitmap minor

            2. Take the copy of the current active server. E.g. by pulling a
               disk out of the RAID1 controller, or by copying with dd. You
               need to copy the actual data, and the meta data.

            3. drbdsetup new-current-uuid minor

           Now add the disk to the new secondary node, and join it to the
           cluster. You will get a resync of the parts that were changed
           since the first call to drbdsetup in step 1.

       drbdsetup new-minor resource minor volume
           Create a new replicated device within a resource. The command
           creates a block device inode for the replicated device (by default,
           /dev/drbdminor). The volume number identifies the device within the
           resource.

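           For example (names and numbers illustrative), together with
           new-resource (described next), a resource with one volume backed by
           minor 0 could be set up with:

               drbdsetup new-resource r0 1
               drbdsetup new-minor r0 0 0
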
       drbdsetup new-resource resource node_id,
       drbdsetup resource-options resource
           The new-resource command creates a new resource. The
           resource-options command changes the resource options of an
           existing resource. Available options:

           --auto-promote bool-value
               A resource must be promoted to primary role before any of its
               devices can be mounted or opened for writing.

               Before DRBD 9, this could only be done explicitly ("drbdadm
               primary"). Since DRBD 9, the auto-promote parameter allows a
               resource to be automatically promoted to primary role when one
               of its devices is mounted or opened for writing. As soon as all
               devices are unmounted or closed with no more remaining users,
               the role of the resource changes back to secondary.

               Automatic promotion only succeeds if the cluster state allows
               it (that is, if an explicit drbdadm primary command would
               succeed). Otherwise, mounting or opening the device fails as it
               already did before DRBD 9: the mount(2) system call fails with
               errno set to EROFS (Read-only file system); the open(2) system
               call fails with errno set to EMEDIUMTYPE (wrong medium type).

               Irrespective of the auto-promote parameter, if a device is
               promoted explicitly (drbdadm primary), it also needs to be
               demoted explicitly (drbdadm secondary).

               The auto-promote parameter is available since DRBD 9.0.0, and
               defaults to yes.

           --cpu-mask cpu-mask
               Set the cpu affinity mask for DRBD kernel threads. The cpu mask
               is specified as a hexadecimal number. The default value is 0,
               which lets the scheduler decide which kernel threads run on
               which CPUs. CPU numbers in cpu-mask which do not exist in the
               system are ignored.

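               For example (resource name illustrative), the mask 3 (binary
               11) would pin DRBD kernel threads to CPUs 0 and 1:

                   drbdsetup resource-options r0 --cpu-mask 3
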
           --on-no-data-accessible policy
               Determine how to deal with I/O requests when the requested data
               is not available locally or remotely (for example, when all
               disks have failed). The defined policies are:

               io-error
                   System calls fail with errno set to EIO.

               suspend-io
                   The resource suspends I/O. I/O can be resumed by
                   (re)attaching the lower-level device, by connecting to a
                   peer which has access to the data, or by forcing DRBD to
                   resume I/O with drbdadm resume-io res. When no data is
                   available, forcing I/O to resume will result in the same
                   behavior as the io-error policy.

               This setting is available since DRBD 8.3.9; the default policy
               is io-error.

1012           --peer-ack-window value
1013               On each node and for each device, DRBD maintains a bitmap of
1014               the differences between the local and remote data for each peer
1015               device. For example, in a three-node setup (nodes A, B, C) each
1016               with a single device, every node maintains one bitmap for each
1017               of its peers.
1018
1019               When nodes receive write requests, they know how to update the
1020               bitmaps for the writing node, but not how to update the bitmaps
1021               between themselves. In this example, when a write request
1022               propagates from node A to B and C, nodes B and C know that they
1023               have the same data as node A, but not whether or not they both
1024               have the same data.
1025
1026               As a remedy, the writing node occasionally sends peer-ack
1027               packets to its peers which tell them which state they are in
1028               relative to each other.
1029
1030               The peer-ack-window parameter specifies how much data a primary
1031               node may send before sending a peer-ack packet. A low value
1032               causes increased network traffic; a high value causes less
1033               network traffic but higher memory consumption on secondary
1034               nodes and higher resync times between the secondary nodes after
1035               primary node failures. (Note: peer-ack packets may be sent due
1036               to other reasons as well, e.g. membership changes or expiry of
1037               the peer-ack-delay timer.)
1038
1039               The default value for peer-ack-window is 2 MiB; the default
1040               unit is sectors. This option is available since 9.0.0.
1041
1042           --peer-ack-delay expiry-time
1043               If after the last finished write request no new write request
1044               gets issued for expiry-time, then a peer-ack packet is sent. If
1045               a new write request is issued before the timer expires, the
1046               timer gets reset to expiry-time. (Note: peer-ack packets may be
1047               sent due to other reasons as well, e.g. membership changes or
1048               the peer-ack-window option.)
1049
1050               This parameter may influence resync behavior on remote nodes.
1051               Peer nodes need to wait until they receive a peer-ack
1052               before releasing a lock on an AL-extent. Resync operations
1053               between peers may need to wait for these locks.
1054
1055               The default value for peer-ack-delay is 100 milliseconds;
1056               the default unit is milliseconds. This option is available
1057               since 9.0.0.
1058
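               For example, to trade some extra network traffic for shorter
               resync times between secondaries, one might halve both the
               window and the delay. A sketch, assuming a resource named r0
               (2048 sectors of 512 bytes correspond to 1 MiB):

                   # drbdsetup resource-options r0 --peer-ack-window=2048
                   # drbdsetup resource-options r0 --peer-ack-delay=50
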
1059           --quorum value
1060               When activated, a cluster partition requires quorum in
1061               order to modify the replicated data set. A node in the
1062               cluster partition can only be promoted to primary if the
1063               partition has quorum. Every node that has a disk and is
1064               directly connected to the node to be promoted counts
1065               towards quorum. If a primary node should execute a write
1066               request while its partition has lost quorum, it will
1067               freeze I/O or reject the write request with an error,
1068               depending on the on-no-quorum setting. Upon losing quorum,
1069               a primary always invokes the quorum-lost handler, which is
1070               intended for notification only; its return code is ignored.
1071
1072               The option's value may be set to off, majority, all, or a
1073               numeric value. If you set it to a numeric value, make sure
1074               that the value is greater than half of your number of
1075               nodes. Quorum is a mechanism to avoid data divergence; it
1076               may be used instead of fencing when there are more than
1077               two replicas. It defaults to off.
1078
1079               If all missing nodes are marked as outdated, a partition
1080               always has quorum, no matter how small it is. That is, if
1081               you disconnect all secondary nodes gracefully, a single
1082               primary continues to operate. The moment a single secondary
1083               is lost, it has to be assumed to form a partition with all
1084               the missing outdated nodes. If the local partition might
1085               then be smaller than the other, quorum is lost immediately.
1086
1087               If you want to allow permanently diskless nodes to gain
1088               quorum, do not use majority or all. It is recommended to
1089               specify an absolute number instead, since DRBD's heuristic
1090               for determining the total number of diskful nodes in the
1091               cluster is unreliable.
1092
1093               The quorum implementation is available starting with the DRBD
1094               kernel driver version 9.0.7.
1095
1096           --quorum-minimum-redundancy value
1097               This option sets the minimal required number of nodes with an
1098               UpToDate disk to allow the partition to gain quorum. This is a
1099               different requirement than the plain quorum option expresses.
1100
1101               The option's value may be set to off, majority, all, or a
1102               numeric value. If you set it to a numeric value, make sure
1103               that the value is greater than half of your number of nodes.
1104
1105               If you want to allow permanently diskless nodes to gain
1106               quorum, do not use majority or all. It is recommended to
1107               specify an absolute number instead, since DRBD's heuristic
1108               for determining the total number of diskful nodes in the
1109               cluster is unreliable.
1110
1111               This option is available starting with the DRBD kernel driver
1112               version 9.0.10.
1113
1114           --on-no-quorum {io-error | suspend-io}
1115               By default, DRBD freezes I/O on a device that has lost
1116               quorum. Setting on-no-quorum to io-error causes it to
1117               complete all I/O operations with an error if quorum is lost.
1118
1119               The on-no-quorum option is available starting with the DRBD
1120               kernel driver version 9.0.8.
1121
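               As a combined sketch, a three-node cluster that should fail
               I/O with an error as soon as a majority of nodes becomes
               unreachable could be configured as follows (the resource name
               r0 is hypothetical):

                   # drbdsetup resource-options r0 --quorum=majority \
                         --on-no-quorum=io-error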
1122
1123       drbdsetup outdate minor
1124           Mark the data on a lower-level device as outdated. This is used for
1125           fencing, and prevents the resource the device is part of from
1126           becoming primary in the future. See the --fencing disk option.
1127
1128       drbdsetup pause-sync resource peer_node_id volume
1129           Stop resynchronizing between a local and a peer device by setting
1130           the local pause flag. The resync can only resume if the pause flags
1131           on both sides of a connection are cleared.
1132
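           For example, to temporarily pause the resync of volume 0 towards
           the peer with node id 1 and resume it later (the resource name r0
           is hypothetical):

               # drbdsetup pause-sync r0 1 0
               # drbdsetup resume-sync r0 1 0
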
1133       drbdsetup primary resource
1134           Change the role of a node in a resource to primary. This allows the
1135           replicated devices in this resource to be mounted or opened for
1136           writing. Available options:
1137
1138           --overwrite-data-of-peer
1139               This option is an alias for the --force option.
1140
1141           --force
1142               Force the resource to become primary even if some devices are
1143               not guaranteed to have up-to-date data. This option is used to
1144               turn one of the nodes in a newly created cluster into the
1145               primary node, or when manually recovering from a disaster.
1146
1147               Note that this can lead to split-brain scenarios. Also, when
1148               forcefully turning an inconsistent device into an up-to-date
1149               device, it is highly recommended to use any integrity checks
1150               available (such as a filesystem check) to make sure that the
1151               device can at least be used without crashing the system.
1152
1153           Note that DRBD usually only allows one node in a cluster to be in
1154           primary role at any time; this allows DRBD to coordinate access to
1155           the devices in a resource across nodes. The --allow-two-primaries
1156           network option changes this; in that case, a mechanism outside of
1157           DRBD needs to coordinate device access.
1158
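           For example, to turn one node of a freshly created cluster into
           the primary node and thereby select its data as the base of the
           initial synchronization (the resource name r0 is hypothetical):

               # drbdsetup primary r0 --force
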
1159       drbdsetup resize minor
1160           Reexamine the size of the lower-level devices of a replicated
1161           device on all nodes. This command is called after the lower-level
1162           devices on all nodes have been grown to adjust the size of the
1163           replicated device. Available options:
1164
1165           --assume-peer-has-space
1166               Resize the device even if some of the peer devices are not
1167               connected at the moment. DRBD will try to resize the peer
1168               devices when they next connect. It will refuse to connect to a
1169               peer device which is too small.
1170
1171           --assume-clean
1172               Do not resynchronize the added disk space; instead, assume that
1173               it is identical on all nodes. This option can be used when the
1174               disk space is uninitialized and differences do not matter, or
1175               when it is known to be identical on all nodes. See the
1176               drbdsetup verify command.
1177
1178           --size val
1179               This option can be used to shrink the usable size of a
1180               drbd device online. It is the user's responsibility to
1181               make sure that a file system on the device is not truncated.
1182
1183           --al-stripes val --al-stripe-size-kB val
1184               These options may be used to change the layout of the
1185               activity log online. In the case of internal meta data this
1186               may involve shrinking the user-visible size at the same
1187               time (using the --size option) or increasing the available
1188               space on the backing devices.
1189
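           For example, after the backing devices on all nodes have been
           grown, the following reexamines the sizes and grows the
           replicated device, while an explicit size shrinks it (minor
           number 0 and the 40G value are illustrative):

               # drbdsetup resize 0
               # drbdsetup resize 0 --size=40G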
1190
1191       drbdsetup resume-io minor
1192           Resume I/O on a replicated device. See the --fencing net option.
1193
1194       drbdsetup resume-sync resource peer_node_id volume
1195           Allow resynchronization to resume by clearing the local sync pause
1196           flag.
1197
1198       drbdsetup role resource
1199           Show the current role of a resource.
1200
1201       drbdsetup secondary resource
1202           Change the role of a node in a resource to secondary. This command
1203           fails if the replicated device is in use.
1204
1205       drbdsetup show {resource | all}
1206           Show the current configuration of a resource, or of all resources.
1207           Available options:
1208
1209           --show-defaults
1210               Show all configuration parameters, even the ones with default
1211               values. Normally, parameters with default values are not shown.
1212
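           For example, to dump the complete effective configuration of a
           resource, including parameters that are still at their defaults
           (the resource name r0 is hypothetical):

               # drbdsetup show r0 --show-defaults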
1213
1214       drbdsetup show-gi resource peer_node_id volume
1215           Show the data generation identifiers for a device on a particular
1216           connection. In addition, explain the output. The output otherwise
1217           is the same as in the drbdsetup get-gi command.
1218
1219       drbdsetup state
1220           This is an alias for drbdsetup role. Deprecated.
1221
1222       drbdsetup status {resource | all}
1223           Show the status of a resource, or of all resources. The output
1224           consists of one paragraph for each configured resource. Each
1225           paragraph contains one line for the resource itself, followed
1226           by one line for each device and one line for each connection.
1227           The device and connection lines are indented. The connection
1228           lines are followed by one line for each peer device; these
1229           lines are indented against the connection line.
1230
1231           Long lines are wrapped around at terminal width, and indented to
1232           indicate how the lines belong together. Available options:
1233
1234           --verbose
1235               Include more information in the output even when it is likely
1236               redundant or irrelevant.
1237
1238           --statistics
1239               Include data transfer statistics in the output.
1240
1241           --color={always | auto | never}
1242               Colorize the output. With --color=auto, drbdsetup emits color
1243               codes only when standard output is connected to a terminal.
1244
1245           For example, the non-verbose output for a resource with only one
1246           connection and only one volume could look like this:
1247
1248               drbd0 role:Primary
1249                 disk:UpToDate
1250                 host2.example.com role:Secondary
1251                   disk:UpToDate
1252
1253
1254           With the --verbose option, the same resource could be reported as:
1255
1256               drbd0 node-id:1 role:Primary suspended:no
1257                 volume:0 minor:1 disk:UpToDate blocked:no
1258                 host2.example.com local:ipv4:192.168.123.4:7788
1259                     peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1260                     role:Secondary congested:no
1261                   volume:0 replication:Connected disk:UpToDate resync-suspended:no
1262
1263
1264
1265       drbdsetup suspend-io minor
1266           Suspend I/O on a replicated device. It is not usually necessary to
1267           use this command.
1268
1269       drbdsetup verify resource peer_node_id volume
1270           Start online verification, change which part of the device will be
1271           verified, or stop online verification. The command requires the
1272           specified peer to be connected.
1273
1274           Online verification compares each disk block on the local and peer
1275           node. Blocks which differ between the nodes are marked as
1276           out-of-sync, but they are not automatically brought back into sync.
1277           To bring them into sync, the resource must be disconnected and
1278           reconnected. Progress can be monitored in the output of drbdsetup
1279           status --statistics. Available options:
1280
1281           --start position
1282               Define where online verification should start. This parameter
1283               is ignored if online verification is already in progress. If
1284               the start parameter is not specified, online verification will
1285               continue where it was interrupted (if the connection to the
1286               peer was lost while verifying), after the previous stop sector
1287               (if the previous online verification has finished), or at the
1288               beginning of the device (if the end of the device was reached,
1289               or online verify has not run before).
1290
1291               The position on disk is specified in disk sectors (512 bytes)
1292               by default.
1293
1294           --stop position
1295               Define where online verification should stop. If online
1296               verification is already in progress, the stop position of the
1297               active online verification process is changed. Use this to stop
1298               online verification.
1299
1300               The position on disk is specified in disk sectors (512 bytes)
1301               by default.
1302
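           For example, to verify only the first 10 GiB of volume 0 towards
           the peer with node id 1, both positions can be given in 512-byte
           sectors; 20971520 sectors correspond to 10 GiB (the resource name
           r0 is hypothetical):

               # drbdsetup verify r0 1 0 --start=0 --stop=20971520
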
1303           Also see the notes on data integrity in the drbd.conf(5) manual
1304           page.
1305
1306       drbdsetup wait-connect-volume resource peer_node_id volume,
1307       drbdsetup wait-connect-connection resource peer_node_id,
1308       drbdsetup wait-connect-resource resource,
1309       drbdsetup wait-sync-volume resource peer_node_id volume,
1310       drbdsetup wait-sync-connection resource peer_node_id,
1311       drbdsetup wait-sync-resource resource
1312           The wait-connect-* commands wait until a device on a peer is
1313           visible. The wait-sync-* commands wait until a device on a
1314           peer is up to date. Available options for both commands:
1315
1316           --degr-wfc-timeout timeout
1317               Define how long to wait until all peers are connected in case
1318               the cluster consisted of a single node only when the system
1319               went down. This parameter is usually set to a value smaller
1320               than wfc-timeout. The assumption here is that peers which were
1321               unreachable before a reboot are less likely to be reachable
1322               after the reboot, so waiting is less likely to help.
1323
1324               The timeout is specified in seconds. The default value is 0,
1325               which stands for an infinite timeout. Also see the wfc-timeout
1326               parameter.
1327
1328           --outdated-wfc-timeout timeout
1329               Define how long to wait until all peers are connected if all
1330               peers were outdated when the system went down. This parameter
1331               is usually set to a value smaller than wfc-timeout. The
1332               assumption here is that an outdated peer cannot have become
1333               primary in the meantime, so we don't need to wait for it as
1334               long as for a node which was alive before.
1335
1336               The timeout is specified in seconds. The default value is 0,
1337               which stands for an infinite timeout. Also see the wfc-timeout
1338               parameter.
1339
1340           --wait-after-sb
1341               This parameter causes DRBD to continue waiting in the init
1342               script even when a split-brain situation has been detected, and
1343               the nodes therefore refuse to connect to each other.
1344
1345           --wfc-timeout timeout
1346               Define how long the init script waits until all peers are
1347               connected. This can be useful in combination with a cluster
1348               manager which cannot manage DRBD resources: when the cluster
1349               manager starts, the DRBD resources will already be up and
1350               running. With a more capable cluster manager such as Pacemaker,
1351               it makes more sense to let the cluster manager control DRBD
1352               resources. The timeout is specified in seconds. The default
1353               value is 0, which stands for an infinite timeout. Also see the
1354               degr-wfc-timeout parameter.
1355
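           For example, an init or startup script might wait at most 60
           seconds for all peers of a resource to connect before it
           continues (the resource name r0 is hypothetical):

               # drbdsetup wait-connect-resource r0 --wfc-timeout=60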
1356
1357       drbdsetup forget-peer resource peer_node_id
1358           The forget-peer command removes all traces of a peer node
1359           from the meta-data. It frees a bitmap slot in the meta-data
1360           and makes it available for further bitmap slot allocation in
1361           case a previously unknown node connects.
1362
1363           The connection must be taken down before this command may be
1364           used. If the peer reconnects at a later point, a bitmap-based
1365           resync will be turned into a full sync.
1366
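           For example, to permanently remove the peer with node id 1 from
           the meta-data of a resource, take the connection down first and
           then forget the peer (the resource name r0 is hypothetical; the
           disconnect command is described earlier in this page):

               # drbdsetup disconnect r0 1
               # drbdsetup forget-peer r0 1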

EXAMPLES

1368       Please see the DRBD User's Guide[1] for examples.
1369

VERSION

1371       This document was revised for version 9.0.0 of the DRBD distribution.
1372

AUTHOR

1374       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1375       Ellenberg <lars.ellenberg@linbit.com>.
1376

REPORTING BUGS

1378       Report bugs to <drbd-user@lists.linbit.com>.
1379

COPYRIGHT

1381       Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1382       Lars Ellenberg. This is free software; see the source for copying
1383       conditions. There is NO warranty; not even for MERCHANTABILITY or
1384       FITNESS FOR A PARTICULAR PURPOSE.
1385

SEE ALSO

1387       drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1388       Site[2]
1389

NOTES

1391        1. DRBD User's Guide
1392           http://www.drbd.org/users-guide/
1393
1394        2. DRBD Web Site
1395           http://www.drbd.org/
1396
1397
1398
1399DRBD 9.0.x                      17 January 2018                   DRBDSETUP(8)