DRBDSETUP(8)                 System Administration                DRBDSETUP(8)

NAME

       drbdsetup - Configure the DRBD kernel module

SYNOPSIS

       drbdsetup command {argument...} [option...]

DESCRIPTION

       The drbdsetup utility serves to configure the DRBD kernel module and to
       show its current configuration. Users usually interact with the drbdadm
       utility, which provides a higher-level interface to DRBD than
       drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
       drbdsetup.)

       Some option arguments have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.


COMMANDS

       drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
       drbdsetup disk-options minor
           The attach command attaches a lower-level device to an existing
           replicated device. The disk-options command changes the disk
           options of an attached lower-level device. In either case, the
           replicated device must have been created with drbdsetup new-minor.

           Both commands refer to the replicated device by its minor number.
           lower_dev is the name of the lower-level device.  meta_data_dev is
           the name of the device containing the metadata, and may be the same
           as lower_dev.  meta_data_index is either a numeric metadata index,
           or the keyword internal for internal metadata, or the keyword
           flexible for variable-size external metadata. Available options:

           --al-extents extents
               DRBD automatically maintains a "hot" or "active" disk area
               likely to be written to again soon based on the recent write
               activity. The "active" disk area can be written to immediately,
               while "inactive" disk areas must be "activated" first, which
               requires a meta-data write. We also refer to this active disk
               area as the "activity log".

               The activity log saves meta-data writes, but the whole log must
               be resynced upon recovery of a failed node. The size of the
               activity log is a major factor in how long a resync will take
               and how fast a replicated disk will become consistent after a
               crash.

               The activity log consists of a number of 4-Megabyte segments;
               the al-extents parameter determines how many of those segments
               can be active at the same time. The default value for
               al-extents is 1237, with a minimum of 7 and a maximum of 65536.

               Note that the effective maximum may be smaller, depending on
               how you created the device meta data; see also drbdmeta(8). The
               effective maximum is 919 * (available on-disk activity-log
               ring-buffer area/4kB - 1); the default 32kB ring-buffer yields
               an effective maximum of 6433 (covering more than 25 GiB of
               data). We recommend keeping this well within the amount your
               backend storage and replication link are able to resync within
               about 5 minutes.

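               For example, to enlarge the activity log of an attached device
               (a sketch; the minor number 0 is an assumption):

                   drbdsetup disk-options --al-extents 3389 0
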
           --al-updates {yes | no}
               With this parameter, the activity log can be turned off
               entirely (see the al-extents parameter). This will speed up
               writes because fewer meta-data writes will be necessary, but
               the entire device needs to be resynchronized upon recovery of a
               failed primary node. The default value for al-updates is yes.

           --disk-barrier,
           --disk-flushes,
           --disk-drain
               DRBD has three methods of handling the ordering of dependent
               write requests:

               disk-barrier
                   Use disk barriers to make sure that requests are written to
                   disk in the right order. Barriers ensure that all requests
                   submitted before a barrier make it to the disk before any
                   requests submitted after the barrier. This is implemented
                   using 'tagged command queuing' on SCSI devices and 'native
                   command queuing' on SATA devices. Only some devices and
                   device stacks support this method. The device mapper (LVM)
                   only supports barriers in some configurations.

                   Note that on systems which do not support disk barriers,
                   enabling this option can lead to data loss or corruption.
                   Until DRBD 8.4.1, disk-barrier was turned on if the I/O
                   stack below DRBD did support barriers. Kernels since
                   linux-2.6.36 (or 2.6.32 RHEL6) no longer make it possible
                   to detect whether barriers are supported. Since drbd-8.4.2,
                   this option is off by default and needs to be enabled
                   explicitly.

               disk-flushes
                   Use disk flushes between dependent write requests, also
                   referred to as 'force unit access' by drive vendors. This
                   forces all data to disk. This option is enabled by default.

               disk-drain
                   Wait for the request queue to "drain" (that is, wait for
                   the requests to finish) before submitting a dependent write
                   request. This method requires that requests are stable on
                   disk when they finish. Before DRBD 8.0.9, this was the only
                   method implemented. This option is enabled by default. Do
                   not disable in production environments.

               From these three methods, drbd will use the first that is
               enabled and supported by the backing storage device. If all
               three of these options are turned off, DRBD will submit write
               requests without bothering about dependencies. Depending on the
               I/O stack, write requests can be reordered, and they can be
               submitted in a different order on different cluster nodes. This
               can result in data loss or corruption. Therefore, turning off
               all three methods of controlling write ordering is strongly
               discouraged.

               A general guideline for configuring write ordering is to use
               disk barriers or disk flushes when using ordinary disks (or an
               ordinary disk array) with a volatile write cache. On storage
               without cache or with a battery backed write cache, disk
               draining can be a reasonable choice.

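               For example, on storage with a battery-backed write cache, one
               might rely on disk draining alone (a sketch; the minor number 0
               is an assumption):

                   drbdsetup disk-options --disk-barrier=no --disk-flushes=no 0
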
           --disk-timeout
               If the lower-level device on which a DRBD device stores its
               data does not finish an I/O request within the defined
               disk-timeout, DRBD treats this as a failure. The lower-level
               device is detached, and the device's disk state advances to
               Diskless. If DRBD is connected to one or more peers, the failed
               request is passed on to one of them.

               This option is dangerous and may lead to kernel panic!

               "Aborting" requests, or force-detaching the disk, is intended
               for completely blocked/hung local backing devices which no
               longer complete requests at all, not even with error
               completions. In this situation, usually a hard-reset and
               failover is the only way out.

               By "aborting", basically faking a local error-completion, we
               allow for a more graceful switchover by cleanly migrating
               services. Still the affected node has to be rebooted "soon".

               By completing these requests, we allow the upper layers to
               re-use the associated data pages.

               If later the local backing device "recovers", and now DMAs some
               data from disk into the original request pages, in the best
               case it will just put random data into unused pages; but
               typically it will corrupt meanwhile completely unrelated data,
               causing all sorts of damage.

               This means that delayed successful completion, especially for
               READ requests, is a reason to panic(). We assume that a delayed
               *error* completion is OK, though we still will complain noisily
               about it.

               The default value of disk-timeout is 0, which stands for an
               infinite timeout. Timeouts are specified in units of 0.1
               seconds. This option is available since DRBD 8.3.12.

           --md-flushes
               Enable disk flushes and disk barriers on the meta-data device.
               This option is enabled by default. See the disk-flushes
               parameter.

           --on-io-error handler
               Configure how DRBD reacts to I/O errors on a lower-level
               device. The following policies are defined:

               pass_on
                   Change the disk status to Inconsistent, mark the failed
                   block as inconsistent in the bitmap, and retry the I/O
                   operation on a remote cluster node.

               call-local-io-error
                   Call the local-io-error handler (see the handlers section).

               detach
                   Detach the lower-level device and continue in diskless
                   mode.

           --read-balancing policy
               Distribute read requests among cluster nodes as defined by
               policy. The supported policies are prefer-local (the default),
               prefer-remote, round-robin, least-pending,
               when-congested-remote, 32K-striping, 64K-striping,
               128K-striping, 256K-striping, 512K-striping and 1M-striping.

               This option is available since DRBD 8.4.1.

           --resync-after minor
               Define that a device should only resynchronize after the
               specified other device. By default, no order between devices is
               defined, and all devices will resynchronize in parallel.
               Depending on the configuration of the lower-level devices, and
               the available network and disk bandwidth, this can slow down
               the overall resync process. This option can be used to form a
               chain or tree of dependencies among devices.

           --size size
               Specify the size of the lower-level device explicitly instead
               of determining it automatically. The device size must be
               determined once and is remembered for the lifetime of the
               device. In order to determine it automatically, all the
               lower-level devices on all nodes must be attached, and all
               nodes must be connected. If the size is specified explicitly,
               this is not necessary. The size value is assumed to be in units
               of sectors (512 bytes) by default.

           --discard-zeroes-if-aligned {yes | no}
               There are several aspects to discard/trim/unmap support on
               linux block devices. Even if discard is supported in general,
               it may fail silently, or may partially ignore discard requests.
               Devices also announce whether reading from unmapped blocks
               returns defined data (usually zeroes), or undefined data
               (possibly old data, possibly garbage).

               If DRBD is backed by devices with differing discard
               characteristics on different nodes, discards may lead to data
               divergence (old data or garbage left over on one backend,
               zeroes due to unmapped areas on the other backend). Online
               verify would then potentially report tons of spurious
               differences. While probably harmless for most use cases (fstrim
               on a file system), DRBD cannot have that.

               To play it safe, we have to disable discard support if our
               local backend (on a Primary) does not support
               "discard_zeroes_data=true". We also have to translate discards
               to explicit zero-out on the receiving side, unless the
               receiving side (Secondary) supports "discard_zeroes_data=true",
               thereby allocating areas that were supposed to be unmapped.

               There are some devices (notably the LVM/DM thin provisioning)
               that are capable of discard, but announce
               discard_zeroes_data=false. In the case of DM-thin, discards
               aligned to the chunk size will be unmapped, and reading from
               unmapped sectors will return zeroes. However, unaligned partial
               head or tail areas of discard requests will be silently
               ignored.

               If we now add a helper to explicitly zero-out these unaligned
               partial areas, while passing on the discard of the aligned full
               chunks, we effectively achieve discard_zeroes_data=true on such
               devices.

               Setting discard-zeroes-if-aligned to yes will allow DRBD to use
               discards, and to announce discard_zeroes_data=true, even on
               backends that announce discard_zeroes_data=false.

               Setting discard-zeroes-if-aligned to no will cause DRBD to
               always fall back to zero-out on the receiving side, and to not
               even announce discard capabilities on the Primary, if the
               respective backend announces discard_zeroes_data=false.

               We used to ignore the discard_zeroes_data setting completely.
               To not break established and expected behaviour, and suddenly
               cause fstrim on thin-provisioned LVs to run out-of-space
               instead of freeing up space, the default value is yes.

               This option is available since 8.4.7.

           --disable-write-same {yes | no}
               Some disks announce WRITE_SAME support to the kernel but fail
               with an I/O error upon actually receiving such a request. This
               mostly happens when using virtualized disks -- notably, this
               behavior has been observed with VMware's virtual disks.

               When disable-write-same is set to yes, WRITE_SAME detection is
               manually overridden and support is disabled.

               The default value of disable-write-same is no. This option is
               available since 8.4.7.

           --rs-discard-granularity byte
               When rs-discard-granularity is set to a non-zero, positive
               value, DRBD tries to perform resync operations in requests of
               this size. If such a block contains only zero bytes on the sync
               source node, the sync target node will issue a
               discard/trim/unmap command for the area.

               The value is constrained by the discard granularity of the
               backing block device. If rs-discard-granularity is not a
               multiple of the discard granularity of the backing block
               device, DRBD rounds it up. The feature only becomes active if
               the backing block device reads back zeroes after a discard
               command.

               The default value of rs-discard-granularity is 0. This option
               is available since 8.4.7.

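           For example, a minimal attach invocation using internal metadata (a
           sketch; the minor number 0 and the backing device /dev/sda7 are
           assumptions):

               drbdsetup attach 0 /dev/sda7 /dev/sda7 internal
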
       drbdsetup peer-device-options resource peer_node_id volume
           These are options that affect the peer's device.

           --c-delay-target delay_target,
           --c-fill-target fill_target,
           --c-max-rate max_rate,
           --c-plan-ahead plan_time
               Dynamically control the resync speed. The following modes are
               available:

               •   Dynamic control with fill target (default). Enabled when
                   c-plan-ahead is non-zero and c-fill-target is non-zero. The
                   goal is to fill the buffers along the data path with a
                   defined amount of data. This mode is recommended when
                   DRBD-proxy is used. Configured with c-plan-ahead,
                   c-fill-target and c-max-rate.

               •   Dynamic control with delay target. Enabled when
                   c-plan-ahead is non-zero (default) and c-fill-target is
                   zero. The goal is to have a defined delay along the path.
                   Configured with c-plan-ahead, c-delay-target and
                   c-max-rate.

               •   Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD
                   will try to perform resync I/O at a fixed rate. Configured
                   with resync-rate.

               The c-plan-ahead parameter defines how fast DRBD adapts to
               changes in the resync speed. It should be set to five times the
               network round-trip time or more. The default value of
               c-plan-ahead is 20, in units of 0.1 seconds.

               The c-fill-target parameter defines how much resync data DRBD
               should aim to have in-flight at all times. Common values for
               "normal" data paths range from 4K to 100K. The default value of
               c-fill-target is 100, in units of sectors.

               The c-delay-target parameter defines the delay in the resync
               path that DRBD should aim for. This should be set to five times
               the network round-trip time or more. The default value of
               c-delay-target is 10, in units of 0.1 seconds.

               The c-max-rate parameter limits the maximum bandwidth used by
               dynamically controlled resyncs. Setting this to zero removes
               the limitation (since DRBD 9.0.28). It should be set to either
               the bandwidth available between the DRBD hosts and the machines
               hosting DRBD-proxy, or to the available disk bandwidth. The
               default value of c-max-rate is 102400, in units of KiB/s.

               Dynamic resync speed control is available since DRBD 8.3.9.

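               For example, to cap a dynamically controlled resync at roughly
               100 MiB/s (a sketch; the resource name r0, peer node-id 1, and
               volume 0 are assumptions):

                   drbdsetup peer-device-options --c-plan-ahead 20 --c-max-rate 102400 r0 1 0
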
           --c-min-rate min_rate
               A node which is primary and sync-source has to schedule
               application I/O requests and resync I/O requests. The
               c-min-rate parameter limits how much bandwidth is available for
               resync I/O; the remaining bandwidth is used for application
               I/O.

               A c-min-rate value of 0 means that there is no limit on the
               resync I/O bandwidth. This can slow down application I/O
               significantly. Use a value of 1 (1 KiB/s) for the lowest
               possible resync rate.

               The default value of c-min-rate is 250, in units of KiB/s.

           --resync-rate rate
               Define how much bandwidth DRBD may use for resynchronizing.
               DRBD allows "normal" application I/O even during a resync. If
               the resync takes up too much bandwidth, application I/O can
               become very slow. This parameter allows that to be avoided.
               Note that this option only works when the dynamic resync
               controller is disabled.

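               For example, to disable the dynamic controller and resync at a
               fixed 30 MiB/s (a sketch; the resource name r0, peer node-id 1,
               and volume 0 are assumptions):

                   drbdsetup peer-device-options --c-plan-ahead 0 --resync-rate 30M r0 1 0
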
       drbdsetup check-resize minor
           Remember the current size of the lower-level device of the
           specified replicated device. Used by drbdadm. The size information
           is stored in file /var/lib/drbd/drbd-minor-minor.lkbd.

       drbdsetup new-peer resource peer_node_id,
       drbdsetup net-options resource peer_node_id
           The new-peer command creates a connection within a resource. The
           resource must have been created with drbdsetup new-resource. The
           net-options command changes the network options of an existing
           connection. Before a connection can be activated with the connect
           command, at least one path needs to be added with the new-path
           command. Available options:

           --after-sb-0pri policy
               Define how to react if a split-brain scenario is detected and
               neither of the two nodes is in primary role. (We detect
               split-brain scenarios when two nodes connect; split-brain
               decisions are always between two nodes.) The defined policies
               are:

               disconnect
                   No automatic resynchronization; simply disconnect.

               discard-younger-primary,
               discard-older-primary
                   Resynchronize from the node which became primary first
                   (discard-younger-primary) or last (discard-older-primary).
                   If both nodes became primary independently, the
                   discard-least-changes policy is used.

               discard-zero-changes
                   If only one of the nodes wrote data since the split brain
                   situation was detected, resynchronize from this node to the
                   other. If both nodes wrote data, disconnect.

               discard-least-changes
                   Resynchronize from the node with more modified blocks.

               discard-node-nodename
                   Always resynchronize to the named node.

           --after-sb-1pri policy
               Define how to react if a split-brain scenario is detected, with
               one node in primary role and one node in secondary role. (We
               detect split-brain scenarios when two nodes connect, so
               split-brain decisions are always between two nodes.) The
               defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               consensus
                   Discard the data on the secondary node if the after-sb-0pri
                   algorithm would also discard the data on the secondary
                   node. Otherwise, disconnect.

               violently-as0p
                   Always take the decision of the after-sb-0pri algorithm,
                   even if it causes an erratic change of the primary's view
                   of the data. This is only useful if a single-node file
                   system (i.e., not OCFS2 or GFS) with the
                   allow-two-primaries flag is used. This option can cause the
                   primary node to crash, and should not be used.

               discard-secondary
                   Discard the data on the secondary node.

               call-pri-lost-after-sb
                   Always take the decision of the after-sb-0pri algorithm. If
                   the decision is to discard the data on the primary node,
                   call the pri-lost-after-sb handler on the primary node.

           --after-sb-2pri policy
               Define how to react if a split-brain scenario is detected and
               both nodes are in primary role. (We detect split-brain
               scenarios when two nodes connect, so split-brain decisions are
               always between two nodes.) The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               violently-as0p
                   See the violently-as0p policy for after-sb-1pri.

               call-pri-lost-after-sb
                   Call the pri-lost-after-sb helper program on one of the
                   machines unless that machine can demote to secondary. The
                   helper program is expected to reboot the machine, which
                   brings the node into a secondary role. Which machine runs
                   the helper program is determined by the after-sb-0pri
                   strategy.

           --allow-two-primaries
               The most common way to configure DRBD devices is to allow only
               one node to be primary (and thus writable) at a time.

               In some scenarios it is preferable to allow two nodes to be
               primary at once; a mechanism outside of DRBD then must make
               sure that writes to the shared, replicated device happen in a
               coordinated way. This can be done with a shared-storage cluster
               file system like OCFS2 and GFS, or with virtual machine images
               and a virtual machine manager that can migrate virtual machines
               between physical machines.

               The allow-two-primaries parameter tells DRBD to allow two nodes
               to be primary at the same time. Never enable this option when
               using a non-distributed file system; otherwise, data corruption
               and node crashes will result!

           --always-asbp
               Normally the automatic after-split-brain policies are only used
               if the current states of the UUIDs do not indicate the presence
               of a third node.

               With this option you request that the automatic
               after-split-brain policies are used as long as the data sets of
               the nodes are somehow related. This might cause a full sync if
               the UUIDs indicate the presence of a third node (or if double
               faults have led to strange UUID sets).

           --connect-int time
               As soon as a connection between two nodes is configured with
               drbdsetup connect, DRBD immediately tries to establish the
               connection. If this fails, DRBD waits for connect-int seconds
               and then repeats. The default value of connect-int is 10
               seconds.

           --cram-hmac-alg hash-algorithm
               Configure the hash-based message authentication code (HMAC) or
               secure hash algorithm to use for peer authentication. The
               kernel supports a number of different algorithms, some of which
               may be loadable as kernel modules. See the shash algorithms
               listed in /proc/crypto. By default, cram-hmac-alg is unset.
               Peer authentication also requires a shared-secret to be
               configured.

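               For example, to enable peer authentication (a sketch; the
               resource name r0, peer node-id 1, and the secret are
               assumptions):

                   drbdsetup net-options --cram-hmac-alg sha1 --shared-secret "FooFunFactory" r0 1
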
           --csums-alg hash-algorithm
               Normally, when two nodes resynchronize, the sync target
               requests a piece of out-of-sync data from the sync source, and
               the sync source sends the data. With many usage patterns, a
               significant number of those blocks will actually be identical.

               When a csums-alg algorithm is specified, when requesting a
               piece of out-of-sync data, the sync target also sends along a
               hash of the data it currently has. The sync source compares
               this hash with its own version of the data. It sends the sync
               target the new data if the hashes differ, and tells it that the
               data are the same otherwise. This reduces the network bandwidth
               required, at the cost of higher CPU utilization and possibly
               increased I/O on the sync target.

               The csums-alg can be set to one of the secure hash algorithms
               supported by the kernel; see the shash algorithms listed in
               /proc/crypto. By default, csums-alg is unset.

           --csums-after-crash-only
               Enabling this option (and csums-alg, above) makes it possible
               to use the checksum-based resync only for the first resync
               after a primary crash, but not for later "network hiccups".

               In most cases, blocks that are marked as need-to-be-resynced
               are in fact changed, so calculating checksums, and both reading
               and writing the blocks on the resync target, is all effectively
               overhead.

               The advantage of checksum-based resync is mostly after primary
               crash recovery, where the recovery marked larger areas (those
               covered by the activity log) as need-to-be-resynced, just in
               case. Introduced in 8.4.5.

           --data-integrity-alg alg
               DRBD normally relies on the data integrity checks built into
               the TCP/IP protocol, but if a data integrity algorithm is
               configured, it will additionally use this algorithm to make
               sure that the data received over the network match what the
               sender has sent. If a data integrity error is detected, DRBD
               will close the network connection and reconnect, which will
               trigger a resync.

               The data-integrity-alg can be set to one of the secure hash
               algorithms supported by the kernel; see the shash algorithms
               listed in /proc/crypto. By default, this mechanism is turned
               off.

               Because of the CPU overhead involved, we recommend not using
               this option in production environments. Also see the notes on
               data integrity below.

           --fencing fencing_policy
               Fencing is a preventive measure to avoid situations where both
               nodes are primary and disconnected. This is also known as a
               split-brain situation. DRBD supports the following fencing
               policies:

               dont-care
                   No fencing actions are taken. This is the default policy.

               resource-only
                   If a node becomes a disconnected primary, it tries to fence
                   the peer. This is done by calling the fence-peer handler.
                   The handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there.

               resource-and-stonith
                   If a node becomes a disconnected primary, it freezes all
                   its I/O operations and calls its fence-peer handler. The
                   fence-peer handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there. In case it cannot do that, it should stonith
                   the peer. I/O is resumed as soon as the situation is
                   resolved. In case the fence-peer handler fails, I/O can be
                   resumed manually with 'drbdadm resume-io'.

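               For example, to enable fencing on an existing connection (a
               sketch; the resource name r0 and peer node-id 1 are
               assumptions):

                   drbdsetup net-options --fencing resource-and-stonith r0 1
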
           --ko-count number
               If a secondary node fails to complete a write request in
               ko-count times the timeout parameter, it is excluded from the
               cluster. The primary node then sets the connection to this
               secondary node to Standalone. To disable this feature, you
               should explicitly set it to 0; defaults may change between
               versions.

           --max-buffers number
               Limits the memory usage per DRBD minor device on the receiving
               side, or for internal buffers during resync or online-verify.
               Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum
               possible setting is hard coded to 32 (=128 KiB). These buffers
               are used to hold data blocks while they are written to/read
               from disk. To avoid possible distributed deadlocks on
               congestion, this setting is used as a throttle threshold rather
               than a hard limit. Once more than max-buffers pages are in use,
               further allocation from this pool is throttled. You want to
               increase max-buffers if you cannot saturate the I/O backend on
               the receiving side.

           --max-epoch-size number
               Define the maximum number of write requests DRBD may issue
               before issuing a write barrier. The default value is 2048, with
               a minimum of 1 and a maximum of 20000. Setting this parameter
               to a value below 10 is likely to decrease performance.

           --on-congestion policy,
           --congestion-fill threshold,
           --congestion-extents threshold
               By default, DRBD blocks when the TCP send queue is full. This
               prevents applications from generating further write requests
               until more buffer space becomes available again.

               When DRBD is used together with DRBD-proxy, it can be better to
               use the pull-ahead on-congestion policy, which can switch DRBD
               into ahead/behind mode before the send queue is full. DRBD then
               records the differences between itself and the peer in its
               bitmap, but it no longer replicates them to the peer. When
               enough buffer space becomes available again, the node
               resynchronizes with the peer and switches back to normal
               replication.

               This has the advantage of not blocking application I/O even
               when the queues fill up, and the disadvantage that peer nodes
               can fall behind much further. Also, while resynchronizing, peer
               nodes will become inconsistent.

               The available congestion policies are block (the default) and
               pull-ahead. The congestion-fill parameter defines how much data
               is allowed to be "in flight" in this connection. The default
               value is 0, which disables this mechanism of congestion
               control, with a maximum of 10 GiBytes. The congestion-extents
               parameter defines how many bitmap extents may be active before
               switching into ahead/behind mode, with the same default and
               limits as the al-extents parameter. The congestion-extents
               parameter is effective only when set to a value smaller than
               al-extents.

               Ahead/behind mode is available since DRBD 8.3.10.

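               For example, to switch to ahead/behind mode once roughly 1 GiB
               of data is in flight (a sketch; the resource name r0 and peer
               node-id 1 are assumptions):

                   drbdsetup net-options --on-congestion pull-ahead --congestion-fill 1G r0 1
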
           --ping-int interval
               When the TCP/IP connection to a peer is idle for more than
               ping-int seconds, DRBD will send a keep-alive packet to make
               sure that a failed peer or network connection is detected
               reasonably soon. The default value is 10 seconds, with a
               minimum of 1 and a maximum of 120 seconds. The unit is seconds.

           --ping-timeout timeout
               Define the timeout for replies to keep-alive packets. If the
               peer does not reply within ping-timeout, DRBD will close and
               try to reestablish the connection. The default value is 0.5
               seconds, with a minimum of 0.1 seconds and a maximum of 3
               seconds. The unit is tenths of a second.

           --socket-check-timeout timeout
               In setups involving a DRBD-proxy and connections that
               experience a lot of buffer-bloat, it might be necessary to set
               ping-timeout to an unusually high value. By default, DRBD uses
               the same value to wait if a newly established TCP connection is
               stable. Since the DRBD-proxy is usually located in the same
               data center, such a long wait time may hinder DRBD's connect
               process.

               In such setups, socket-check-timeout should be set to at least
               the round-trip time between DRBD and DRBD-proxy; in most cases,
               that means 1.

               The default unit is tenths of a second; the default value is 0
               (which causes DRBD to use the value of ping-timeout instead).
               Introduced in 8.4.5.

           --protocol name
               Use the specified protocol on this connection. The supported
               protocols are:

               A
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk and the TCP/IP send buffer.

               B
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk, and all peers have acknowledged the
                   receipt of the write requests.

               C
                   Writes to the DRBD device complete as soon as they have
                   reached the local and all remote disks.

           --rcvbuf-size size
               Configure the size of the TCP/IP receive buffer. A value of 0
               (the default) causes the buffer size to adjust dynamically.
               This parameter usually does not need to be set, but it can be
               set to a value up to 10 MiB. The default unit is bytes.

           --rr-conflict policy
               This option helps resolve cases in which the outcome of the
               resync decision is incompatible with the current role
               assignment in the cluster. The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               retry-connect
                   Disconnect now, and retry connecting immediately
                   afterwards.

               violently
                   Resync to the primary node is allowed, violating the
                   assumption that data on a block device are stable for one
                   of the nodes.  Do not use this option, it is dangerous.

               call-pri-lost
                   Call the pri-lost handler on one of the machines. The
                   handler is expected to reboot the machine, which puts it
                   into secondary role.

           --shared-secret secret
               Configure the shared secret used for peer authentication. The
               secret is a string of up to 64 characters. Peer authentication
               also requires the cram-hmac-alg parameter to be set.

           --sndbuf-size size
               Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
               / 8.2.7, a value of 0 (the default) causes the buffer size to
               adjust dynamically. Values below 32 KiB are harmful to the
               throughput on this connection. Large buffer sizes can be useful
               especially when protocol A is used over high-latency networks;
               the maximum value supported is 10 MiB.

           --tcp-cork
               By default, DRBD uses the TCP_CORK socket option to prevent the
               kernel from sending partial messages; this results in fewer and
               bigger packets on the network. Some network stacks can perform
               worse with this optimization. On these, the tcp-cork parameter
               can be used to turn this optimization off.

           --timeout time
               Define the timeout for replies over the network: if a peer node
               does not send an expected reply within the specified timeout,
               it is considered dead and the TCP/IP connection is closed. The
               timeout value must be lower than connect-int and lower than
               ping-int. The default is 6 seconds; the value is specified in
               tenths of a second.

           --use-rle
               Each replicated device on a cluster node has a separate bitmap
               for each of its peer devices. The bitmaps are used for tracking
               the differences between the local and peer device: depending on
               the cluster state, a disk range can be marked as different from
               the peer in the device's bitmap, in the peer device's bitmap,
               or in both bitmaps. When two cluster nodes connect, they
               exchange each other's bitmaps, and they each compute the union
               of the local and peer bitmap to determine the overall
               differences.

               Bitmaps of very large devices are also relatively large, but
               they usually compress very well using run-length encoding. This
               can save time and bandwidth for the bitmap transfers.

               The use-rle parameter determines if run-length encoding should
               be used. It is on by default since DRBD 8.4.0.

           --verify-alg hash-algorithm
               Online verification (drbdadm verify) computes and compares
               checksums of disk blocks (i.e., hash values) in order to detect
               if they differ. The verify-alg parameter determines which
               algorithm to use for these checksums. It must be set to one of
               the secure hash algorithms supported by the kernel before
               online verify can be used; see the shash algorithms listed in
               /proc/crypto.

               We recommend scheduling online verifications regularly during
               low-load periods, for example once a month. Also see the notes
               on data integrity below.

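               For example, to set a verification algorithm so that drbdadm
               verify can be used (a sketch; the resource name r0 and peer
               node-id 1 are assumptions):

                   drbdsetup net-options --verify-alg sha256 r0 1
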
       drbdsetup new-path resource peer_node_id local-addr remote-addr
           The new-path command creates a path within a connection. The
           connection must have been created with drbdsetup new-peer.
           local-addr and remote-addr refer to the local and remote protocol,
           network address, and port in the format
           [address-family:]address[:port]. The address families ipv4, ipv6,
           ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
           (Infiniband Sockets Direct Protocol), and sci are supported (sci is
           an alias for ssocks). If no address family is specified, ipv4 is
           assumed. For all address families except ipv6, the address uses
           IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
           is enclosed in brackets and uses IPv6 address notation (for
           example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.

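           For example, to add an IPv4 path (a sketch; the resource name r0,
           peer node-id 1, and both addresses are assumptions):

               drbdsetup new-path r0 1 ipv4:10.0.0.1:7788 ipv4:10.0.0.2:7788
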
       drbdsetup connect resource peer_node_id
           The connect command activates a connection. That means that the
           DRBD driver will bind and listen on all local addresses of the
           connection's paths. It will begin to try to establish one or more
           paths of the connection. Available options:

           --tentative
               Only determine if a connection to the peer can be established
               and if a resync is necessary (and in which direction) without
               actually establishing the connection or starting the resync.
               Check the system log to see what DRBD would do without the
               --tentative option.

           --discard-my-data
               Discard the local data and resynchronize with the peer that has
               the most up-to-date data. Use this option to manually recover
               from a split-brain situation.

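           For example, to resolve a split-brain situation by discarding the
           local data (a sketch; the resource name r0 and peer node-id 1 are
           assumptions):

               drbdsetup connect --discard-my-data r0 1
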
       drbdsetup del-peer resource peer_node_id
           The del-peer command removes a connection from a resource.

       drbdsetup del-path resource peer_node_id local-addr remote-addr
           The del-path command removes a path from a connection. Please note
           that it fails if the path is necessary to keep a connected
           connection intact. In order to remove all paths, disconnect the
           connection first.

       drbdsetup cstate resource peer_node_id
           Show the current state of a connection. The connection is
           identified by the node-id of the peer; see the drbdsetup connect
           command.

       drbdsetup del-minor minor
           Remove a replicated device. No lower-level device may be attached;
           see drbdsetup detach.

       drbdsetup del-resource resource
           Remove a resource. All volumes and connections must be removed
           first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
           drbdsetup down can be used to remove a resource together with all
           its volumes and connections.

       drbdsetup detach minor
           Detach the lower-level device of a replicated device. Available
           options:

           --force
               Force the detach and return immediately. This puts the
               lower-level device into failed state until all pending I/O has
               completed, and then detaches the device. Any I/O not yet
               submitted to the lower-level device (for example, because I/O
               on the device was suspended) is assumed to have failed.

       drbdsetup disconnect resource peer_node_id
           Remove a connection to a peer host. The connection is identified by
           the node-id of the peer; see the drbdsetup connect command.

       drbdsetup down {resource | all}
           Take a resource down by removing all volumes, connections, and the
           resource itself.

       drbdsetup dstate minor
           Show the current disk state of a lower-level device.

       drbdsetup events2 {resource | all}
           Show the current state of all configured DRBD objects, followed by
           all changes to the state.

           The output format is meant to be human- as well as machine-
           readable. Each line starts with a word that indicates the kind of
           event: exists for an existing object; create, destroy, and change
           if an object is created, destroyed, or changed; call or response if
           an event handler is called or it returns; or rename when the name
           of an object is changed. The second word indicates the object the
           event applies to: resource, device, connection, peer-device, path,
           helper, or a dash (-) to indicate that the current state has been
           dumped completely.

           The remaining words identify the object and describe the state that
           the object is in. Some special keys are worth mentioning:

           resource may_promote:{yes|no}
               Whether promoting to primary is expected to succeed. When
               quorum is enabled, this can be used to trigger failover. When
               may_promote:yes is reported on this node, then no writes are
               possible on any other node, which generally means that the
               application can be started on this node, even when it has been
               running on another.

           resource promotion_score:score
               An integer heuristic indicating the relative preference for
               promoting this resource. A higher score is better in terms of
               having local disks and having access to up-to-date data. The
               score may be positive even when some node is primary. It will
               be zero when promotion is impossible due to quorum or lack of
               any access to up-to-date data.

           Available options:

           --now
               Terminate after reporting the current state. The default is to
               continuously listen and report state changes.

           --poll
               Read from stdin and update when n is read. Newlines are
               ignored. Every other input terminates the command.

               Without --now, changes are printed as usual. On each n the
               current state is fetched, but only changed objects are printed.
               This is useful with --statistics or --full because DRBD does
               not otherwise send updates when only the statistics change.

               In combination with --now the full state is printed on each n.
               No other changes are printed.

           --statistics
               Include statistics in the output.

           --diff
               Write information in the form of a diff between the old and new
               state. This helps simple tools to avoid (old) state tracking on
               their own.

           --full
               Write complete state information, especially on change events.
               This enables --statistics and --verbose.

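           For example, to dump the current state of all resources once,
           including statistics, and then terminate:

               drbdsetup events2 --now --statistics all
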
       drbdsetup get-gi resource peer_node_id volume
           Show the data generation identifiers for a device on a particular
           connection. The device is identified by its volume number. The
           connection is identified by its endpoints; see the drbdsetup
           connect command.

           The output consists of the current UUID, bitmap UUID, and the first
           two history UUIDs, followed by a set of flags. The current UUID and
           history UUIDs are device specific; the bitmap UUID and flags are
           peer device specific. This command only shows the first two history
           UUIDs. Internally, DRBD maintains one history UUID for each
           possible peer device.

       drbdsetup invalidate minor
           Replace the local data of a device with that of a peer. All the
           local data will be marked out-of-sync, and a resync with the
           specified peer device will be initiated.

           Available options:

           --reset-bitmap=no
               Usually an invalidate operation sets all bits in the bitmap to
               out-of-sync before beginning the resync from the peer. By
               giving --reset-bitmap=no, DRBD will use the bitmap as it is.
               This is usually used after an online verify operation has found
               differences in the backing devices.

               The --reset-bitmap option is available since DRBD kernel driver
               9.0.29 and drbd-utils 9.17.

           --sync-from-peer-node-id
               This option allows the caller to select the node to resync
               from. If it is not given, DRBD selects a suitable source node
               itself.

       drbdsetup invalidate-remote resource peer_node_id volume
           Replace a peer device's data of a resource with the local data. The
           peer device's data will be marked out-of-sync, and a resync from
           the local node to the specified peer will be initiated.

           Available options:

           --reset-bitmap=no
               Usually an invalidate remote operation sets all bits in the
               bitmap to out-of-sync before beginning the resync to the peer.
               By giving --reset-bitmap=no, DRBD will use the bitmap as it is.
               This is usually used after an online verify operation has found
               differences in the backing devices.

               The --reset-bitmap option is available since DRBD kernel driver
               9.0.29 and drbd-utils 9.17.

       drbdsetup new-current-uuid minor
           Generate a new current UUID and rotate all other UUID values. This
           has at least two use cases, namely to skip the initial sync, and to
           reduce network bandwidth when starting in a single node
           configuration and then later (re-)integrating a remote site.

           Available option:

           --clear-bitmap
               Clears the sync bitmap in addition to generating a new current
               UUID.

           This can be used to skip the initial sync, if you want to start
           from scratch. This use-case only works on "Just Created" meta data.
           Necessary steps:

            1. On both nodes, initialize meta data and configure the device.

               drbdadm create-md --force res/volume-number

            2. They need to do the initial handshake, so they know their
               sizes.

               drbdadm up res

            3. They are now Connected Secondary/Secondary
               Inconsistent/Inconsistent. Generate a new current-uuid and
               clear the dirty bitmap.

               drbdadm --clear-bitmap new-current-uuid res

            4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
               Make one side primary and create a file system.

               drbdadm primary res

               mkfs -t fs-type $(drbdadm sh-dev res)

           One obvious side-effect is that the replica is full of old garbage
           (unless you made them identical using other means), so any
           online-verify is expected to find any number of out-of-sync blocks.

           You must not use this on pre-existing data!  Even though it may
           appear to work at first glance, once you switch to the other node,
           your data is toast, as it never got replicated. So do not leave out
           the mkfs (or equivalent).

           This can also be used to shorten the initial resync of a cluster
           where the second node is added after the first node has gone into
           production, by means of disk shipping. This use-case works on
           disconnected devices only; the device may be in primary or
           secondary role.

           The necessary steps on the current active server are:

            1. drbdsetup new-current-uuid --clear-bitmap minor

            2. Take a copy of the current active server. E.g. by pulling a
               disk out of the RAID1 controller, or by copying with dd. You
               need to copy the actual data, and the meta data.

            3. drbdsetup new-current-uuid minor

           Now add the disk to the new secondary node, and join it to the
           cluster. You will get a resync of those parts that were changed
           since the first call to drbdsetup in step 1.

1043       drbdsetup new-minor resource minor volume
1044           Create a new replicated device within a resource. The command
1045           creates a block device inode for the replicated device (by default,
1046           /dev/drbdminor). The volume number identifies the device within the
1047           resource.
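
           For example, the following sketch (the resource name r0, minor
           number 1, and volume number 0 are assumptions) creates /dev/drbd1
           as volume 0 of the resource r0:

               drbdsetup new-minor r0 1 0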
1048
1049       drbdsetup new-resource resource node_id,
1050       drbdsetup resource-options resource
1051           The new-resource command creates a new resource. The
1052           resource-options command changes the resource options of an
1053           existing resource. Available options:
1054
1055           --auto-promote bool-value
1056               A resource must be promoted to primary role before any of its
1057               devices can be mounted or opened for writing.
1058
1059               Before DRBD 9, this could only be done explicitly ("drbdadm
               primary"). Since DRBD 9, the auto-promote parameter allows a
               resource to be promoted to primary role automatically when one
               of its devices is mounted or opened for writing. As soon as all
1063               devices are unmounted or closed with no more remaining users,
1064               the role of the resource changes back to secondary.
1065
1066               Automatic promotion only succeeds if the cluster state allows
1067               it (that is, if an explicit drbdadm primary command would
1068               succeed). Otherwise, mounting or opening the device fails as it
1069               already did before DRBD 9: the mount(2) system call fails with
1070               errno set to EROFS (Read-only file system); the open(2) system
1071               call fails with errno set to EMEDIUMTYPE (wrong medium type).
1072
1073               Irrespective of the auto-promote parameter, if a device is
1074               promoted explicitly (drbdadm primary), it also needs to be
1075               demoted explicitly (drbdadm secondary).
1076
1077               The auto-promote parameter is available since DRBD 9.0.0, and
1078               defaults to yes.
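
               As an illustration (a sketch; the resource name r0, the device
               /dev/drbd0, and the mount point are assumptions):

                   drbdsetup resource-options r0 --auto-promote=yes
                   mount /dev/drbd0 /mnt   # r0 is promoted to primary
                   umount /mnt             # last user gone; back to secondary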
1079
1080           --cpu-mask cpu-mask
1081               Set the cpu affinity mask for DRBD kernel threads. The cpu mask
1082               is specified as a hexadecimal number. The default value is 0,
1083               which lets the scheduler decide which kernel threads run on
1084               which CPUs. CPU numbers in cpu-mask which do not exist in the
1085               system are ignored.
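
               For example, a mask of 3 (binary 11) restricts the DRBD kernel
               threads to CPUs 0 and 1 (the resource name r0 is an
               assumption):

                   drbdsetup resource-options r0 --cpu-mask=3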
1086
1087           --on-no-data-accessible policy
1088               Determine how to deal with I/O requests when the requested data
1089               is not available locally or remotely (for example, when all
1090               disks have failed). The defined policies are:
1091
1092               io-error
1093                   System calls fail with errno set to EIO.
1094
1095               suspend-io
1096                   The resource suspends I/O. I/O can be resumed by
1097                   (re)attaching the lower-level device, by connecting to a
1098                   peer which has access to the data, or by forcing DRBD to
1099                   resume I/O with drbdadm resume-io res. When no data is
1100                   available, forcing I/O to resume will result in the same
1101                   behavior as the io-error policy.
1102
1103               This setting is available since DRBD 8.3.9; the default policy
1104               is io-error.
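
               A sketch of selecting the suspend-io policy and later forcing
               I/O to resume (the resource name r0 is an assumption):

                   drbdsetup resource-options r0 --on-no-data-accessible=suspend-io
                   drbdadm resume-io r0   # without data, behaves like io-error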
1105
1106           --peer-ack-window value
1107               On each node and for each device, DRBD maintains a bitmap of
1108               the differences between the local and remote data for each peer
1109               device. For example, in a three-node setup (nodes A, B, C) each
1110               with a single device, every node maintains one bitmap for each
1111               of its peers.
1112
1113               When nodes receive write requests, they know how to update the
1114               bitmaps for the writing node, but not how to update the bitmaps
1115               between themselves. In this example, when a write request
1116               propagates from node A to B and C, nodes B and C know that they
1117               have the same data as node A, but not whether or not they both
1118               have the same data.
1119
1120               As a remedy, the writing node occasionally sends peer-ack
1121               packets to its peers which tell them which state they are in
1122               relative to each other.
1123
1124               The peer-ack-window parameter specifies how much data a primary
1125               node may send before sending a peer-ack packet. A low value
1126               causes increased network traffic; a high value causes less
1127               network traffic but higher memory consumption on secondary
1128               nodes and higher resync times between the secondary nodes after
1129               primary node failures. (Note: peer-ack packets may be sent due
1130               to other reasons as well, e.g. membership changes or expiry of
1131               the peer-ack-delay timer.)
1132
1133               The default value for peer-ack-window is 2 MiB, the default
1134               unit is sectors. This option is available since 9.0.0.
1135
1136           --peer-ack-delay expiry-time
1137               If after the last finished write request no new write request
1138               gets issued for expiry-time, then a peer-ack packet is sent. If
1139               a new write request is issued before the timer expires, the
1140               timer gets reset to expiry-time. (Note: peer-ack packets may be
1141               sent due to other reasons as well, e.g. membership changes or
1142               the peer-ack-window option.)
1143
               This parameter may influence resync behavior on remote nodes.
               Peer nodes need to wait until they receive a peer-ack before
               releasing a lock on an AL-extent. Resync operations between
               peers may need to wait for these locks.
1148
1149               The default value for peer-ack-delay is 100 milliseconds, the
1150               default unit is milliseconds. This option is available since
1151               9.0.0.
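
               Both peer-ack parameters can be tuned together; a sketch (the
               resource name r0 is an assumption; the window is given in
               sectors, the delay in milliseconds):

                   # 4 MiB window (8192 sectors), 50 ms delay
                   drbdsetup resource-options r0 --peer-ack-window=8192 --peer-ack-delay=50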
1152
1153           --quorum value
1154               When activated, a cluster partition requires quorum in order to
1155               modify the replicated data set. That means a node in the
1156               cluster partition can only be promoted to primary if the
1157               cluster partition has quorum. Every node with a disk directly
               connected to the node that should be promoted counts. If a
               primary node is to execute a write request but the cluster
               partition has lost quorum, it will freeze IO or reject the
               write request with an error (depending on the on-no-quorum
               setting). Upon losing quorum, a primary always invokes the
               quorum-lost handler. The handler is intended for notification
               purposes; its return code is ignored.
1165
1166               The option's value might be set to off, majority, all or a
1167               numeric value. If you set it to a numeric value, make sure that
1168               the value is greater than half of your number of nodes. Quorum
               is a mechanism to avoid data divergence; it can be used
               instead of fencing when there are more than two replicas. It
               defaults to off.
1172
               If all missing nodes are marked as outdated, a partition
               always has quorum, no matter how small it is. That is, if you
               disconnect all secondary nodes gracefully, a single primary
               continues to operate. The moment a single secondary is lost,
               however, it has to be assumed that it forms a partition with
               all the missing outdated nodes. If the local partition might
               then be smaller than the other one, quorum is lost at that
               moment.
1180
               If you want to allow permanently diskless nodes to gain
               quorum, it is recommended not to use majority or all. It is
               recommended to specify an absolute number instead, since
               DRBD's heuristic for determining the total number of diskful
               nodes in the cluster is unreliable.
1186
1187               The quorum implementation is available starting with the DRBD
1188               kernel driver version 9.0.7.
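
               For example, in a three-node cluster (a sketch; the resource
               name r0 is an assumption):

                   # a partition of two of the three nodes keeps quorum;
                   # without quorum, I/O completes with an error
                   drbdsetup resource-options r0 --quorum=majority --on-no-quorum=io-error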
1189
1190           --quorum-minimum-redundancy value
1191               This option sets the minimal required number of nodes with an
1192               UpToDate disk to allow the partition to gain quorum. This is a
1193               different requirement than the plain quorum option expresses.
1194
1195               The option's value might be set to off, majority, all or a
1196               numeric value. If you set it to a numeric value, make sure that
1197               the value is greater than half of your number of nodes.
1198
               If you want to allow permanently diskless nodes to gain
               quorum, it is recommended not to use majority or all. It is
               recommended to specify an absolute number instead, since
               DRBD's heuristic for determining the total number of diskful
               nodes in the cluster is unreliable.
1204
1205               This option is available starting with the DRBD kernel driver
1206               version 9.0.10.
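
               A sketch combining both quorum settings (the resource name r0
               and the node counts are assumptions):

                   # promotion requires a majority partition that contains
                   # at least two UpToDate disks
                   drbdsetup resource-options r0 --quorum=majority --quorum-minimum-redundancy=2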
1207
1208           --on-no-quorum {io-error | suspend-io}
               By default, DRBD freezes IO on a device that has lost quorum.
               Setting on-no-quorum to io-error instead completes all IO
               operations with an error if quorum is lost.

               The on-no-quorum option is available starting with the DRBD
               kernel driver version 9.0.8.
1215
1216
1217       drbdsetup outdate minor
1218           Mark the data on a lower-level device as outdated. This is used for
1219           fencing, and prevents the resource the device is part of from
1220           becoming primary in the future. See the --fencing disk option.
1221
1222       drbdsetup pause-sync resource peer_node_id volume
1223           Stop resynchronizing between a local and a peer device by setting
1224           the local pause flag. The resync can only resume if the pause flags
1225           on both sides of a connection are cleared.
1226
1227       drbdsetup primary resource
1228           Change the role of a node in a resource to primary. This allows the
1229           replicated devices in this resource to be mounted or opened for
1230           writing. Available options:
1231
1232           --overwrite-data-of-peer
1233               This option is an alias for the --force option.
1234
1235           --force
1236               Force the resource to become primary even if some devices are
1237               not guaranteed to have up-to-date data. This option is used to
1238               turn one of the nodes in a newly created cluster into the
1239               primary node, or when manually recovering from a disaster.
1240
1241               Note that this can lead to split-brain scenarios. Also, when
1242               forcefully turning an inconsistent device into an up-to-date
1243               device, it is highly recommended to use any integrity checks
1244               available (such as a filesystem check) to make sure that the
1245               device can at least be used without crashing the system.
1246
1247           Note that DRBD usually only allows one node in a cluster to be in
1248           primary role at any time; this allows DRBD to coordinate access to
1249           the devices in a resource across nodes. The --allow-two-primaries
1250           network option changes this; in that case, a mechanism outside of
1251           DRBD needs to coordinate device access.
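
           For the first promotion in a freshly created cluster, where all
           devices are still Inconsistent, a forced promotion is required (a
           sketch; the resource name r0 is an assumption):

               drbdsetup primary r0 --force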
1252
1253       drbdsetup resize minor
1254           Reexamine the size of the lower-level devices of a replicated
1255           device on all nodes. This command is called after the lower-level
1256           devices on all nodes have been grown to adjust the size of the
1257           replicated device. Available options:
1258
1259           --assume-peer-has-space
1260               Resize the device even if some of the peer devices are not
1261               connected at the moment. DRBD will try to resize the peer
1262               devices when they next connect. It will refuse to connect to a
1263               peer device which is too small.
1264
1265           --assume-clean
1266               Do not resynchronize the added disk space; instead, assume that
1267               it is identical on all nodes. This option can be used when the
1268               disk space is uninitialized and differences do not matter, or
1269               when it is known to be identical on all nodes. See the
1270               drbdsetup verify command.
1271
1272           --size val
1273               This option can be used to online shrink the usable size of a
               drbd device. It is the user's responsibility to make sure
               that a file system on the device is not truncated by this
               operation.
1276
           --al-stripes val --al-stripe-size val
               These options may be used to change the layout of the activity
               log online. In case of internal meta data this may involve
               shrinking the user-visible size at the same time (using
               --size) or increasing the available space on the backing
               devices.
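
           For example, after growing the backing devices on all nodes, or to
           shrink online (a sketch; the minor number 0 and the target size
           are assumptions):

               drbdsetup resize 0              # adopt the grown backing devices
               drbdsetup resize 0 --size=40G   # online shrink to 40 GiB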
1283
1284
1285       drbdsetup resume-io minor
1286           Resume I/O on a replicated device. See the --fencing net option.
1287
1288       drbdsetup resume-sync resource peer_node_id volume
1289           Allow resynchronization to resume by clearing the local sync pause
1290           flag.
1291
1292       drbdsetup role resource
1293           Show the current role of a resource.
1294
1295       drbdsetup secondary resource
1296           Change the role of a node in a resource to secondary. This command
1297           fails if the replicated device is in use.
1298
1299       drbdsetup show {resource | all}
1300           Show the current configuration of a resource, or of all resources.
1301           Available options:
1302
1303           --show-defaults
1304               Show all configuration parameters, even the ones with default
1305               values. Normally, parameters with default values are not shown.
1306
1307
1308       drbdsetup show-gi resource peer_node_id volume
1309           Show the data generation identifiers for a device on a particular
1310           connection. In addition, explain the output. The output otherwise
1311           is the same as in the drbdsetup get-gi command.
1312
1313       drbdsetup state
1314           This is an alias for drbdsetup role. Deprecated.
1315
1316       drbdsetup status {resource | all}
1317           Show the status of a resource, or of all resources. The output
           consists of one paragraph for each configured resource. Each
           paragraph contains one line for the resource, followed by one line
           for each device, and one line for each connection. The device and
1321           connection lines are indented. The connection lines are followed by
1322           one line for each peer device; these lines are indented against the
1323           connection line.
1324
           Long lines are wrapped around at terminal width, and indented to
           indicate how the lines belong together. Available options:
1327
1328           --verbose
1329               Include more information in the output even when it is likely
1330               redundant or irrelevant.
1331
1332           --statistics
1333               Include data transfer statistics in the output.
1334
1335           --color={always | auto | never}
1336               Colorize the output. With --color=auto, drbdsetup emits color
1337               codes only when standard output is connected to a terminal.
1338
1339           For example, the non-verbose output for a resource with only one
1340           connection and only one volume could look like this:
1341
1342               drbd0 role:Primary
1343                 disk:UpToDate
1344                 host2.example.com role:Secondary
1345                   disk:UpToDate
1346
1347
1348           With the --verbose option, the same resource could be reported as:
1349
1350               drbd0 node-id:1 role:Primary suspended:no
1351                 volume:0 minor:1 disk:UpToDate blocked:no
1352                 host2.example.com local:ipv4:192.168.123.4:7788
1353                     peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1354                     role:Secondary congested:no
1355                   volume:0 replication:Connected disk:UpToDate resync-suspended:no
1356
1357
1358
1359       drbdsetup suspend-io minor
1360           Suspend I/O on a replicated device. It is not usually necessary to
1361           use this command.
1362
1363       drbdsetup verify resource peer_node_id volume
1364           Start online verification, change which part of the device will be
1365           verified, or stop online verification. The command requires the
1366           specified peer to be connected.
1367
1368           Online verification compares each disk block on the local and peer
1369           node. Blocks which differ between the nodes are marked as
1370           out-of-sync, but they are not automatically brought back into sync.
           To bring them into sync, the drbdsetup invalidate or drbdsetup
           invalidate-remote command with the --reset-bitmap=no option can be
           used.
1373           Progress can be monitored in the output of drbdsetup status
1374           --statistics. Available options:
1375
1376           --start position
1377               Define where online verification should start. This parameter
1378               is ignored if online verification is already in progress. If
1379               the start parameter is not specified, online verification will
1380               continue where it was interrupted (if the connection to the
1381               peer was lost while verifying), after the previous stop sector
1382               (if the previous online verification has finished), or at the
1383               beginning of the device (if the end of the device was reached,
1384               or online verify has not run before).
1385
1386               The position on disk is specified in disk sectors (512 bytes)
1387               by default.
1388
1389           --stop position
1390               Define where online verification should stop. If online
1391               verification is already in progress, the stop position of the
1392               active online verification process is changed. Use this to stop
1393               online verification.
1394
1395               The position on disk is specified in disk sectors (512 bytes)
1396               by default.
1397
1398           Also see the notes on data integrity in the drbd.conf(5) manual
1399           page.
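
           A typical verification cycle might look like this (a sketch; the
           resource name r0, peer node id 1, volume 0, and the
           invalidate-remote argument order are assumptions):

               drbdsetup verify r0 1 0 --start=0   # verify from the beginning
               drbdsetup status r0 --statistics    # monitor progress
               # resync only the blocks that verify marked out-of-sync:
               drbdsetup invalidate-remote r0 1 0 --reset-bitmap=no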
1400
1401       drbdsetup wait-connect-volume resource peer_node_id volume,
1402       drbdsetup wait-connect-connection resource peer_node_id,
1403       drbdsetup wait-connect-resource resource,
1404       drbdsetup wait-sync-volume resource peer_node_id volume,
1405       drbdsetup wait-sync-connection resource peer_node_id,
1406       drbdsetup wait-sync-resource resource
           The wait-connect-* commands wait until a device on a peer is
           visible. The wait-sync-* commands wait until a device on a peer is
           up to date. Available options for both commands:
1410
1411           --degr-wfc-timeout timeout
1412               Define how long to wait until all peers are connected in case
1413               the cluster consisted of a single node only when the system
1414               went down. This parameter is usually set to a value smaller
1415               than wfc-timeout. The assumption here is that peers which were
1416               unreachable before a reboot are less likely to be reachable
1417               after the reboot, so waiting is less likely to help.
1418
1419               The timeout is specified in seconds. The default value is 0,
1420               which stands for an infinite timeout. Also see the wfc-timeout
1421               parameter.
1422
1423           --outdated-wfc-timeout timeout
1424               Define how long to wait until all peers are connected if all
1425               peers were outdated when the system went down. This parameter
1426               is usually set to a value smaller than wfc-timeout. The
1427               assumption here is that an outdated peer cannot have become
1428               primary in the meantime, so we don't need to wait for it as
1429               long as for a node which was alive before.
1430
1431               The timeout is specified in seconds. The default value is 0,
1432               which stands for an infinite timeout. Also see the wfc-timeout
1433               parameter.
1434
1435           --wait-after-sb
1436               This parameter causes DRBD to continue waiting in the init
1437               script even when a split-brain situation has been detected, and
1438               the nodes therefore refuse to connect to each other.
1439
1440           --wfc-timeout timeout
1441               Define how long the init script waits until all peers are
1442               connected. This can be useful in combination with a cluster
1443               manager which cannot manage DRBD resources: when the cluster
1444               manager starts, the DRBD resources will already be up and
1445               running. With a more capable cluster manager such as Pacemaker,
1446               it makes more sense to let the cluster manager control DRBD
1447               resources. The timeout is specified in seconds. The default
1448               value is 0, which stands for an infinite timeout. Also see the
1449               degr-wfc-timeout parameter.
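
           For example, an init script might wait up to 120 seconds in the
           normal case, but only 60 seconds after a degraded start (a sketch;
           the resource name r0 is an assumption):

               drbdsetup wait-connect-resource r0 --wfc-timeout=120 --degr-wfc-timeout=60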
1450
1451
1452       drbdsetup forget-peer resource peer_node_id
1453           The forget-peer command removes all traces of a peer node from the
           meta-data. It frees a bitmap slot in the meta-data and makes it
           available for further bitmap slot allocation in case a so-far
           never seen node connects.
1457
1458           The connection must be taken down before this command may be used.
           If the peer re-connects at a later point, a bitmap-based resync
           will be turned into a full sync.
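
           A sketch (the resource name r0 and the peer node id 1 are
           assumptions):

               drbdadm disconnect r0        # the connection must be down first
               drbdsetup forget-peer r0 1   # free the bitmap slot for node-id 1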
1461
1462       drbdsetup rename-resource resource new_name
1463           Change the name of resource to new_name on the local node. Note
1464           that, since there is no concept of resource names in DRBD's network
1465           protocol, it is technically possible to have different names for a
1466           resource on different nodes. However, it is strongly recommended to
1467           issue the same rename-resource command on all nodes to have
1468           consistent naming across the cluster.
1469
1470           A rename event will be issued on the events2 stream to notify users
1471           of the new name.
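
           For example, to rename consistently, run the same command on every
           node (the names are assumptions):

               drbdsetup rename-resource r0 r0_new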
1472

EXAMPLES

1474       Please see the DRBD User's Guide[1] for examples.
1475

VERSION

1477       This document was revised for version 9.0.0 of the DRBD distribution.
1478

AUTHOR

1480       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1481       Ellenberg <lars.ellenberg@linbit.com>.
1482

REPORTING BUGS

1484       Report bugs to <drbd-user@lists.linbit.com>.
1485
COPYRIGHT
       Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1488       Lars Ellenberg. This is free software; see the source for copying
1489       conditions. There is NO warranty; not even for MERCHANTABILITY or
1490       FITNESS FOR A PARTICULAR PURPOSE.
1491

SEE ALSO

1493       drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1494       Site[2]
1495

NOTES

1497        1. DRBD User's Guide
1498           http://www.drbd.org/users-guide/
1499
1500        2. DRBD Web Site
1501           http://www.drbd.org/
1502
1503
1504
DRBD 9.0.x                      17 January 2018                   DRBDSETUP(8)