DRBDSETUP(8)                 System Administration                DRBDSETUP(8)

NAME

       drbdsetup - Configure the DRBD kernel module

SYNOPSIS

       drbdsetup command {argument...} [option...]

DESCRIPTION

       The drbdsetup utility serves to configure the DRBD kernel module and to
       show its current configuration. Users usually interact with the drbdadm
       utility, which provides a more high-level interface to DRBD than
       drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
       drbdsetup.)

       Some option arguments have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.

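       For example, assuming an illustrative resource r0 with peer node-id 1,
       the following two invocations are equivalent, because the default unit
       of sndbuf-size is bytes:

           # 2M is shorthand for 2 * 1024 * 1024 = 2097152 bytes
           drbdsetup net-options r0 1 --sndbuf-size 2M
           drbdsetup net-options r0 1 --sndbuf-size 2097152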

COMMANDS

       drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
       drbdsetup disk-options minor
           The attach command attaches a lower-level device to an existing
           replicated device. The disk-options command changes the disk
           options of an attached lower-level device. In either case, the
           replicated device must have been created with drbdsetup new-minor.

           Both commands refer to the replicated device by its minor number.
           lower_dev is the name of the lower-level device.  meta_data_dev is
           the name of the device containing the metadata, and may be the same
           as lower_dev.  meta_data_index is either a numeric metadata index,
           or the keyword internal for internal metadata, or the keyword
           flexible for variable-size external metadata. An example invocation
           follows the option list below. Available options:

           --al-extents extents
               DRBD automatically maintains a "hot" or "active" disk area
               likely to be written to again soon based on the recent write
               activity. The "active" disk area can be written to immediately,
               while "inactive" disk areas must be "activated" first, which
               requires a meta-data write. We also refer to this active disk
               area as the "activity log".

               The activity log saves meta-data writes, but the whole log must
               be resynced upon recovery of a failed node. The size of the
               activity log is a major factor of how long a resync will take
               and how fast a replicated disk will become consistent after a
               crash.

               The activity log consists of a number of 4-Megabyte segments;
               the al-extents parameter determines how many of those segments
               can be active at the same time. The default value for
               al-extents is 1237, with a minimum of 7 and a maximum of 65536.

               Note that the effective maximum may be smaller, depending on
               how you created the device meta data; see also drbdmeta(8). The
               effective maximum is 919 * (available on-disk activity-log
               ring-buffer area / 4 kB - 1); the default 32 kB ring buffer
               yields a maximum of 6433 (which covers more than 25 GiB of
               data). We recommend keeping this well within the amount your
               backend storage and replication link are able to resync within
               about five minutes.

           --al-updates {yes | no}
               With this parameter, the activity log can be turned off
               entirely (see the al-extents parameter). This will speed up
               writes because fewer meta-data writes will be necessary, but
               the entire device needs to be resynchronized upon recovery of a
               failed primary node. The default value for al-updates is yes.

           --disk-barrier,
           --disk-flushes,
           --disk-drain
               DRBD has three methods of handling the ordering of dependent
               write requests:

               disk-barrier
                   Use disk barriers to make sure that requests are written to
                   disk in the right order. Barriers ensure that all requests
                   submitted before a barrier make it to the disk before any
                   requests submitted after the barrier. This is implemented
                   using 'tagged command queuing' on SCSI devices and 'native
                   command queuing' on SATA devices. Only some devices and
                   device stacks support this method. The device mapper (LVM)
                   only supports barriers in some configurations.

                   Note that on systems which do not support disk barriers,
                   enabling this option can lead to data loss or corruption.
                   Until DRBD 8.4.1, disk-barrier was turned on if the I/O
                   stack below DRBD did support barriers. Kernels since
                   linux-2.6.36 (or 2.6.32 RHEL6) no longer make it possible
                   to detect whether barriers are supported. Since drbd-8.4.2,
                   this option is off by default and needs to be enabled
                   explicitly.

               disk-flushes
                   Use disk flushes between dependent write requests, also
                   referred to as 'force unit access' by drive vendors. This
                   forces all data to disk. This option is enabled by default.

               disk-drain
                   Wait for the request queue to "drain" (that is, wait for
                   the requests to finish) before submitting a dependent write
                   request. This method requires that requests are stable on
                   disk when they finish. Before DRBD 8.0.9, this was the only
                   method implemented. This option is enabled by default. Do
                   not disable in production environments.

               From these three methods, drbd will use the first that is
               enabled and supported by the backing storage device. If all
               three of these options are turned off, DRBD will submit write
               requests without bothering about dependencies. Depending on the
               I/O stack, write requests can be reordered, and they can be
               submitted in a different order on different cluster nodes. This
               can result in data loss or corruption. Therefore, turning off
               all three methods of controlling write ordering is strongly
               discouraged.

               A general guideline for configuring write ordering is to use
               disk barriers or disk flushes when using ordinary disks (or an
               ordinary disk array) with a volatile write cache. On storage
               without cache or with a battery backed write cache, disk
               draining can be a reasonable choice.

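               For example, on storage with a battery-backed write cache,
               barriers and flushes are sometimes turned off so that only
               draining remains in effect; the minor number is illustrative:

                   # safe only with a non-volatile (battery-backed) write cache
                   drbdsetup disk-options 0 --disk-barrier=no --disk-flushes=no
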
           --disk-timeout
               If the lower-level device on which a DRBD device stores its
               data does not finish an I/O request within the defined
               disk-timeout, DRBD treats this as a failure. The lower-level
               device is detached, and the device's disk state advances to
               Diskless. If DRBD is connected to one or more peers, the failed
               request is passed on to one of them.

               This option is dangerous and may lead to kernel panic!

               "Aborting" requests, or force-detaching the disk, is intended
               for completely blocked/hung local backing devices which no
               longer complete requests at all, not even with error
               completions. In this situation, usually a hard-reset and
               failover is the only way out.

               By "aborting", basically faking a local error-completion, we
               allow for a more graceful switchover by cleanly migrating
               services. Still the affected node has to be rebooted "soon".

               By completing these requests, we allow the upper layers to
               re-use the associated data pages.

               If later the local backing device "recovers", and now DMAs some
               data from disk into the original request pages, in the best
               case it will just put random data into unused pages; but
               typically it will corrupt meanwhile completely unrelated data,
               causing all sorts of damage.

               This means that a delayed successful completion, especially for
               READ requests, is a reason to panic(). We assume that a delayed
               *error* completion is OK, though we still will complain noisily
               about it.

               The default value of disk-timeout is 0, which stands for an
               infinite timeout. Timeouts are specified in units of 0.1
               seconds. This option is available since DRBD 8.3.12.

           --md-flushes
               Enable disk flushes and disk barriers on the meta-data device.
               This option is enabled by default. See the disk-flushes
               parameter.

           --on-io-error handler
               Configure how DRBD reacts to I/O errors on a lower-level
               device. The following policies are defined:

               pass_on
                   Change the disk status to Inconsistent, mark the failed
                   block as inconsistent in the bitmap, and retry the I/O
                   operation on a remote cluster node.

               call-local-io-error
                   Call the local-io-error handler (see the handlers section).

               detach
                   Detach the lower-level device and continue in diskless
                   mode.

           --read-balancing policy
               Distribute read requests among cluster nodes as defined by
               policy. The supported policies are prefer-local (the default),
               prefer-remote, round-robin, least-pending,
               when-congested-remote, 32K-striping, 64K-striping,
               128K-striping, 256K-striping, 512K-striping and 1M-striping.

               This option is available since DRBD 8.4.1.

           --resync-after minor
               Define that a device should only resynchronize after the
               specified other device. By default, no order between devices is
               defined, and all devices will resynchronize in parallel.
               Depending on the configuration of the lower-level devices, and
               the available network and disk bandwidth, this can slow down
               the overall resync process. This option can be used to form a
               chain or tree of dependencies among devices.

           --size size
               Specify the size of the lower-level device explicitly instead
               of determining it automatically. The device size must be
               determined once and is remembered for the lifetime of the
               device. In order to determine it automatically, all the
               lower-level devices on all nodes must be attached, and all
               nodes must be connected. If the size is specified explicitly,
               this is not necessary. The size value is assumed to be in units
               of sectors (512 bytes) by default.

           --discard-zeroes-if-aligned {yes | no}
               There are several aspects to discard/trim/unmap support on
               linux block devices. Even if discard is supported in general,
               it may fail silently, or may partially ignore discard requests.
               Devices also announce whether reading from unmapped blocks
               returns defined data (usually zeroes), or undefined data
               (possibly old data, possibly garbage).

               If on different nodes, DRBD is backed by devices with differing
               discard characteristics, discards may lead to data divergence
               (old data or garbage left over on one backend, zeroes due to
               unmapped areas on the other backend). Online verify would then
               potentially report tons of spurious differences. While probably
               harmless for most use cases (fstrim on a file system), DRBD
               cannot have that.

               To play it safe, we have to disable discard support if our
               local backend (on a Primary) does not support
               "discard_zeroes_data=true". We also have to translate discards
               to explicit zero-out on the receiving side, unless the
               receiving side (Secondary) supports "discard_zeroes_data=true",
               thereby allocating areas that were supposed to be unmapped.

               There are some devices (notably the LVM/DM thin provisioning)
               that are capable of discard, but announce
               discard_zeroes_data=false. In the case of DM-thin, discards
               aligned to the chunk size will be unmapped, and reading from
               unmapped sectors will return zeroes. However, unaligned partial
               head or tail areas of discard requests will be silently
               ignored.

               If we now add a helper to explicitly zero-out these unaligned
               partial areas, while passing on the discard of the aligned full
               chunks, we effectively achieve discard_zeroes_data=true on such
               devices.

               Setting discard-zeroes-if-aligned to yes will allow DRBD to use
               discards, and to announce discard_zeroes_data=true, even on
               backends that announce discard_zeroes_data=false.

               Setting discard-zeroes-if-aligned to no will cause DRBD to
               always fall back to zero-out on the receiving side, and to not
               even announce discard capabilities on the Primary, if the
               respective backend announces discard_zeroes_data=false.

               We used to ignore the discard_zeroes_data setting completely.
               To not break established and expected behaviour, and suddenly
               cause fstrim on thin-provisioned LVs to run out-of-space
               instead of freeing up space, the default value is yes.

               This option is available since 8.4.7.

           --disable-write-same {yes | no}
               Some disks announce WRITE_SAME support to the kernel but fail
               with an I/O error upon actually receiving such a request. This
               mostly happens when using virtualized disks -- notably, this
               behavior has been observed with VMware's virtual disks.

               When disable-write-same is set to yes, WRITE_SAME detection is
               manually overridden and support is disabled.

               The default value of disable-write-same is no. This option is
               available since 8.4.7.

           --rs-discard-granularity byte
               When rs-discard-granularity is set to a non-zero, positive
               value, then DRBD tries to do a resync operation in requests of
               this size. In case such a block contains only zero bytes on the
               sync source node, the sync target node will issue a
               discard/trim/unmap command for the area.

               The value is constrained by the discard granularity of the
               backing block device. In case rs-discard-granularity is not a
               multiple of the discard granularity of the backing block
               device, DRBD rounds it up. The feature becomes active only if
               the backing block device reads back zeroes after a discard
               command.

               The usage of rs-discard-granularity may cause c-max-rate to be
               exceeded. In particular, the resync rate may reach 10x the
               value of rs-discard-granularity per second.

               The default value of rs-discard-granularity is 0. This option
               is available since 8.4.7.

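           For example, to attach /dev/sdb1 with internal meta data to minor
           0, and later raise its activity-log size (device name and values
           are illustrative):

               drbdsetup attach 0 /dev/sdb1 /dev/sdb1 internal
               # allow more activity-log extents to be active at once
               drbdsetup disk-options 0 --al-extents 6007
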
       drbdsetup peer-device-options resource peer_node_id volume
           These are options that affect the peer's device. An example follows
           the option list below.

           --c-delay-target delay_target,
           --c-fill-target fill_target,
           --c-max-rate max_rate,
           --c-plan-ahead plan_time
               Dynamically control the resync speed. The following modes are
               available:

               •   Dynamic control with fill target (default). Enabled when
                   c-plan-ahead is non-zero and c-fill-target is non-zero. The
                   goal is to fill the buffers along the data path with a
                   defined amount of data. This mode is recommended when
                   DRBD-proxy is used. Configured with c-plan-ahead,
                   c-fill-target and c-max-rate.

               •   Dynamic control with delay target. Enabled when
                   c-plan-ahead is non-zero (default) and c-fill-target is
                   zero. The goal is to have a defined delay along the path.
                   Configured with c-plan-ahead, c-delay-target and
                   c-max-rate.

               •   Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD
                   will try to perform resync I/O at a fixed rate. Configured
                   with resync-rate.

               The c-plan-ahead parameter defines how fast DRBD adapts to
               changes in the resync speed. It should be set to five times the
               network round-trip time or more. The default value of
               c-plan-ahead is 20, in units of 0.1 seconds.

               The c-fill-target parameter defines how much resync data DRBD
               should aim to have in-flight at all times. Common values for
               "normal" data paths range from 4K to 100K. The default value of
               c-fill-target is 100, in units of sectors.

               The c-delay-target parameter defines the delay in the resync
               path that DRBD should aim for. This should be set to five times
               the network round-trip time or more. The default value of
               c-delay-target is 10, in units of 0.1 seconds.

               The c-max-rate parameter limits the maximum bandwidth used by
               dynamically controlled resyncs. Setting this to zero removes
               the limitation (since DRBD 9.0.28). It should be set to either
               the bandwidth available between the DRBD hosts and the machines
               hosting DRBD-proxy, or to the available disk bandwidth. The
               default value of c-max-rate is 102400, in units of KiB/s.

               Dynamic resync speed control is available since DRBD 8.3.9.

           --c-min-rate min_rate
               A node which is primary and sync-source has to schedule
               application I/O requests and resync I/O requests. The
               c-min-rate parameter limits how much bandwidth is available for
               resync I/O; the remaining bandwidth is used for application
               I/O.

               A c-min-rate value of 0 means that there is no limit on the
               resync I/O bandwidth. This can slow down application I/O
               significantly. Use a value of 1 (1 KiB/s) for the lowest
               possible resync rate.

               The default value of c-min-rate is 250, in units of KiB/s.

           --resync-rate rate
               Define how much bandwidth DRBD may use for resynchronizing.
               DRBD allows "normal" application I/O even during a resync. If
               the resync takes up too much bandwidth, application I/O can
               become very slow. This parameter allows avoiding that. Note
               that this option only works when the dynamic resync controller
               is disabled.

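           For example, to cap the dynamic resync controller at 40 MiB/s for
           one peer device, or to switch that peer device to a fixed resync
           rate instead (resource r0, peer node-id 1 and volume 0 are
           illustrative):

               # dynamic control, but never exceed 40 MiB/s
               drbdsetup peer-device-options r0 1 0 --c-max-rate 40M

               # disable the dynamic controller and use a fixed 10 MiB/s rate
               drbdsetup peer-device-options r0 1 0 --c-plan-ahead 0 --resync-rate 10M
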
       drbdsetup check-resize minor
           Remember the current size of the lower-level device of the
           specified replicated device. Used by drbdadm. The size information
           is stored in file /var/lib/drbd/drbd-minor-minor.lkbd.

       drbdsetup new-peer resource peer_node_id,
       drbdsetup net-options resource peer_node_id
           The new-peer command creates a connection within a resource. The
           resource must have been created with drbdsetup new-resource. The
           net-options command changes the network options of an existing
           connection. Before a connection can be activated with the connect
           command, at least one path needs to be added with the new-path
           command. An example invocation follows the option list below.
           Available options:

           --after-sb-0pri policy
               Define how to react if a split-brain scenario is detected and
               none of the two nodes is in primary role. (We detect
               split-brain scenarios when two nodes connect; split-brain
               decisions are always between two nodes.) The defined policies
               are:

               disconnect
                   No automatic resynchronization; simply disconnect.

               discard-younger-primary,
               discard-older-primary
                   Resynchronize from the node which became primary first
                   (discard-younger-primary) or last (discard-older-primary).
                   If both nodes became primary independently, the
                   discard-least-changes policy is used.

               discard-zero-changes
                   If only one of the nodes wrote data since the split brain
                   situation was detected, resynchronize from this node to the
                   other. If both nodes wrote data, disconnect.

               discard-least-changes
                   Resynchronize from the node with more modified blocks.

               discard-node-nodename
                   Always resynchronize to the named node.

           --after-sb-1pri policy
               Define how to react if a split-brain scenario is detected, with
               one node in primary role and one node in secondary role. (We
               detect split-brain scenarios when two nodes connect, so
               split-brain decisions are always between two nodes.) The
               defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               consensus
                   Discard the data on the secondary node if the after-sb-0pri
                   algorithm would also discard the data on the secondary
                   node. Otherwise, disconnect.

               violently-as0p
                   Always take the decision of the after-sb-0pri algorithm,
                   even if it causes an erratic change of the primary's view
                   of the data. This is only useful if a single-node file
                   system (i.e., not OCFS2 or GFS) with the
                   allow-two-primaries flag is used. This option can cause the
                   primary node to crash, and should not be used.

               discard-secondary
                   Discard the data on the secondary node.

               call-pri-lost-after-sb
                   Always take the decision of the after-sb-0pri algorithm. If
                   the decision is to discard the data on the primary node,
                   call the pri-lost-after-sb handler on the primary node.

           --after-sb-2pri policy
               Define how to react if a split-brain scenario is detected and
               both nodes are in primary role. (We detect split-brain
               scenarios when two nodes connect, so split-brain decisions are
               always between two nodes.) The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               violently-as0p
                   See the violently-as0p policy for after-sb-1pri.

               call-pri-lost-after-sb
                   Call the pri-lost-after-sb helper program on one of the
                   machines unless that machine can demote to secondary. The
                   helper program is expected to reboot the machine, which
                   brings the node into a secondary role. Which machine runs
                   the helper program is determined by the after-sb-0pri
                   strategy.

           --allow-two-primaries
               The most common way to configure DRBD devices is to allow only
               one node to be primary (and thus writable) at a time.

               In some scenarios it is preferable to allow two nodes to be
               primary at once; a mechanism outside of DRBD then must make
               sure that writes to the shared, replicated device happen in a
               coordinated way. This can be done with a shared-storage cluster
               file system like OCFS2 and GFS, or with virtual machine images
               and a virtual machine manager that can migrate virtual machines
               between physical machines.

               The allow-two-primaries parameter tells DRBD to allow two nodes
               to be primary at the same time. Never enable this option when
               using a non-distributed file system; otherwise, data corruption
               and node crashes will result!

           --always-asbp
               Normally the automatic after-split-brain policies are only used
               if the current states of the UUIDs do not indicate the presence
               of a third node.

               With this option you request that the automatic
               after-split-brain policies are used as long as the data sets of
               the nodes are somehow related. This might cause a full sync if
               the UUIDs indicate the presence of a third node (or if double
               faults have led to strange UUID sets).

           --connect-int time
               As soon as a connection between two nodes is configured with
               drbdsetup connect, DRBD immediately tries to establish the
               connection. If this fails, DRBD waits for connect-int seconds
               and then repeats. The default value of connect-int is 10
               seconds.

           --cram-hmac-alg hash-algorithm
               Configure the hash-based message authentication code (HMAC) or
               secure hash algorithm to use for peer authentication. The
               kernel supports a number of different algorithms, some of which
               may be loadable as kernel modules. See the shash algorithms
               listed in /proc/crypto. By default, cram-hmac-alg is unset.
               Peer authentication also requires a shared-secret to be
               configured.

           --csums-alg hash-algorithm
               Normally, when two nodes resynchronize, the sync target
               requests a piece of out-of-sync data from the sync source, and
               the sync source sends the data. With many usage patterns, a
               significant number of those blocks will actually be identical.

               When a csums-alg algorithm is specified, when requesting a
               piece of out-of-sync data, the sync target also sends along a
               hash of the data it currently has. The sync source compares
               this hash with its own version of the data. It sends the sync
               target the new data if the hashes differ, and tells it that the
               data are the same otherwise. This reduces the network bandwidth
               required, at the cost of higher cpu utilization and possibly
               increased I/O on the sync target.

               The csums-alg can be set to one of the secure hash algorithms
               supported by the kernel; see the shash algorithms listed in
               /proc/crypto. By default, csums-alg is unset.

           --csums-after-crash-only
               Enabling this option (and csums-alg, above) makes it possible
               to use the checksum-based resync only for the first resync
               after a primary crash, but not for later "network hiccups".

               In most cases, blocks that are marked as need-to-be-resynced
               are in fact changed, so calculating checksums, and both reading
               and writing the blocks on the resync target, is all effective
               overhead.

               The advantage of checksum-based resync is mostly after primary
               crash recovery, where the recovery marked larger areas (those
               covered by the activity log) as need-to-be-resynced, just in
               case. Introduced in 8.4.5.

           --data-integrity-alg alg
               DRBD normally relies on the data integrity checks built into
               the TCP/IP protocol, but if a data integrity algorithm is
               configured, it will additionally use this algorithm to make
               sure that the data received over the network match what the
               sender has sent. If a data integrity error is detected, DRBD
               will close the network connection and reconnect, which will
               trigger a resync.

               The data-integrity-alg can be set to one of the secure hash
               algorithms supported by the kernel; see the shash algorithms
               listed in /proc/crypto. By default, this mechanism is turned
               off.

               Because of the CPU overhead involved, we recommend not to use
               this option in production environments. Also see the notes on
               data integrity below.

           --fencing fencing_policy
               Fencing is a preventive measure to avoid situations where both
               nodes are primary and disconnected. This is also known as a
               split-brain situation. DRBD supports the following fencing
               policies:

               dont-care
                   No fencing actions are taken. This is the default policy.

               resource-only
                   If a node becomes a disconnected primary, it tries to fence
                   the peer. This is done by calling the fence-peer handler.
                   The handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there.

               resource-and-stonith
                   If a node becomes a disconnected primary, it freezes all
                   its IO operations and calls its fence-peer handler. The
                   fence-peer handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there. In case it cannot do that, it should stonith
                   the peer. IO is resumed as soon as the situation is
                   resolved. In case the fence-peer handler fails, I/O can be
                   resumed manually with 'drbdadm resume-io'.

           --ko-count number
               If a secondary node fails to complete a write request in
               ko-count times the timeout parameter, it is excluded from the
               cluster. The primary node then sets the connection to this
               secondary node to Standalone. To disable this feature, you
               should explicitly set it to 0; defaults may change between
               versions.

           --max-buffers number
               Limits the memory usage per DRBD minor device on the receiving
               side, or for internal buffers during resync or online-verify.
               Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum
               possible setting is hard coded to 32 (=128 KiB). These buffers
               are used to hold data blocks while they are written to/read
               from disk. To avoid possible distributed deadlocks on
               congestion, this setting is used as a throttle threshold rather
               than a hard limit. Once more than max-buffers pages are in use,
               further allocation from this pool is throttled. You want to
               increase max-buffers if you cannot saturate the IO backend on
               the receiving side.

           --max-epoch-size number
               Define the maximum number of write requests DRBD may issue
               before issuing a write barrier. The default value is 2048, with
               a minimum of 1 and a maximum of 20000. Setting this parameter
               to a value below 10 is likely to decrease performance.

           --on-congestion policy,
           --congestion-fill threshold,
           --congestion-extents threshold
               By default, DRBD blocks when the TCP send queue is full. This
               prevents applications from generating further write requests
               until more buffer space becomes available again.

               When DRBD is used together with DRBD-proxy, it can be better to
               use the pull-ahead on-congestion policy, which can switch DRBD
               into ahead/behind mode before the send queue is full. DRBD then
               records the differences between itself and the peer in its
               bitmap, but it no longer replicates them to the peer. When
               enough buffer space becomes available again, the node
               resynchronizes with the peer and switches back to normal
               replication.

               This has the advantage of not blocking application I/O even
               when the queues fill up, and the disadvantage that peer nodes
               can fall behind much further. Also, while resynchronizing, peer
               nodes will become inconsistent.

               The available congestion policies are block (the default) and
               pull-ahead. The congestion-fill parameter defines how much data
               is allowed to be "in flight" in this connection. The default
               value is 0, which disables this mechanism of congestion
               control, with a maximum of 10 GiBytes. The congestion-extents
               parameter defines how many bitmap extents may be active before
               switching into ahead/behind mode, with the same default and
               limits as the al-extents parameter. The congestion-extents
               parameter is effective only when set to a value smaller than
               al-extents.

               Ahead/behind mode is available since DRBD 8.3.10.

           --ping-int interval
               When the TCP/IP connection to a peer is idle for more than
               ping-int seconds, DRBD will send a keep-alive packet to make
               sure that a failed peer or network connection is detected
               reasonably soon. The default value is 10 seconds, with a
               minimum of 1 and a maximum of 120 seconds. The unit is seconds.

           --ping-timeout timeout
               Define the timeout for replies to keep-alive packets. If the
               peer does not reply within ping-timeout, DRBD will close and
               try to reestablish the connection. The default value is 0.5
               seconds, with a minimum of 0.1 seconds and a maximum of 30
               seconds. The unit is tenths of a second.

           --socket-check-timeout timeout
               In setups involving a DRBD-proxy and connections that
               experience a lot of buffer-bloat, it might be necessary to set
               ping-timeout to an unusually high value. By default, DRBD uses
               the same value to wait until a newly established TCP connection
               has proven stable. Since the DRBD-proxy is usually located in
               the same data center, such a long wait time may hinder DRBD's
               connect process.

               In such setups, socket-check-timeout should be set to at least
               the round-trip time between DRBD and DRBD-proxy; in most cases
               that means 1.

               The default unit is tenths of a second, the default value is 0
               (which causes DRBD to use the value of ping-timeout instead).
               Introduced in 8.4.5.

           --protocol name
               Use the specified protocol on this connection. The supported
               protocols are:

               A
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk and the TCP/IP send buffer.

               B
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk, and all peers have acknowledged the
                   receipt of the write requests.

               C
                   Writes to the DRBD device complete as soon as they have
                   reached the local and all remote disks.

           --rcvbuf-size size
               Configure the size of the TCP/IP receive buffer. A value of 0
               (the default) causes the buffer size to adjust dynamically.
               This parameter usually does not need to be set, but it can be
               set to a value up to 10 MiB. The default unit is bytes.

           --rr-conflict policy
               This option helps to solve the cases when the outcome of the
               resync decision is incompatible with the current role
               assignment in the cluster. The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               retry-connect
                   Disconnect now, and retry to connect immediately
                   afterwards.

               violently
                   Resync to the primary node is allowed, violating the
                   assumption that data on a block device are stable for one
                   of the nodes.  Do not use this option, it is dangerous.

               call-pri-lost
                   Call the pri-lost handler on one of the machines. The
                   handler is expected to reboot the machine, which puts it
                   into secondary role.

               auto-discard
                   Auto-discard reverses the resync direction, so that DRBD
                   resyncs the current primary to the current secondary.
                   Auto-discard only applies when protocol A is in use and the
                   resync decision is based on the principle that a crashed
                   primary should be the source of a resync. When a primary
                   node crashes, it might have written some last updates to
                   its disk which were not received by a protocol A secondary.
                   By promoting the secondary in the meantime, the user
                   accepted that those last updates have been lost. By using
                   auto-discard you consent that the last updates (before the
                   crash of the primary) are rolled back automatically.

           --shared-secret secret
               Configure the shared secret used for peer authentication. The
               secret is a string of up to 64 characters. Peer authentication
               also requires the cram-hmac-alg parameter to be set.

           --sndbuf-size size
               Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
               / 8.2.7, a value of 0 (the default) causes the buffer size to
               adjust dynamically. Values below 32 KiB are harmful to the
               throughput on this connection. Large buffer sizes can be useful
               especially when protocol A is used over high-latency networks;
               the maximum value supported is 10 MiB.

           --tcp-cork
               By default, DRBD uses the TCP_CORK socket option to prevent the
               kernel from sending partial messages; this results in fewer and
               bigger packets on the network. Some network stacks can perform
               worse with this optimization. On these, the tcp-cork parameter
               can be used to turn this optimization off.

           --timeout time
               Define the timeout for replies over the network: if a peer node
               does not send an expected reply within the specified timeout,
               it is considered dead and the TCP/IP connection is closed. The
               timeout value must be lower than connect-int and lower than
               ping-int. The default is 6 seconds; the value is specified in
               tenths of a second.

           --use-rle
               Each replicated device on a cluster node has a separate bitmap
               for each of its peer devices. The bitmaps are used for tracking
               the differences between the local and peer device: depending on
               the cluster state, a disk range can be marked as different from
               the peer in the device's bitmap, in the peer device's bitmap,
               or in both bitmaps. When two cluster nodes connect, they
               exchange each other's bitmaps, and they each compute the union
               of the local and peer bitmap to determine the overall
               differences.

               Bitmaps of very large devices are also relatively large, but
               they usually compress very well using run-length encoding. This
               can save time and bandwidth for the bitmap transfers.

               The use-rle parameter determines if run-length encoding should
               be used. It is on by default since DRBD 8.4.0.

           --verify-alg hash-algorithm
               Online verification (drbdadm verify) computes and compares
               checksums of disk blocks (i.e., hash values) in order to detect
               if they differ. The verify-alg parameter determines which
               algorithm to use for these checksums. It must be set to one of
               the secure hash algorithms supported by the kernel before
               online verify can be used; see the shash algorithms listed in
               /proc/crypto.

               We recommend scheduling online verifications regularly during
               low-load periods, for example once a month. Also see the notes
               on data integrity below.

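           For example, to create a connection with peer authentication and
           automatic split-brain recovery policies (resource name, node-id,
           algorithm and secret are illustrative):

               drbdsetup new-peer r0 1 --protocol C \
                       --cram-hmac-alg sha256 --shared-secret "s3cr3t"
               # adjust policies on the existing connection
               drbdsetup net-options r0 1 --after-sb-0pri discard-zero-changes \
                       --after-sb-1pri consensus
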
       drbdsetup new-path resource peer_node_id local-addr remote-addr
           The new-path command creates a path within a connection. The
           connection must have been created with drbdsetup new-peer.
           Local_addr and remote_addr refer to the local and remote protocol,
           network address, and port in the format
           [address-family:]address[:port]. The address families ipv4, ipv6,
           ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
           (Infiniband Sockets Direct Protocol), and sci are supported (sci is
           an alias for ssocks). If no address family is specified, ipv4 is
           assumed. For all address families except ipv6, the address uses
           IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
           is enclosed in brackets and uses IPv6 address notation (for
           example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.

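           For example, to add an IPv4 path between two illustrative hosts on
           a non-default port:

               drbdsetup new-path r0 1 ipv4:10.0.0.1:7789 ipv4:10.0.0.2:7789
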
       drbdsetup connect resource peer_node_id
           The connect command activates a connection. That means that the
           DRBD driver will bind and listen on all local addresses of the
           connection's paths. It will begin to try to establish one or more
           paths of the connection. Available options:

           --tentative
               Only determine if a connection to the peer can be established
               and if a resync is necessary (and in which direction) without
               actually establishing the connection or starting the resync.
               Check the system log to see what DRBD would do without the
               --tentative option.

           --discard-my-data
               Discard the local data and resynchronize with the peer that has
               the most up-to-date data. Use this option to manually recover
               from a split-brain situation.

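           For example, a dry run followed by an actual connect that resolves
           a split brain by discarding the local changes (resource r0 and
           peer node-id 1 are illustrative):

               # log what would happen, without connecting
               drbdsetup connect r0 1 --tentative
               # then really connect, discarding local modifications
               drbdsetup connect r0 1 --discard-my-data
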
       drbdsetup del-peer resource peer_node_id
           The del-peer command removes a connection from a resource.

       drbdsetup del-path resource peer_node_id local-addr remote-addr
           The del-path command removes a path from a connection. Please note
           that it fails if the path is necessary to keep a connected
           connection intact. In order to remove all paths, disconnect the
           connection first.

       drbdsetup cstate resource peer_node_id
           Show the current state of a connection. The connection is
           identified by the node-id of the peer; see the drbdsetup connect
           command.

       drbdsetup del-minor minor
           Remove a replicated device. No lower-level device may be attached;
           see drbdsetup detach.

       drbdsetup del-resource resource
           Remove a resource. All volumes and connections must be removed
           first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
           drbdsetup down can be used to remove a resource together with all
           its volumes and connections.

       drbdsetup detach minor
           Detach the lower-level device of a replicated device. Available
           options:

           --force
               Force the detach and return immediately. This puts the
               lower-level device into failed state until all pending I/O has
               completed, and then detaches the device. Any I/O not yet
               submitted to the lower-level device (for example, because I/O
               on the device was suspended) is assumed to have failed.

       drbdsetup disconnect resource peer_node_id
           Remove a connection to a peer host. The connection is identified by
           the node-id of the peer; see the drbdsetup connect command.

       drbdsetup down {resource | all}
           Take a resource down by removing all volumes, connections, and the
           resource itself.

       drbdsetup dstate minor
           Show the current disk state of a lower-level device.

       drbdsetup events2 {resource | all}
           Show the current state of all configured DRBD objects, followed by
           all changes to the state.

           The output format is meant to be human as well as machine readable.
           The line starts with a word that indicates the kind of event:
           exists for an existing object; create, destroy, and change if an
           object is created, destroyed, or changed; call or response if an
           event handler is called or it returns; or rename when the name of
           an object is changed. The second word indicates the object the
           event applies to: resource, device, connection, peer-device, path,
           helper, or a dash (-) to indicate that the current state has been
           dumped completely.

           The remaining words identify the object and describe the state that
           the object is in. Some special keys are worth mentioning:

           resource may_promote:{yes|no}
               Whether promoting to primary is expected to succeed. When
               quorum is enabled, this can be used to trigger failover. When
               may_promote:yes is reported on this node, then no writes are
               possible on any other node, which generally means that the
               application can be started on this node, even when it has been
               running on another.

           resource promotion_score:score
               An integer heuristic indicating the relative preference for
               promoting this resource. A higher score is better in terms of
               having local disks and having access to up-to-date data. The
               score may be positive even when some node is primary. It will
               be zero when promotion is impossible due to quorum or lack of
               any access to up-to-date data.

           Available options:

           --now
               Terminate after reporting the current state. The default is to
               continuously listen and report state changes.

           --poll
               Read from stdin and update when n is read. Newlines are
               ignored. Every other input terminates the command.

               Without --now, changes are printed as usual. On each n the
               current state is fetched, but only changed objects are printed.
               This is useful with --statistics or --full because DRBD does
               not otherwise send updates when only the statistics change.

               In combination with --now the full state is printed on each n.
               No other changes are printed.

           --statistics
               Include statistics in the output.

           --diff
               Write information in form of a diff between old and new state.
               This helps simple tools to avoid (old) state tracking on their
               own.

           --full
               Write complete state information, especially on change events.
               This enables --statistics and --verbose.

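           For example, to dump the current state of all resources once,
           including statistics, or to follow state changes continuously
           (resource r0 is illustrative):

               # one-shot dump of everything
               drbdsetup events2 all --now --statistics
               # keep listening and report every state change of r0
               drbdsetup events2 r0
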
       drbdsetup get-gi resource peer_node_id volume
           Show the data generation identifiers for a device on a particular
           connection. The device is identified by its volume number. The
           connection is identified by its endpoints; see the drbdsetup
           connect command.

           The output consists of the current UUID, bitmap UUID, and the first
           two history UUIDs, followed by a set of flags. The current UUID and
           history UUIDs are device specific; the bitmap UUID and flags are
           peer device specific. This command only shows the first two history
           UUIDs. Internally, DRBD maintains one history UUID for each
           possible peer device.

       drbdsetup invalidate minor
           Replace the local data of a device with that of a peer. All the
           local data will be marked out-of-sync, and a resync with the
           specified peer device will be initiated.

           Available options:

           --reset-bitmap=no
               Usually an invalidate operation sets all bits in the bitmap to
               out-of-sync before beginning the resync from the peer. By
               giving --reset-bitmap=no, DRBD will use the bitmap as it is.
               This is usually used after an online verify operation found
               differences in the backing devices.

               The --reset-bitmap option is available since DRBD kernel driver
               9.0.29 and drbd-utils 9.17.

           --sync-from-peer-node-id
               This option allows the caller to select the node to resync
               from. If it is not given, DRBD selects a suitable source node
               itself.

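           For example, to discard the local data of minor 0 and resync it
           from the peer with node-id 1 (both values are illustrative):

               drbdsetup invalidate 0 --sync-from-peer-node-id=1
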
       drbdsetup invalidate-remote resource peer_node_id volume
           Replace a peer device's data of a resource with the local data. The
           peer device's data will be marked out-of-sync, and a resync from
           the local node to the specified peer will be initiated.

           Available options:

           --reset-bitmap=no
               Usually an invalidate remote operation sets all bits in the
               bitmap to out-of-sync before beginning the resync to the peer.
               By giving --reset-bitmap=no, DRBD will use the bitmap as it is.
               This is usually used after an online verify operation found
               differences in the backing devices.

               The --reset-bitmap option is available since DRBD kernel driver
               9.0.29 and drbd-utils 9.17.

994       drbdsetup new-current-uuid minor
995           Generate a new current UUID and rotates all other UUID values. This
996           has three use cases: start the initial resync; skip the initial
997           resync; bootstrap a single node cluster.
998
999           Available options:
1000
1001           --force-resync
1002               Start an initial resync. A precondition is that the volume is
1003               in disk state Inconsistent on all nodes. This command updates
1004               the disk state on the current node to UpToDate and makes it
1005               source of the resync operations to the peers.
1006
1007           --clear-bitmap
1008               Clears the sync bitmap in addition to generating a new current
1009               UUID. This skips the initial resync. As a consqeuence this
1010               volume's disk state changes to UpToDate on all nodes in this
1011               resource.
1012
1013           Both operations require a "Just Created" meta data. Here is the
1014           complete sequence step by step how to skip the initial resync:
1015
1016            1. On both nodes, initialize meta data and configure the device.
1017
1018               drbdadm create-md --force res/volume-number
1019
1020            2. They need to do the initial handshake, so they know their
1021               sizes.
1022
1023               drbdadm up res
1024
1025            3. They are now Connected Secondary/Secondary
1026               Inconsistent/Inconsistent. Generate a new current-uuid and
1027               clear the dirty bitmap.
1028
1029               drbdadm --clear-bitmap new-current-uuid res
1030
1031            4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
1032               Make one side primary and create a file system.
1033
1034               drbdadm primary res
1035
1036               mkfs -t fs-type $(drbdadm sh-dev res/vol)
1037
1038           One obvious side-effect is that the replica is full of old garbage
1039           (unless you made them identical using other means), so any online
1040           verify run is expected to find a large number of out-of-sync blocks.
1041
1042           You must not use this on pre-existing data!  Even though it may
1043           appear to work at first glance, once you switch to the other node,
1044           your data is toast, as it never got replicated. So do not leave out
1045           the mkfs (or equivalent).
1046
1047           Bootstrapping a single node cluster
1048               This can also be used to shorten the initial resync of a
1049               cluster where the second node is added after the first node has
1050               gone into production, by means of disk shipping. This use case
1051               works on disconnected devices only; the device may be in
1052               primary or secondary role.
1053
1054               The necessary steps on the current active server are:
1055
1056                1. drbdsetup new-current-uuid --clear-bitmap minor
1057
1058                2. Take a copy of the currently active server, e.g. by pulling a
1059                   disk out of the RAID1 controller or by copying with dd (see
1060                   the sketch below). Copy both the actual data and the meta data.
1061
1062                3. drbdsetup new-current-uuid minor
1063
1064               Now add the disk to the new secondary node, and join it to the
1065               cluster. You will get a resync of the parts that were changed
1066               since the first call to drbdsetup in step 1.
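
               A minimal sketch of the copy in step 2 using dd, assuming
               internal meta data (which DRBD stores at the end of the backing
               device, so copying the whole device captures both the data and
               the meta data; the device names are placeholder values):

                   dd if=/dev/vg0/backing of=/dev/shipped-disk bs=1M conv=fsync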
1067
1068       drbdsetup new-minor resource minor volume
1069           Create a new replicated device within a resource. The command
1070           creates a block device inode for the replicated device (by default,
1071           /dev/drbdminor, where minor is the device's minor number). The
1072           volume number identifies the device within the resource.
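
           For example, to create the replicated device /dev/drbd10 as volume 0
           of resource r0 (the names and numbers are placeholder values):

               drbdsetup new-minor r0 10 0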
1073
1074       drbdsetup new-resource resource node_id,
1075       drbdsetup resource-options resource
1076           The new-resource command creates a new resource. The
1077           resource-options command changes the resource options of an existing
1078           resource; an example follows the option list below. Available options:
1079
1080           --auto-promote bool-value
1081               A resource must be promoted to primary role before any of its
1082               devices can be mounted or opened for writing.
1083
1084               Before DRBD 9, this could only be done explicitly ("drbdadm
1085               primary"). Since DRBD 9, the auto-promote parameter allows a
1086               resource to be promoted to primary role automatically when one of
1087               its devices is mounted or opened for writing. As soon as all
1088               devices are unmounted or closed with no more remaining users,
1089               the role of the resource changes back to secondary.
1090
1091               Automatic promotion only succeeds if the cluster state allows
1092               it (that is, if an explicit drbdadm primary command would
1093               succeed). Otherwise, mounting or opening the device fails as it
1094               already did before DRBD 9: the mount(2) system call fails with
1095               errno set to EROFS (Read-only file system); the open(2) system
1096               call fails with errno set to EMEDIUMTYPE (wrong medium type).
1097
1098               Irrespective of the auto-promote parameter, if a device is
1099               promoted explicitly (drbdadm primary), it also needs to be
1100               demoted explicitly (drbdadm secondary).
1101
1102               The auto-promote parameter is available since DRBD 9.0.0, and
1103               defaults to yes.
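
               For example, with auto-promote enabled, mounting a device on a
               node in secondary role promotes the resource implicitly (the
               device and mount point are placeholder values):

                   mount /dev/drbd0 /mnt    # implicit promotion to primary
                   umount /mnt              # last closer: demotion to secondary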
1104
1105           --cpu-mask cpu-mask
1106               Set the cpu affinity mask for DRBD kernel threads. The cpu mask
1107               is specified as a hexadecimal number. The default value is 0,
1108               which lets the scheduler decide which kernel threads run on
1109               which CPUs. CPU numbers in cpu-mask which do not exist in the
1110               system are ignored.
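
               For example, a cpu-mask of 3 (binary 11) confines DRBD's kernel
               threads to CPUs 0 and 1 (r0 is a placeholder resource name):

                   drbdsetup resource-options r0 --cpu-mask=3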
1111
1112           --on-no-data-accessible policy
1113               Determine how to deal with I/O requests when the requested data
1114               is not available locally or remotely (for example, when all
1115               disks have failed). When quorum is enabled,
1116               on-no-data-accessible should be set to the same value as
1117               on-no-quorum. The defined policies are:
1118
1119               io-error
1120                   System calls fail with errno set to EIO.
1121
1122               suspend-io
1123                   The resource suspends I/O. I/O can be resumed by
1124                   (re)attaching the lower-level device, by connecting to a
1125                   peer which has access to the data, or by forcing DRBD to
1126                   resume I/O with drbdadm resume-io res. When no data is
1127                   available, forcing I/O to resume will result in the same
1128                   behavior as the io-error policy.
1129
1130               This setting is available since DRBD 8.3.9; the default policy
1131               is io-error.
1132
1133           --peer-ack-window value
1134               On each node and for each device, DRBD maintains a bitmap of
1135               the differences between the local and remote data for each peer
1136               device. For example, in a three-node setup (nodes A, B, C) each
1137               with a single device, every node maintains one bitmap for each
1138               of its peers.
1139
1140               When nodes receive write requests, they know how to update the
1141               bitmaps for the writing node, but not how to update the bitmaps
1142               between themselves. In this example, when a write request
1143               propagates from node A to B and C, nodes B and C know that they
1144               have the same data as node A, but not whether or not they both
1145               have the same data.
1146
1147               As a remedy, the writing node occasionally sends peer-ack
1148               packets to its peers which tell them which state they are in
1149               relative to each other.
1150
1151               The peer-ack-window parameter specifies how much data a primary
1152               node may send before sending a peer-ack packet. A low value
1153               causes increased network traffic; a high value causes less
1154               network traffic but higher memory consumption on secondary
1155               nodes and higher resync times between the secondary nodes after
1156               primary node failures. (Note: peer-ack packets may be sent due
1157               to other reasons as well, e.g. membership changes or expiry of
1158               the peer-ack-delay timer.)
1159
1160               The default value for peer-ack-window is 2 MiB, the default
1161               unit is sectors. This option is available since 9.0.0.
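
               For example, --peer-ack-window=8192 means 8192 sectors x 512
               bytes = 4 MiB of writes may happen before a peer-ack packet
               becomes due; the 2-MiB default corresponds to 4096 sectors.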
1162
1163           --peer-ack-delay expiry-time
1164               If after the last finished write request no new write request
1165               gets issued for expiry-time, then a peer-ack packet is sent. If
1166               a new write request is issued before the timer expires, the
1167               timer gets reset to expiry-time. (Note: peer-ack packets may be
1168               sent due to other reasons as well, e.g. membership changes or
1169               the peer-ack-window option.)
1170
1171               This parameter may influence resync behavior on remote nodes.
1172               Peer nodes need to wait until they receive a peer-ack before
1173               releasing a lock on an AL-extent. Resync operations between
1174               peers may need to wait for these locks.
1175
1176               The default value for peer-ack-delay is 100 milliseconds, the
1177               default unit is milliseconds. This option is available since
1178               9.0.0.
1179
1180           --quorum value
1181               When activated, a cluster partition requires quorum in order to
1182               modify the replicated data set. That means a node in the
1183               cluster partition can only be promoted to primary if the
1184               cluster partition has quorum. Every node with a disk directly
1185               connected to the node that should be promoted counts. If a
1186               primary node should execute a write request, but the cluster
1187               partition has lost quorum, it will freeze IO or reject the
1188               write request with an error (depending on the on-no-quorum
1189               setting). Upon losing quorum, a primary always invokes the
1190               quorum-lost handler. The handler is intended for notification
1191               purposes; its return code is ignored.
1192
1193               The option's value may be set to off, majority, all or a
1194               numeric value. If you set it to a numeric value, make sure that
1195               the value is greater than half of your number of nodes. Quorum
1196               is a mechanism to avoid data divergence; it may be used instead
1197               of fencing when there are more than two replicas. It defaults
1198               to off.
1199
1200               If all missing nodes are marked as outdated, a partition always
1201               has quorum, no matter how small it is. That is, if you disconnect
1202               all secondary nodes gracefully, a single primary continues to
1203               operate. The moment a single secondary is lost ungracefully, it
1204               has to be assumed that it forms a partition with all the missing
1205               outdated nodes. If the local partition could then be smaller than
1206               the other partition, quorum is lost at that moment.
1207
1208               If you want to allow permanently diskless nodes to gain
1209               quorum, it is recommended not to use majority or all. It is
1210               better to specify an absolute number, since DRBD's
1211               heuristic to determine the complete number of diskful nodes in
1212               the cluster is unreliable.
1213
1214               The quorum implementation is available starting with the DRBD
1215               kernel driver version 9.0.7.
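
               For example, in a three-node cluster, either of the following
               lets a two-node partition keep quorum while an isolated single
               node loses it (r0 is a placeholder resource name):

                   drbdsetup resource-options r0 --quorum=majority
                   drbdsetup resource-options r0 --quorum=2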
1216
1217           --quorum-minimum-redundancy value
1218               This option sets the minimum required number of nodes with an
1219               UpToDate disk for the partition to gain quorum. This is a
1220               different requirement than the plain quorum option expresses.
1221
1222               The option's value might be set to off, majority, all or a
1223               numeric value. If you set it to a numeric value, make sure that
1224               the value is greater than half of your number of nodes.
1225
1226               If you want to allow permanently diskless nodes to gain
1227               quorum, it is recommended not to use majority or all. It is
1228               better to specify an absolute number, since DRBD's
1229               heuristic to determine the complete number of diskful nodes in
1230               the cluster is unreliable.
1231
1232               This option is available starting with the DRBD kernel driver
1233               version 9.0.10.
1234
1235           --on-no-quorum {io-error | suspend-io}
1236               By default, DRBD freezes IO on a device that has lost quorum.
1237               Setting on-no-quorum to io-error instead causes DRBD to
1238               complete all IO operations with an error if quorum is lost.
1239
1240               Usually, on-no-data-accessible should be set to the same
1241               value as on-no-quorum, as on-no-quorum takes precedence.
1242
1243               The on-no-quorum option is available starting with the DRBD
1244               kernel driver version 9.0.8.
1245
1246           --on-suspended-primary-outdated {disconnect | force-secondary}
1247               This setting is only relevant when on-no-quorum is set to
1248               suspend-io. It is relevant in the following scenario: a primary
1249               node loses quorum and hence has all IO requests frozen. This
1250               primary node then connects to another, quorate partition. It
1251               detects that a node in this quorate partition was promoted to
1252               primary, and started a newer data-generation there. As a
1253               result, the first primary learns that it has to consider itself
1254               outdated.
1255
1256               When it is set to force-secondary, the node demotes itself to
1257               secondary immediately and fails all pending (and new) IO
1258               requests with IO errors. It refuses to allow any process to
1259               open the DRBD devices until all openers have closed the device.
1260               This state is visible in status and events2 under the name
1261               force-io-failures.
1262
1263               The disconnect setting simply causes that node to reject
1264               connect attempts and stay isolated.
1265
1266               The on-suspended-primary-outdated option is available starting
1267               with the DRBD kernel driver version 9.1.7. It has a default
1268               value of disconnect.
1269
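           For example, a resource can be created and its options changed
           afterwards (the resource name r0 and node id 0 are placeholder
           values):

               drbdsetup new-resource r0 0
               drbdsetup resource-options r0 --auto-promote=no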
1270
1271       drbdsetup outdate minor
1272           Mark the data on a lower-level device as outdated. This is used for
1273           fencing, and prevents the resource the device is part of from
1274           becoming primary in the future. See the --fencing disk option.
1275
1276       drbdsetup pause-sync resource peer_node_id volume
1277           Stop resynchronizing between a local and a peer device by setting
1278           the local pause flag. The resync can only resume if the pause flags
1279           on both sides of a connection are cleared.
1280
1281       drbdsetup primary resource
1282           Change the role of a node in a resource to primary. This allows the
1283           replicated devices in this resource to be mounted or opened for
1284           writing. Available options:
1285
1286           --overwrite-data-of-peer
1287               This option is an alias for the --force option.
1288
1289           --force
1290               Force the resource to become primary even if some devices are
1291               not guaranteed to have up-to-date data. This option is used to
1292               turn one of the nodes in a newly created cluster into the
1293               primary node, or when manually recovering from a disaster.
1294
1295               Note that this can lead to split-brain scenarios. Also, when
1296               forcefully turning an inconsistent device into an up-to-date
1297               device, it is highly recommended to use any integrity checks
1298               available (such as a filesystem check) to make sure that the
1299               device can at least be used without crashing the system.
1300
1301           Note that DRBD usually only allows one node in a cluster to be in
1302           primary role at any time; this allows DRBD to coordinate access to
1303           the devices in a resource across nodes. The --allow-two-primaries
1304           network option changes this; in that case, a mechanism outside of
1305           DRBD needs to coordinate device access.
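
           For example, to force a node with consistent but possibly outdated
           data to become primary during disaster recovery (r0 is a
           placeholder resource name):

               drbdsetup primary r0 --force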
1306
1307       drbdsetup resize minor
1308           Reexamine the size of the lower-level devices of a replicated
1309           device on all nodes. This command is called after the lower-level
1310           devices on all nodes have been grown to adjust the size of the
1311           replicated device. Available options:
1312
1313           --assume-peer-has-space
1314               Resize the device even if some of the peer devices are not
1315               connected at the moment. DRBD will try to resize the peer
1316               devices when they next connect. It will refuse to connect to a
1317               peer device which is too small.
1318
1319           --assume-clean
1320               Do not resynchronize the added disk space; instead, assume that
1321               it is identical on all nodes. This option can be used when the
1322               disk space is uninitialized and differences do not matter, or
1323               when it is known to be identical on all nodes. See the
1324               drbdsetup verify command.
1325
1326           --size val
1327               This option can be used to online shrink the usable size of a
1328               drbd device. It is the user's responsibility to make sure that
1329               a file system on the device is not truncated by that operation.
1330
1331           --al-stripes val --al-stripe-size val
1332               These options may be used to change the layout of the activity
1333               log online. In case of internal meta data this may involve
1334               shrinking the user-visible size at the same time (using the
1335               --size option) or increasing the available space on the backing
1336               devices.
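
           For example, after the backing devices on all nodes have been
           grown, the following makes DRBD adopt the new size (the minor
           number 0 is a placeholder value):

               drbdsetup resize 0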
1337
1338
1339       drbdsetup resume-io minor
1340           Resume I/O on a replicated device. See the --fencing net option.
1341
1342       drbdsetup resume-sync resource peer_node_id volume
1343           Allow resynchronization to resume by clearing the local sync pause
1344           flag.
1345
1346       drbdsetup role resource
1347           Show the current role of a resource.
1348
1349       drbdsetup secondary resource
1350           Change the role of a node in a resource to secondary. This command
1351           fails if the replicated device is in use.
1352
1353           --force
1354               A forced demotion to secondary causes all pending and new IO
1355               requests to terminate with IO errors.
1356
1357               Please note that a forced demotion returns immediately. The
1358               user should unmount any filesystem that might be mounted on the
1359               DRBD device. The device can be used again when
1360               force-io-failures has a value of no. (See drbdsetup status and
1361               drbdsetup events2).
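
           For example, to forcibly demote the resource r0 even while its
           device is in use (r0 is a placeholder name):

               drbdsetup secondary r0 --force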
1362
1363       drbdsetup show {resource | all}
1364           Show the current configuration of a resource, or of all resources.
1365           Available options:
1366
1367           --show-defaults
1368               Show all configuration parameters, even the ones with default
1369               values. Normally, parameters with default values are not shown.
1370
1371
1372       drbdsetup show-gi resource peer_node_id volume
1373           Show the data generation identifiers for a device on a particular
1374           connection. In addition, explain the output. The output otherwise
1375           is the same as in the drbdsetup get-gi command.
1376
1377       drbdsetup state
1378           This is an alias for drbdsetup role. Deprecated.
1379
1380       drbdsetup status {resource | all}
1381           Show the status of a resource, or of all resources. The output
1382           consists of one paragraph for each configured resource. Each
1383           paragraph contains one line for each resource, followed by one line
1384           for each device, and one line for each connection. The device and
1385           connection lines are indented. The connection lines are followed by
1386           one line for each peer device; these lines are indented against the
1387           connection line.
1388
1389           Long lines are wrapped around at terminal width, and indented to
1390           indicate how the lines belong together. Available options:
1391
1392           --verbose
1393               Include more information in the output even when it is likely
1394               redundant or irrelevant.
1395
1396           --statistics
1397               Include data transfer statistics in the output.
1398
1399           --color={always | auto | never}
1400               Colorize the output. With --color=auto, drbdsetup emits color
1401               codes only when standard output is connected to a terminal.
1402
1403           For example, the non-verbose output for a resource with only one
1404           connection and only one volume could look like this:
1405
1406               drbd0 role:Primary
1407                 disk:UpToDate
1408                 host2.example.com role:Secondary
1409                   disk:UpToDate
1410
1411
1412           With the --verbose option, the same resource could be reported as:
1413
1414               drbd0 node-id:1 role:Primary suspended:no
1415                 volume:0 minor:1 disk:UpToDate blocked:no
1416                 host2.example.com local:ipv4:192.168.123.4:7788
1417                     peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1418                     role:Secondary congested:no
1419                   volume:0 replication:Connected disk:UpToDate resync-suspended:no
1420
1421
1422
1423       drbdsetup suspend-io minor
1424           Suspend I/O on a replicated device. It is not usually necessary to
1425           use this command.
1426
1427       drbdsetup verify resource peer_node_id volume
1428           Start online verification, change which part of the device will be
1429           verified, or stop online verification. The command requires the
1430           specified peer to be connected.
1431
1432           Online verification compares each disk block on the local and peer
1433           node. Blocks which differ between the nodes are marked as
1434           out-of-sync, but they are not automatically brought back into sync.
1435           To bring them into sync, the drbdsetup invalidate or drbdsetup
1436           invalidate-remote command can be used with the --reset-bitmap=no
1437           option. Progress can be monitored in the output of drbdsetup status
1438           --statistics. Available options:
1439
1440           --start position
1441               Define where online verification should start. This parameter
1442               is ignored if online verification is already in progress. If
1443               the start parameter is not specified, online verification will
1444               continue where it was interrupted (if the connection to the
1445               peer was lost while verifying), after the previous stop sector
1446               (if the previous online verification has finished), or at the
1447               beginning of the device (if the end of the device was reached,
1448               or online verify has not run before).
1449
1450               The position on disk is specified in disk sectors (512 bytes)
1451               by default.
1452
1453           --stop position
1454               Define where online verification should stop. If online
1455               verification is already in progress, the stop position of the
1456               active online verification process is changed. Use this to stop
1457               online verification.
1458
1459               The position on disk is specified in disk sectors (512 bytes)
1460               by default.
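
           For example, to verify only the first GiB of volume 0 against peer
           node 1 (r0 is a placeholder resource name; 1 GiB corresponds to
           2097152 sectors of 512 bytes):

               drbdsetup verify r0 1 0 --start 0 --stop 2097152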
1461
1462           Also see the notes on data integrity in the drbd.conf(5) manual
1463           page.
1464
1465       drbdsetup wait-connect-volume resource peer_node_id volume,
1466       drbdsetup wait-connect-connection resource peer_node_id,
1467       drbdsetup wait-connect-resource resource,
1468       drbdsetup wait-sync-volume resource peer_node_id volume,
1469       drbdsetup wait-sync-connection resource peer_node_id,
1470       drbdsetup wait-sync-resource resource
1471           The wait-connect-* commands wait until a device on a peer is
1472           visible. The wait-sync-* commands wait until a device on a peer is
1473           up to date. Available options for both commands:
1474
1475           --degr-wfc-timeout timeout
1476               Define how long to wait until all peers are connected in case
1477               the cluster consisted of a single node only when the system
1478               went down. This parameter is usually set to a value smaller
1479               than wfc-timeout. The assumption here is that peers which were
1480               unreachable before a reboot are less likely to be reachable
1481               after the reboot, so waiting is less likely to help.
1482
1483               The timeout is specified in seconds. The default value is 0,
1484               which stands for an infinite timeout. Also see the wfc-timeout
1485               parameter.
1486
1487           --outdated-wfc-timeout timeout
1488               Define how long to wait until all peers are connected if all
1489               peers were outdated when the system went down. This parameter
1490               is usually set to a value smaller than wfc-timeout. The
1491               assumption here is that an outdated peer cannot have become
1492               primary in the meantime, so we don't need to wait for it as
1493               long as for a node which was alive before.
1494
1495               The timeout is specified in seconds. The default value is 0,
1496               which stands for an infinite timeout. Also see the wfc-timeout
1497               parameter.
1498
1499           --wait-after-sb
1500               This parameter causes DRBD to continue waiting in the init
1501               script even when a split-brain situation has been detected, and
1502               the nodes therefore refuse to connect to each other.
1503
1504           --wfc-timeout timeout
1505               Define how long the init script waits until all peers are
1506               connected. This can be useful in combination with a cluster
1507               manager which cannot manage DRBD resources: when the cluster
1508               manager starts, the DRBD resources will already be up and
1509               running. With a more capable cluster manager such as Pacemaker,
1510               it makes more sense to let the cluster manager control DRBD
1511               resources. The timeout is specified in seconds. The default
1512               value is 0, which stands for an infinite timeout. Also see the
1513               degr-wfc-timeout parameter.
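
           For example, to wait at most 30 seconds for all peers of resource
           r0 to connect (r0 is a placeholder name):

               drbdsetup wait-connect-resource r0 --wfc-timeout=30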
1514
1515
1516       drbdsetup forget-peer resource peer_node_id
1517           The forget-peer command removes all traces of a peer node from
1518           the meta-data. It frees a bitmap slot in the meta-data and makes
1519           it available for further bitmap slot allocation in case a
1520           previously unseen node connects.
1521
1522           The connection must be taken down before this command may be used.
1523           If the peer reconnects at a later point, a bitmap-based resync will
1524           be turned into a full sync.
1525
1526       drbdsetup rename-resource resource new_name
1527           Change the name of resource to new_name on the local node. Note
1528           that, since there is no concept of resource names in DRBD's network
1529           protocol, it is technically possible to have different names for a
1530           resource on different nodes. However, it is strongly recommended to
1531           issue the same rename-resource command on all nodes to have
1532           consistent naming across the cluster.
1533
1534           A rename event will be issued on the events2 stream to notify users
1535           of the new name.
1536

EXAMPLES

1538       Please see the DRBD User's Guide[1] for examples.
1539

VERSION

1541       This document was revised for version 9.0.0 of the DRBD distribution.
1542

AUTHOR

1544       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1545       Ellenberg <lars.ellenberg@linbit.com>.
1546

REPORTING BUGS

1548       Report bugs to <drbd-user@lists.linbit.com>.
1549

COPYRIGHT

1551       Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1552       Lars Ellenberg. This is free software; see the source for copying
1553       conditions. There is NO warranty; not even for MERCHANTABILITY or
1554       FITNESS FOR A PARTICULAR PURPOSE.
1555

SEE ALSO

1557       drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1558       Site[2]
1559

NOTES

1561        1. DRBD User's Guide
1562           http://www.drbd.org/users-guide/
1563
1564        2. DRBD Web Site
1565           http://www.drbd.org/
1566
1567
1568
1569DRBD 9.0.x                      17 January 2018                   DRBDSETUP(8)