DRBDSETUP(8)                 System Administration                DRBDSETUP(8)

NAME

       drbdsetup - Configure the DRBD kernel module

SYNOPSIS

       drbdsetup command {argument...} [option...]

DESCRIPTION

       The drbdsetup utility serves to configure the DRBD kernel module and to
       show its current configuration. Users usually interact with the drbdadm
       utility, which provides a higher-level interface to DRBD than
       drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
       drbdsetup.)

       Some option arguments have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.

COMMANDS

       drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
       drbdsetup disk-options minor
           The attach command attaches a lower-level device to an existing
           replicated device. The disk-options command changes the disk
           options of an attached lower-level device. In either case, the
           replicated device must have been created with drbdsetup new-minor.

           Both commands refer to the replicated device by its minor number.
           lower_dev is the name of the lower-level device.  meta_data_dev is
           the name of the device containing the metadata, and may be the same
           as lower_dev.  meta_data_index is either a numeric metadata index,
           or the keyword internal for internal metadata, or the keyword
           flexible for variable-size external metadata. Available options:

           --al-extents extents
               DRBD automatically maintains a "hot" or "active" disk area
               likely to be written to again soon based on the recent write
               activity. The "active" disk area can be written to immediately,
               while "inactive" disk areas must be "activated" first, which
               requires a meta-data write. We also refer to this active disk
               area as the "activity log".

               The activity log saves meta-data writes, but the whole log must
               be resynced upon recovery of a failed node. The size of the
               activity log is a major factor in how long a resync will take
               and how fast a replicated disk will become consistent after a
               crash.

               The activity log consists of a number of 4-Megabyte segments;
               the al-extents parameter determines how many of those segments
               can be active at the same time. The default value for
               al-extents is 1237, with a minimum of 7 and a maximum of 65536.

               Note that the effective maximum may be smaller, depending on
               how you created the device meta data; see also drbdmeta(8). The
               effective maximum is 919 * (available on-disk activity-log
               ring-buffer area/4kB - 1); the default 32kB ring-buffer yields
               an effective maximum of 6433 (covering more than 25 GiB of
               data). We recommend keeping this well within the amount your
               backend storage and replication link are able to resync inside
               of about 5 minutes.

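               For example, to enlarge the activity log of the device with
               minor number 0 (a placeholder) to the effective maximum for the
               default 32kB ring-buffer (6433 extents, see above), one might
               run:

                   drbdsetup disk-options 0 --al-extents=6433
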
           --al-updates {yes | no}
               With this parameter, the activity log can be turned off
               entirely (see the al-extents parameter). This will speed up
               writes because fewer meta-data writes will be necessary, but
               the entire device needs to be resynchronized upon recovery of a
               failed primary node. The default value for al-updates is yes.

           --disk-barrier,
           --disk-flushes,
           --disk-drain
               DRBD has three methods of handling the ordering of dependent
               write requests:

               disk-barrier
                   Use disk barriers to make sure that requests are written to
                   disk in the right order. Barriers ensure that all requests
                   submitted before a barrier make it to the disk before any
                   requests submitted after the barrier. This is implemented
                   using 'tagged command queuing' on SCSI devices and 'native
                   command queuing' on SATA devices. Only some devices and
                   device stacks support this method. The device mapper (LVM)
                   only supports barriers in some configurations.

                   Note that on systems which do not support disk barriers,
                   enabling this option can lead to data loss or corruption.
                   Until DRBD 8.4.1, disk-barrier was turned on if the I/O
                   stack below DRBD did support barriers. Kernels since
                   linux-2.6.36 (or 2.6.32 RHEL6) no longer make it possible
                   to detect whether barriers are supported. Since drbd-8.4.2,
                   this option is off by default and needs to be enabled
                   explicitly.

               disk-flushes
                   Use disk flushes between dependent write requests, also
                   referred to as 'force unit access' by drive vendors. This
                   forces all data to disk. This option is enabled by default.

               disk-drain
                   Wait for the request queue to "drain" (that is, wait for
                   the requests to finish) before submitting a dependent write
                   request. This method requires that requests are stable on
                   disk when they finish. Before DRBD 8.0.9, this was the only
                   method implemented. This option is enabled by default. Do
                   not disable in production environments.

               Of these three methods, drbd will use the first that is
               enabled and supported by the backing storage device. If all
               three of these options are turned off, DRBD will submit write
               requests without bothering about dependencies. Depending on the
               I/O stack, write requests can be reordered, and they can be
               submitted in a different order on different cluster nodes. This
               can result in data loss or corruption. Therefore, turning off
               all three methods of controlling write ordering is strongly
               discouraged.

               A general guideline for configuring write ordering is to use
               disk barriers or disk flushes when using ordinary disks (or an
               ordinary disk array) with a volatile write cache. On storage
               without a cache or with a battery-backed write cache, disk
               draining can be a reasonable choice.

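               For example, on storage with a battery-backed write cache, one
               might rely on disk draining alone and turn barriers and flushes
               off (a sketch, assuming the yes/no option syntax shown above;
               minor number 0 is a placeholder; disk-drain stays at its
               default of yes):

                   drbdsetup disk-options 0 --disk-barrier=no --disk-flushes=no
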
           --disk-timeout
               If the lower-level device on which a DRBD device stores its
               data does not finish an I/O request within the defined
               disk-timeout, DRBD treats this as a failure. The lower-level
               device is detached, and the device's disk state advances to
               Diskless. If DRBD is connected to one or more peers, the failed
               request is passed on to one of them.

               This option is dangerous and may lead to kernel panic!

               "Aborting" requests, or force-detaching the disk, is intended
               for completely blocked/hung local backing devices which no
               longer complete requests at all, not even with error
               completions. In this situation, usually a hard-reset and
               failover is the only way out.

               By "aborting", basically faking a local error-completion, we
               allow for a more graceful switchover by cleanly migrating
               services. Still, the affected node has to be rebooted "soon".

               By completing these requests, we allow the upper layers to
               re-use the associated data pages.

               If later the local backing device "recovers", and now DMAs some
               data from disk into the original request pages, in the best
               case it will just put random data into unused pages; but
               typically it will corrupt meanwhile completely unrelated data,
               causing all sorts of damage.

               This means that a delayed successful completion, especially for
               READ requests, is a reason to panic(). We assume that a delayed
               *error* completion is OK, though we still will complain noisily
               about it.

               The default value of disk-timeout is 0, which stands for an
               infinite timeout. Timeouts are specified in units of 0.1
               seconds. This option is available since DRBD 8.3.12.

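               Since the unit is 0.1 seconds, a ten-second timeout on the
               device with minor number 0 (a placeholder) would be configured
               as:

                   drbdsetup disk-options 0 --disk-timeout=100
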
           --md-flushes
               Enable disk flushes and disk barriers on the meta-data device.
               This option is enabled by default. See the disk-flushes
               parameter.

           --on-io-error handler
               Configure how DRBD reacts to I/O errors on a lower-level
               device. The following policies are defined:

               pass_on
                   Change the disk status to Inconsistent, mark the failed
                   block as inconsistent in the bitmap, and retry the I/O
                   operation on a remote cluster node.

               call-local-io-error
                   Call the local-io-error handler (see the handlers section).

               detach
                   Detach the lower-level device and continue in diskless
                   mode.

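               For example, to detach a failing lower-level device instead of
               passing errors on (minor number 0 is a placeholder):

                   drbdsetup disk-options 0 --on-io-error=detach
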
           --read-balancing policy
               Distribute read requests among cluster nodes as defined by
               policy. The supported policies are prefer-local (the default),
               prefer-remote, round-robin, least-pending,
               when-congested-remote, 32K-striping, 64K-striping,
               128K-striping, 256K-striping, 512K-striping and 1M-striping.

               This option is available since DRBD 8.4.1.

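               For example (a sketch; minor number 0 is a placeholder):

                   drbdsetup disk-options 0 --read-balancing=round-robin
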
           --resync-after minor
               Define that a device should only resynchronize after the
               specified other device. By default, no order between devices is
               defined, and all devices will resynchronize in parallel.
               Depending on the configuration of the lower-level devices, and
               the available network and disk bandwidth, this can slow down
               the overall resync process. This option can be used to form a
               chain or tree of dependencies among devices.

           --size size
               Specify the size of the lower-level device explicitly instead
               of determining it automatically. The device size must be
               determined once and is remembered for the lifetime of the
               device. In order to determine it automatically, all the
               lower-level devices on all nodes must be attached, and all
               nodes must be connected. If the size is specified explicitly,
               this is not necessary. The size value is assumed to be in units
               of sectors (512 bytes) by default.

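               For example, attaching a backing device with internal meta data
               and an explicit size of 2097152 sectors (1 GiB) might look like
               this (the minor number and device name are placeholders):

                   drbdsetup attach 0 /dev/sdb1 /dev/sdb1 internal --size=2097152
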
           --discard-zeroes-if-aligned {yes | no}
               There are several aspects to discard/trim/unmap support on
               linux block devices. Even if discard is supported in general,
               it may fail silently, or may partially ignore discard requests.
               Devices also announce whether reading from unmapped blocks
               returns defined data (usually zeroes), or undefined data
               (possibly old data, possibly garbage).

               If on different nodes, DRBD is backed by devices with differing
               discard characteristics, discards may lead to data divergence
               (old data or garbage left over on one backend, zeroes due to
               unmapped areas on the other backend). Online verify would now
               potentially report tons of spurious differences. While probably
               harmless for most use cases (fstrim on a file system), DRBD
               cannot have that.

               To play safe, we have to disable discard support, if our local
               backend (on a Primary) does not support
               "discard_zeroes_data=true". We also have to translate discards
               to explicit zero-out on the receiving side, unless the
               receiving side (Secondary) supports "discard_zeroes_data=true",
               thereby allocating areas that were supposed to be unmapped.

               There are some devices (notably the LVM/DM thin provisioning)
               that are capable of discard, but announce
               discard_zeroes_data=false. In the case of DM-thin, discards
               aligned to the chunk size will be unmapped, and reading from
               unmapped sectors will return zeroes. However, unaligned partial
               head or tail areas of discard requests will be silently
               ignored.

               If we now add a helper to explicitly zero-out these unaligned
               partial areas, while passing on the discard of the aligned full
               chunks, we effectively achieve discard_zeroes_data=true on such
               devices.

               Setting discard-zeroes-if-aligned to yes will allow DRBD to use
               discards, and to announce discard_zeroes_data=true, even on
               backends that announce discard_zeroes_data=false.

               Setting discard-zeroes-if-aligned to no will cause DRBD to
               always fall back to zero-out on the receiving side, and to not
               even announce discard capabilities on the Primary, if the
               respective backend announces discard_zeroes_data=false.

               We used to ignore the discard_zeroes_data setting completely.
               To not break established and expected behaviour, and suddenly
               cause fstrim on thin-provisioned LVs to run out-of-space
               instead of freeing up space, the default value is yes.

               This option is available since 8.4.7.

           --rs-discard-granularity byte
               When rs-discard-granularity is set to a non-zero, positive
               value, DRBD tries to do a resync operation in requests of this
               size. In case such a block contains only zero bytes on the sync
               source node, the sync target node will issue a
               discard/trim/unmap command for the area.

               The value is constrained by the discard granularity of the
               backing block device. In case rs-discard-granularity is not a
               multiple of the discard granularity of the backing block
               device, DRBD rounds it up. The feature only becomes active if
               the backing block device reads back zeroes after a discard
               command.

               The default value is 0. This option is available since
               8.4.7.

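               For example, assuming a DM-thin backend with a 64 KiB chunk
               size (a hypothetical setup; minor number 0 is a placeholder),
               resync requests could be aligned to it with:

                   drbdsetup disk-options 0 --rs-discard-granularity=65536
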
       drbdsetup peer-device-options resource peer_node_id volume
           These are options that affect the peer's device.

           --c-delay-target delay_target,
           --c-fill-target fill_target,
           --c-max-rate max_rate,
           --c-plan-ahead plan_time
               Dynamically control the resync speed. The following modes are
               available:

               ·   Dynamic control with fill target (default). Enabled when
                   c-plan-ahead is non-zero and c-fill-target is non-zero. The
                   goal is to fill the buffers along the data path with a
                   defined amount of data. This mode is recommended when
                   DRBD-proxy is used. Configured with c-plan-ahead,
                   c-fill-target and c-max-rate.

               ·   Dynamic control with delay target. Enabled when
                   c-plan-ahead is non-zero (default) and c-fill-target is
                   zero. The goal is to have a defined delay along the path.
                   Configured with c-plan-ahead, c-delay-target and
                   c-max-rate.

               ·   Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD
                   will try to perform resync I/O at a fixed rate. Configured
                   with resync-rate.

               The c-plan-ahead parameter defines how fast DRBD adapts to
               changes in the resync speed. It should be set to five times the
               network round-trip time or more. The default value of
               c-plan-ahead is 20, in units of 0.1 seconds.

               The c-fill-target parameter defines how much resync data
               DRBD should aim to have in-flight at all times. Common values
               for "normal" data paths range from 4K to 100K. The default
               value of c-fill-target is 100, in units of sectors.

               The c-delay-target parameter defines the delay in the resync
               path that DRBD should aim for. This should be set to five times
               the network round-trip time or more. The default value of
               c-delay-target is 10, in units of 0.1 seconds.

               The c-max-rate parameter limits the maximum bandwidth used by
               dynamically controlled resyncs. Setting this to zero removes
               the limitation (since DRBD 9.0.28). It should be set to either
               the bandwidth available between the DRBD hosts and the machines
               hosting DRBD-proxy, or to the available disk bandwidth. The
               default value of c-max-rate is 102400, in units of KiB/s.

               Dynamic resync speed control is available since DRBD 8.3.9.

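               For example, fill-target based control for volume 0 towards the
               peer with node-id 1 of resource r0 (all placeholder names; 4096
               sectors correspond to 2 MiB in flight) could be tuned with:

                   drbdsetup peer-device-options r0 1 0 --c-plan-ahead=20 \
                       --c-fill-target=4096
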
           --c-min-rate min_rate
               A node which is primary and sync-source has to schedule
               application I/O requests and resync I/O requests. The
               c-min-rate parameter limits how much bandwidth is available for
               resync I/O; the remaining bandwidth is used for application
               I/O.

               A c-min-rate value of 0 means that there is no limit on the
               resync I/O bandwidth. This can slow down application I/O
               significantly. Use a value of 1 (1 KiB/s) for the lowest
               possible resync rate.

               The default value of c-min-rate is 250, in units of KiB/s.

           --resync-rate rate
               Define how much bandwidth DRBD may use for resynchronizing.
               DRBD allows "normal" application I/O even during a resync. If
               the resync takes up too much bandwidth, application I/O can
               become very slow. This parameter allows you to avoid that.
               Please note that this option only works when the dynamic resync
               controller is disabled.

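               For example, a fixed resync rate of 10 MiB/s, with the dynamic
               controller disabled by setting c-plan-ahead to zero (resource
               r0 and node-id 1 are placeholders, and the rate is assumed to
               be given in KiB/s like the other rate options):

                   drbdsetup peer-device-options r0 1 0 --c-plan-ahead=0 \
                       --resync-rate=10240
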
       drbdsetup check-resize minor
           Remember the current size of the lower-level device of the
           specified replicated device. Used by drbdadm. The size information
           is stored in the file /var/lib/drbd/drbd-minor-minor.lkbd.

       drbdsetup new-peer resource peer_node_id,
       drbdsetup net-options resource peer_node_id
           The new-peer command creates a connection within a resource. The
           resource must have been created with drbdsetup new-resource. The
           net-options command changes the network options of an existing
           connection. Before a connection can be activated with the connect
           command, at least one path needs to be added with the new-path
           command. Available options:

           --after-sb-0pri policy
               Define how to react if a split-brain scenario is detected and
               neither of the two nodes is in primary role. (We detect
               split-brain scenarios when two nodes connect; split-brain
               decisions are always between two nodes.) The defined policies
               are:

               disconnect
                   No automatic resynchronization; simply disconnect.

               discard-younger-primary,
               discard-older-primary
                   Resynchronize from the node which became primary first
                   (discard-younger-primary) or last (discard-older-primary).
                   If both nodes became primary independently, the
                   discard-least-changes policy is used.

               discard-zero-changes
                   If only one of the nodes wrote data since the split brain
                   situation was detected, resynchronize from this node to the
                   other. If both nodes wrote data, disconnect.

               discard-least-changes
                   Resynchronize from the node with more modified blocks.

               discard-node-nodename
                   Always resynchronize to the named node.

           --after-sb-1pri policy
               Define how to react if a split-brain scenario is detected, with
               one node in primary role and one node in secondary role. (We
               detect split-brain scenarios when two nodes connect, so
               split-brain decisions are always between two nodes.) The
               defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               consensus
                   Discard the data on the secondary node if the after-sb-0pri
                   algorithm would also discard the data on the secondary
                   node. Otherwise, disconnect.

               violently-as0p
                   Always take the decision of the after-sb-0pri algorithm,
                   even if it causes an erratic change of the primary's view
                   of the data. This is only useful if a single-node file
                   system (i.e., not OCFS2 or GFS) with the
                   allow-two-primaries flag is used. This option can cause the
                   primary node to crash, and should not be used.

               discard-secondary
                   Discard the data on the secondary node.

               call-pri-lost-after-sb
                   Always take the decision of the after-sb-0pri algorithm. If
                   the decision is to discard the data on the primary node,
                   call the pri-lost-after-sb handler on the primary node.

           --after-sb-2pri policy
               Define how to react if a split-brain scenario is detected and
               both nodes are in primary role. (We detect split-brain
               scenarios when two nodes connect, so split-brain decisions are
               always between two nodes.) The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               violently-as0p
                   See the violently-as0p policy for after-sb-1pri.

               call-pri-lost-after-sb
                   Call the pri-lost-after-sb helper program on one of the
                   machines unless that machine can demote to secondary. The
                   helper program is expected to reboot the machine, which
                   brings the node into a secondary role. Which machine runs
                   the helper program is determined by the after-sb-0pri
                   strategy.

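               For example, a conservative combination of the three after-sb
               policies above (resource r0 and peer node-id 1 are
               placeholders) could be set with:

                   drbdsetup net-options r0 1 \
                       --after-sb-0pri=discard-zero-changes \
                       --after-sb-1pri=consensus --after-sb-2pri=disconnect
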
           --allow-two-primaries
               The most common way to configure DRBD devices is to allow only
               one node to be primary (and thus writable) at a time.

               In some scenarios it is preferable to allow two nodes to be
               primary at once; a mechanism outside of DRBD then must make
               sure that writes to the shared, replicated device happen in a
               coordinated way. This can be done with a shared-storage cluster
               file system like OCFS2 and GFS, or with virtual machine images
               and a virtual machine manager that can migrate virtual machines
               between physical machines.

               The allow-two-primaries parameter tells DRBD to allow two nodes
               to be primary at the same time. Never enable this option when
               using a non-distributed file system; otherwise, data corruption
               and node crashes will result!

           --always-asbp
               Normally the automatic after-split-brain policies are only used
               if the current states of the UUIDs do not indicate the presence
               of a third node.

               With this option you request that the automatic
               after-split-brain policies are used as long as the data sets of
               the nodes are somehow related. This might cause a full sync, if
               the UUIDs indicate the presence of a third node. (Or double
               faults led to strange UUID sets.)

           --connect-int time
               As soon as a connection between two nodes is configured with
               drbdsetup connect, DRBD immediately tries to establish the
               connection. If this fails, DRBD waits for connect-int seconds
               and then repeats. The default value of connect-int is 10
               seconds.

           --cram-hmac-alg hash-algorithm
               Configure the hash-based message authentication code (HMAC) or
               secure hash algorithm to use for peer authentication. The
               kernel supports a number of different algorithms, some of which
               may be loadable as kernel modules. See the shash algorithms
               listed in /proc/crypto. By default, cram-hmac-alg is unset.
               Peer authentication also requires a shared-secret to be
               configured.

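               For example, peer authentication with HMAC-SHA256 (assuming the
               kernel provides the sha256 shash algorithm; resource r0, peer
               node-id 1, and the secret are placeholders; see also the
               shared-secret parameter below) could be enabled with:

                   drbdsetup net-options r0 1 --cram-hmac-alg=sha256 \
                       --shared-secret=mysecret
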
           --csums-alg hash-algorithm
               Normally, when two nodes resynchronize, the sync target
               requests a piece of out-of-sync data from the sync source, and
               the sync source sends the data. With many usage patterns, a
               significant number of those blocks will actually be identical.

               When a csums-alg algorithm is specified, when requesting a
               piece of out-of-sync data, the sync target also sends along a
               hash of the data it currently has. The sync source compares
               this hash with its own version of the data. It sends the sync
               target the new data if the hashes differ, and tells it that the
               data are the same otherwise. This reduces the network bandwidth
               required, at the cost of higher cpu utilization and possibly
               increased I/O on the sync target.

               The csums-alg can be set to one of the secure hash algorithms
               supported by the kernel; see the shash algorithms listed in
               /proc/crypto. By default, csums-alg is unset.

           --csums-after-crash-only
               Enabling this option (and csums-alg, above) makes it possible
               to use the checksum based resync only for the first resync
               after a primary crash, but not for later "network hiccups".

               In most cases, blocks that are marked as need-to-be-resynced
               are in fact changed, so calculating checksums, and both reading
               and writing the blocks on the resync target, is all effective
               overhead.

               The advantage of checksum based resync is mostly after primary
               crash recovery, where the recovery marked larger areas (those
               covered by the activity log) as need-to-be-resynced, just in
               case. Introduced in 8.4.5.

           --data-integrity-alg alg
               DRBD normally relies on the data integrity checks built into
               the TCP/IP protocol, but if a data integrity algorithm is
               configured, it will additionally use this algorithm to make
               sure that the data received over the network match what the
               sender has sent. If a data integrity error is detected, DRBD
               will close the network connection and reconnect, which will
               trigger a resync.

               The data-integrity-alg can be set to one of the secure hash
               algorithms supported by the kernel; see the shash algorithms
               listed in /proc/crypto. By default, this mechanism is turned
               off.

               Because of the CPU overhead involved, we recommend not to use
               this option in production environments. Also see the notes on
               data integrity below.

           --fencing fencing_policy
               Fencing is a preventive measure to avoid situations where both
               nodes are primary and disconnected. This is also known as a
               split-brain situation. DRBD supports the following fencing
               policies:

               dont-care
                   No fencing actions are taken. This is the default policy.

               resource-only
                   If a node becomes a disconnected primary, it tries to fence
                   the peer. This is done by calling the fence-peer handler.
                   The handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there.

               resource-and-stonith
                   If a node becomes a disconnected primary, it freezes all
                   its IO operations and calls its fence-peer handler. The
                   fence-peer handler is supposed to reach the peer over an
                   alternative communication path and call 'drbdadm outdate
                   minor' there. In case it cannot do that, it should stonith
                   the peer. IO is resumed as soon as the situation is
                   resolved. In case the fence-peer handler fails, I/O can be
                   resumed manually with 'drbdadm resume-io'.

           --ko-count number
               If a secondary node fails to complete a write request in
               ko-count times the timeout parameter, it is excluded from the
               cluster. The primary node then sets the connection to this
               secondary node to Standalone. To disable this feature, you
               should explicitly set it to 0; defaults may change between
               versions.

           --max-buffers number
               Limits the memory usage per DRBD minor device on the receiving
               side, or for internal buffers during resync or online-verify.
               The unit is PAGE_SIZE, which is 4 KiB on most systems. The
               minimum possible setting is hard-coded to 32 (=128 KiB). These
               buffers are used to hold data blocks while they are written
               to/read from disk. To avoid possible distributed deadlocks on
               congestion, this setting is used as a throttle threshold rather
               than a hard limit. Once more than max-buffers pages are in use,
               further allocation from this pool is throttled. You want to
               increase max-buffers if you cannot saturate the IO backend on
               the receiving side.

           --max-epoch-size number
               Define the maximum number of write requests DRBD may issue
               before issuing a write barrier. The default value is 2048, with
               a minimum of 1 and a maximum of 20000. Setting this parameter
               to a value below 10 is likely to decrease performance.

           --on-congestion policy,
           --congestion-fill threshold,
           --congestion-extents threshold
               By default, DRBD blocks when the TCP send queue is full. This
               prevents applications from generating further write requests
               until more buffer space becomes available again.

               When DRBD is used together with DRBD-proxy, it can be better to
               use the pull-ahead on-congestion policy, which can switch DRBD
               into ahead/behind mode before the send queue is full. DRBD then
               records the differences between itself and the peer in its
               bitmap, but it no longer replicates them to the peer. When
               enough buffer space becomes available again, the node
               resynchronizes with the peer and switches back to normal
               replication.

               This has the advantage of not blocking application I/O even
               when the queues fill up, and the disadvantage that peer nodes
               can fall behind much further. Also, while resynchronizing, peer
               nodes will become inconsistent.

               The available congestion policies are block (the default) and
               pull-ahead. The congestion-fill parameter defines how much data
               is allowed to be "in flight" in this connection. The default
               value is 0, which disables this mechanism of congestion
               control, with a maximum of 10 GiBytes. The congestion-extents
               parameter defines how many bitmap extents may be active before
               switching into ahead/behind mode, with the same default and
               limits as the al-extents parameter. The congestion-extents
               parameter is effective only when set to a value smaller than
               al-extents.

               Ahead/behind mode is available since DRBD 8.3.10.

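               For example, switching to ahead/behind mode once roughly 1 MiB
               is in flight (resource r0 and peer node-id 1 are placeholders;
               the value assumes congestion-fill is given in sectors, so 2048
               sectors correspond to 1 MiB):

                   drbdsetup net-options r0 1 --on-congestion=pull-ahead \
                       --congestion-fill=2048
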
           --ping-int interval
               When the TCP/IP connection to a peer is idle for more than
               ping-int seconds, DRBD will send a keep-alive packet to make
               sure that a failed peer or network connection is detected
               reasonably soon. The default value is 10 seconds, with a
               minimum of 1 and a maximum of 120 seconds. The unit is seconds.

           --ping-timeout timeout
               Define the timeout for replies to keep-alive packets. If the
               peer does not reply within ping-timeout, DRBD will close and
               try to reestablish the connection. The default value is 0.5
               seconds, with a minimum of 0.1 seconds and a maximum of 3
               seconds. The unit is tenths of a second.

           --socket-check-timeout timeout
               In setups involving a DRBD-proxy and connections that
               experience a lot of buffer-bloat, it might be necessary to set
               ping-timeout to an unusually high value. By default DRBD uses
               the same value to wait if a newly established TCP connection is
               stable. Since the DRBD-proxy is usually located in the same
               data center, such a long wait time may hinder DRBD's connect
               process.

               In such setups socket-check-timeout should be set to at least
               the round-trip time between DRBD and DRBD-proxy, i.e. in most
               cases to 1.

               The default unit is tenths of a second; the default value is 0
               (which causes DRBD to use the value of ping-timeout instead).
               Introduced in 8.4.5.

           --protocol name
               Use the specified protocol on this connection. The supported
               protocols are:

               A
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk and the TCP/IP send buffer.

               B
                   Writes to the DRBD device complete as soon as they have
                   reached the local disk, and all peers have acknowledged the
                   receipt of the write requests.

               C
                   Writes to the DRBD device complete as soon as they have
                   reached the local and all remote disks.

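               For example, creating a connection that replicates
               synchronously (resource r0 and peer node-id 1 are
               placeholders):

                   drbdsetup new-peer r0 1 --protocol=C
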
           --rcvbuf-size size
               Configure the size of the TCP/IP receive buffer. A value of 0
               (the default) causes the buffer size to adjust dynamically.
               This parameter usually does not need to be set, but it can be
               set to a value up to 10 MiB. The default unit is bytes.

           --rr-conflict policy
               This option helps to solve the cases when the outcome of the
               resync decision is incompatible with the current role
               assignment in the cluster. The defined policies are:

               disconnect
                   No automatic resynchronization, simply disconnect.

               retry-connect
                   Disconnect now, and retry to connect immediately
                   afterwards.

               violently
                   Resync to the primary node is allowed, violating the
                   assumption that data on a block device are stable for one
                   of the nodes.  Do not use this option, it is dangerous.

               call-pri-lost
                   Call the pri-lost handler on one of the machines. The
                   handler is expected to reboot the machine, which puts it
                   into secondary role.

           --shared-secret secret
               Configure the shared secret used for peer authentication. The
               secret is a string of up to 64 characters. Peer authentication
               also requires the cram-hmac-alg parameter to be set.

           --sndbuf-size size
               Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
               / 8.2.7, a value of 0 (the default) causes the buffer size to
               adjust dynamically. Values below 32 KiB are harmful to the
               throughput on this connection. Large buffer sizes can be useful
               especially when protocol A is used over high-latency networks;
               the maximum value supported is 10 MiB.

           --tcp-cork
               By default, DRBD uses the TCP_CORK socket option to prevent the
               kernel from sending partial messages; this results in fewer and
               bigger packets on the network. Some network stacks can perform
               worse with this optimization. On these, the tcp-cork parameter
               can be used to turn this optimization off.

           --timeout time
               Define the timeout for replies over the network: if a peer node
               does not send an expected reply within the specified timeout,
               it is considered dead and the TCP/IP connection is closed. The
               timeout value must be lower than connect-int and lower than
               ping-int. The default is 6 seconds; the value is specified in
               tenths of a second.

           --use-rle
               Each replicated device on a cluster node has a separate bitmap
               for each of its peer devices. The bitmaps are used for tracking
               the differences between the local and peer device: depending on
               the cluster state, a disk range can be marked as different from
               the peer in the device's bitmap, in the peer device's bitmap,
               or in both bitmaps. When two cluster nodes connect, they
               exchange each other's bitmaps, and they each compute the union
               of the local and peer bitmap to determine the overall
               differences.

               Bitmaps of very large devices are also relatively large, but
               they usually compress very well using run-length encoding. This
               can save time and bandwidth for the bitmap transfers.

               The use-rle parameter determines if run-length encoding should
               be used. It is on by default since DRBD 8.4.0.

           --verify-alg hash-algorithm
               Online verification (drbdadm verify) computes and compares
               checksums of disk blocks (i.e., hash values) in order to detect
               if they differ. The verify-alg parameter determines which
               algorithm to use for these checksums. It must be set to one of
               the secure hash algorithms supported by the kernel before
               online verify can be used; see the shash algorithms listed in
               /proc/crypto.

               We recommend scheduling online verifications regularly during
               low-load periods, for example once a month. Also see the notes
               on data integrity below.

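               For example, enabling online verification with SHA-256
               (assuming the kernel provides the sha256 shash algorithm;
               resource r0 and peer node-id 1 are placeholders):

                   drbdsetup net-options r0 1 --verify-alg=sha256
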
       drbdsetup new-path resource peer_node_id local-addr remote-addr
           The new-path command creates a path within a connection. The
           connection must have been created with drbdsetup new-peer.
           local-addr and remote-addr refer to the local and remote protocol,
           network address, and port in the format
           [address-family:]address[:port]. The address families ipv4, ipv6,
           ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
           (Infiniband Sockets Direct Protocol), and sci are supported (sci is
           an alias for ssocks). If no address family is specified, ipv4 is
           assumed. For all address families except ipv6, the address uses
           IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
           is enclosed in brackets and uses IPv6 address notation (for
           example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.

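           For example, adding a path between two hosts (the resource name r0,
           peer node-id 1, and the addresses are placeholders):

               drbdsetup new-path r0 1 ipv4:10.0.0.1:7788 ipv4:10.0.0.2:7788
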
       drbdsetup connect resource peer_node_id
           The connect command activates a connection. That means that the
           DRBD driver will bind and listen on all local addresses of the
           connection's paths. It will begin to try to establish one or more
           paths of the connection. Available options:

           --tentative
               Only determine if a connection to the peer can be established
               and if a resync is necessary (and in which direction) without
               actually establishing the connection or starting the resync.
               Check the system log to see what DRBD would do without the
               --tentative option.

           --discard-my-data
               Discard the local data and resynchronize with the peer that has
               the most up-to-date data. Use this option to manually recover
               from a split-brain situation.

       drbdsetup del-peer resource peer_node_id
           The del-peer command removes a connection from a resource.

       drbdsetup del-path resource peer_node_id local-addr remote-addr
           The del-path command removes a path from a connection. Please note
           that it fails if the path is necessary to keep an established
           connection intact. In order to remove all paths, disconnect the
           connection first.

       drbdsetup cstate resource peer_node_id
           Show the current state of a connection. The connection is
           identified by the node-id of the peer; see the drbdsetup connect
           command.

       drbdsetup del-minor minor
           Remove a replicated device. No lower-level device may be attached;
           see drbdsetup detach.

       drbdsetup del-resource resource
           Remove a resource. All volumes and connections must be removed
           first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
           drbdsetup down can be used to remove a resource together with all
           its volumes and connections.

       drbdsetup detach minor
           Detach the lower-level device of a replicated device. Available
           options:

           --force
               Force the detach and return immediately. This puts the
               lower-level device into a failed state until all pending I/O
               has completed, and then detaches the device. Any I/O not yet
               submitted to the lower-level device (for example, because I/O
               on the device was suspended) is assumed to have failed.

       drbdsetup disconnect resource peer_node_id
           Remove a connection to a peer host. The connection is identified by
           the node-id of the peer; see the drbdsetup connect command.

       drbdsetup down {resource | all}
           Take a resource down by removing all volumes, connections, and the
           resource itself.

       drbdsetup dstate minor
           Show the current disk state of a lower-level device.

       drbdsetup events2 {resource | all}
           Show the current state of all configured DRBD objects, followed by
           all changes to the state.

           The output format is meant to be human as well as machine readable.
           Each line starts with a word that indicates the kind of event:
           exists for an existing object; create, destroy, and change if an
           object is created, destroyed, or changed; call or response if an
           event handler is called or it returns; or rename when the name of
           an object is changed. The second word indicates the object the
           event applies to: resource, device, connection, peer-device, path,
           helper, or a dash (-) to indicate that the current state has been
           dumped completely.

           The remaining words identify the object and describe the state that
           the object is in. Some special keys are worth mentioning:

           resource may_promote:{yes|no}
               Whether promoting to primary is expected to succeed. When
               quorum is enabled, this can be used to trigger failover. When
               may_promote:yes is reported on this node, then no writes are
               possible on any other node, which generally means that the
               application can be started on this node, even when it has been
               running on another.

           resource promotion_score:score
               An integer heuristic indicating the relative preference for
               promoting this resource. A higher score is better in terms of
               having local disks and having access to up-to-date data. The
               score may be positive even when some node is primary. It will
               be zero when promotion is impossible due to quorum or lack of
               any access to up-to-date data.

           Available options:

           --now
               Terminate after reporting the current state. The default is to
               continuously listen and report state changes.

           --poll
               This option is ignored unless --now is also given. In
               combination with --now, it prints the current state once and
               then reads from stdin. Whenever an n is read, this triggers
               another run. Newlines are ignored. Any other input terminates
               the command.

           --statistics
               Include statistics in the output.

           --diff
               Write information in the form of a diff between the old and new
               state. This helps simple tools to avoid (old) state tracking on
               their own.

           --full
               Write complete state information, especially on change events.
               This enables --statistics and --verbose.

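           For example, to dump the current state of all resources once,
           including statistics, one might run:

               drbdsetup events2 all --now --statistics
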
       drbdsetup get-gi resource peer_node_id volume
           Show the data generation identifiers for a device on a particular
           connection. The device is identified by its volume number. The
           connection is identified by the node-id of the peer; see the
           drbdsetup connect command.

           The output consists of the current UUID, bitmap UUID, and the first
           two history UUIDs, followed by a set of flags. The current UUID and
           history UUIDs are device specific; the bitmap UUID and flags are
           peer device specific. This command only shows the first two history
           UUIDs. Internally, DRBD maintains one history UUID for each
           possible peer device.

       drbdsetup invalidate minor
           Replace the local data of a device with that of a peer. All the
           local data will be marked out-of-sync, and a resync with the
           specified peer device will be initiated.

       drbdsetup invalidate-remote resource peer_node_id volume
           Replace a peer device's data of a resource with the local data. The
           peer device's data will be marked out-of-sync, and a resync from
           the local node to the specified peer will be initiated.

       drbdsetup new-current-uuid minor
           Generate a new current UUID and rotate all other UUID values. This
           has at least two use cases, namely to skip the initial sync, and to
           reduce network bandwidth when starting in a single node
           configuration and then later (re-)integrating a remote site.

           Available option:

           --clear-bitmap
               Clears the sync bitmap in addition to generating a new current
               UUID.

           This can be used to skip the initial sync, if you want to start
           from scratch. This use-case only works on "Just Created" meta
           data. Necessary steps:

            1. On both nodes, initialize meta data and configure the device.

               drbdadm create-md --force res/volume-number

            2. They need to do the initial handshake, so they know their
               sizes.

               drbdadm up res

            3. They are now Connected Secondary/Secondary
               Inconsistent/Inconsistent. Generate a new current-uuid and
               clear the dirty bitmap.

               drbdadm --clear-bitmap new-current-uuid res

            4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
               Make one side primary and create a file system.

               drbdadm primary res

               mkfs -t fs-type $(drbdadm sh-dev res)

           One obvious side-effect is that the replica is full of old garbage
           (unless you made them identical using other means), so any
           online-verify is expected to find any number of out-of-sync blocks.

           You must not use this on pre-existing data!  Even though it may
           appear to work at first glance, once you switch to the other node,
           your data is toast, as it never got replicated. So do not leave out
           the mkfs (or equivalent).

           This can also be used to shorten the initial resync of a cluster
           where the second node is added after the first node has gone into
           production, by means of disk shipping. This use-case works on
           disconnected devices only; the device may be in primary or
           secondary role.

           The necessary steps on the current active server are:

            1. drbdsetup new-current-uuid --clear-bitmap minor

            2. Take a copy of the current active server. E.g. by pulling a
               disk out of the RAID1 controller, or by copying with dd. You
               need to copy the actual data, and the meta data.

            3. drbdsetup new-current-uuid minor

           Now add the disk to the new secondary node, and join it to the
           cluster. You will get a resync of those parts that were changed
           since the first call to drbdsetup in step 1.

       drbdsetup new-minor resource minor volume
           Create a new replicated device within a resource. The command
           creates a block device inode for the replicated device (by default,
           /dev/drbdminor). The volume number identifies the device within the
           resource.

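           For example, creating resource r0 with node-id 0 (see new-resource
           below) and a replicated device /dev/drbd0 as its volume 0 (all
           names and numbers are placeholders) might look like:

               drbdsetup new-resource r0 0
               drbdsetup new-minor r0 0 0
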
       drbdsetup new-resource resource node_id,
       drbdsetup resource-options resource
           The new-resource command creates a new resource. The
           resource-options command changes the resource options of an
           existing resource. Available options:

           --auto-promote bool-value
               A resource must be promoted to primary role before any of its
               devices can be mounted or opened for writing.

               Before DRBD 9, this could only be done explicitly ("drbdadm
               primary"). Since DRBD 9, the auto-promote parameter allows a
               resource to be automatically promoted to primary role when one
               of its devices is mounted or opened for writing. As soon as all
               devices are unmounted or closed with no more remaining users,
               the role of the resource changes back to secondary.

               Automatic promotion only succeeds if the cluster state allows
               it (that is, if an explicit drbdadm primary command would
               succeed). Otherwise, mounting or opening the device fails as it
               already did before DRBD 9: the mount(2) system call fails with
               errno set to EROFS (Read-only file system); the open(2) system
               call fails with errno set to EMEDIUMTYPE (wrong medium type).

               Irrespective of the auto-promote parameter, if a device is
               promoted explicitly (drbdadm primary), it also needs to be
               demoted explicitly (drbdadm secondary).

               The auto-promote parameter is available since DRBD 9.0.0, and
               defaults to yes.

1031           --cpu-mask cpu-mask
1032               Set the cpu affinity mask for DRBD kernel threads. The cpu mask
1033               is specified as a hexadecimal number. The default value is 0,
1034               which lets the scheduler decide which kernel threads run on
1035               which CPUs. CPU numbers in cpu-mask which do not exist in the
1036               system are ignored.
1037
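               For example, to pin the kernel threads of a (hypothetical)
               resource r0 to CPUs 0 and 1 (hexadecimal mask 3):

                   drbdsetup resource-options r0 --cpu-mask=3
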
1038           --on-no-data-accessible policy
1039               Determine how to deal with I/O requests when the requested data
1040               is not available locally or remotely (for example, when all
1041               disks have failed). The defined policies are:
1042
1043               io-error
1044                   System calls fail with errno set to EIO.
1045
1046               suspend-io
1047                   The resource suspends I/O. I/O can be resumed by
1048                   (re)attaching the lower-level device, by connecting to a
1049                   peer which has access to the data, or by forcing DRBD to
1050                   resume I/O with drbdadm resume-io res. When no data is
1051                   available, forcing I/O to resume will result in the same
1052                   behavior as the io-error policy.
1053
1054               This setting is available since DRBD 8.3.9; the default policy
1055               is io-error.
1056
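               For example (the resource name r0 is a placeholder), to
               suspend rather than fail I/O and resume it manually later:

                   drbdsetup resource-options r0 --on-no-data-accessible=suspend-io
                   # after reattaching a disk or reconnecting a peer:
                   drbdadm resume-io r0
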
1057           --peer-ack-window value
1058               On each node and for each device, DRBD maintains a bitmap of
1059               the differences between the local and remote data for each peer
1060               device. For example, in a three-node setup (nodes A, B, C) each
1061               with a single device, every node maintains one bitmap for each
1062               of its peers.
1063
1064               When nodes receive write requests, they know how to update the
1065               bitmaps for the writing node, but not how to update the bitmaps
1066               between themselves. In this example, when a write request
1067               propagates from node A to B and C, nodes B and C know that they
1068               have the same data as node A, but not whether or not they both
1069               have the same data.
1070
1071               As a remedy, the writing node occasionally sends peer-ack
1072               packets to its peers which tell them which state they are in
1073               relative to each other.
1074
1075               The peer-ack-window parameter specifies how much data a primary
1076               node may send before sending a peer-ack packet. A low value
1077               causes increased network traffic; a high value causes less
1078               network traffic but higher memory consumption on secondary
1079               nodes and higher resync times between the secondary nodes after
1080               primary node failures. (Note: peer-ack packets may be sent due
1081               to other reasons as well, e.g. membership changes or expiry of
1082               the peer-ack-delay timer.)
1083
1084               The default value for peer-ack-window is 2 MiB; the default
1085               unit is sectors. This option is available since 9.0.0.
1086
1087           --peer-ack-delay expiry-time
1088               If after the last finished write request no new write request
1089               gets issued for expiry-time, then a peer-ack packet is sent. If
1090               a new write request is issued before the timer expires, the
1091               timer gets reset to expiry-time. (Note: peer-ack packets may be
1092               sent due to other reasons as well, e.g. membership changes or
1093               the peer-ack-window option.)
1094
1095               This parameter may influence resync behavior on remote nodes.
1096               Peer nodes need to wait until they receive a peer-ack before
1097               releasing a lock on an AL-extent. Resync operations between
1098               peers may need to wait for these locks.
1099
1100               The default value for peer-ack-delay is 100 milliseconds; the
1101               default unit is milliseconds. This option is available since
1102               9.0.0.
1103
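               For example (resource name and values chosen purely for
               illustration), to send peer-acks after at most 4 MiB of
               writes or 50 milliseconds of write inactivity:

                   drbdsetup resource-options r0 --peer-ack-window=4M \
                       --peer-ack-delay=50
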
1104           --quorum value
1105               When activated, a cluster partition requires quorum in order to
1106               modify the replicated data set. That means a node in the
1107               cluster partition can only be promoted to primary if the
1108               cluster partition has quorum. Every node with a disk that is
1109               directly connected to the node to be promoted counts towards
1110               quorum. If a primary node needs to execute a write request,
1111               but the cluster partition has lost quorum, it will freeze I/O
1112               or reject the write request with an error (depending on the
1113               on-no-quorum setting). Upon losing quorum, a primary always
1114               invokes the quorum-lost handler. The handler is intended for
1115               notification purposes; its return code is ignored.
1116
1117               The option's value may be set to off, majority, all, or a
1118               numeric value. If you set it to a numeric value, make sure
1119               that the value is greater than half of your number of nodes.
1120               Quorum is a mechanism to avoid data divergence; it may be used
1121               instead of fencing when there are more than two replicas. It
1122               defaults to off.
1123
1124               If all missing nodes are marked as outdated, a partition
1125               always has quorum, no matter how small it is. That is, if you
1126               disconnect all secondary nodes gracefully, a single primary
1127               continues to operate. The moment a single secondary is lost,
1128               however, it has to be assumed that it forms a partition with
1129               all the missing outdated nodes. If the local partition might
1130               then be smaller than the other, quorum is lost at that moment.
1131
1132               If you want to allow permanently diskless nodes to gain
1133               quorum, it is recommended not to use majority or all, but to
1134               specify an absolute number instead, since DRBD's heuristic to
1135               determine the complete number of diskful nodes in the cluster
1136               is unreliable.
1137
1138               The quorum implementation is available starting with the DRBD
1139               kernel driver version 9.0.7.
1140
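               For example, in a three-node cluster, quorum could be enabled
               like this (the resource name r0 is a placeholder; see also
               the on-no-quorum option below):

                   drbdsetup resource-options r0 --quorum=majority \
                       --on-no-quorum=io-error
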
1141           --quorum-minimum-redundancy value
1142               This option sets the minimum required number of nodes with an
1143               UpToDate disk to allow a partition to gain quorum. This is a
1144               requirement separate from the plain quorum option.
1145
1146               The option's value might be set to off, majority, all or a
1147               numeric value. If you set it to a numeric value, make sure that
1148               the value is greater than half of your number of nodes.
1149
1150               If you want to allow permanently diskless nodes to gain
1151               quorum, it is recommended not to use majority or all, but to
1152               specify an absolute number instead, since DRBD's heuristic to
1153               determine the complete number of diskful nodes in the cluster
1154               is unreliable.
1155
1156               This option is available starting with the DRBD kernel driver
1157               version 9.0.10.
1158
1159           --on-no-quorum {io-error | suspend-io}
1160               By default, DRBD freezes I/O on a device that has lost quorum.
1161               By setting on-no-quorum to io-error, DRBD instead completes
1162               all I/O operations with an error if quorum is lost.
1163
1164               The on-no-quorum option is available starting with the DRBD
1165               kernel driver version 9.0.8.
1166
1167
1168       drbdsetup outdate minor
1169           Mark the data on a lower-level device as outdated. This is used for
1170           fencing, and prevents the resource the device is part of from
1171           becoming primary in the future. See the --fencing disk option.
1172
1173       drbdsetup pause-sync resource peer_node_id volume
1174           Stop resynchronizing between a local and a peer device by setting
1175           the local pause flag. The resync can only resume if the pause flags
1176           on both sides of a connection are cleared.
1177
1178       drbdsetup primary resource
1179           Change the role of a node in a resource to primary. This allows the
1180           replicated devices in this resource to be mounted or opened for
1181           writing. Available options:
1182
1183           --overwrite-data-of-peer
1184               This option is an alias for the --force option.
1185
1186           --force
1187               Force the resource to become primary even if some devices are
1188               not guaranteed to have up-to-date data. This option is used to
1189               turn one of the nodes in a newly created cluster into the
1190               primary node, or when manually recovering from a disaster.
1191
1192               Note that this can lead to split-brain scenarios. Also, when
1193               forcefully turning an inconsistent device into an up-to-date
1194               device, it is highly recommended to use any integrity checks
1195               available (such as a filesystem check) to make sure that the
1196               device can at least be used without crashing the system.
1197
1198           Note that DRBD usually only allows one node in a cluster to be in
1199           primary role at any time; this allows DRBD to coordinate access to
1200           the devices in a resource across nodes. The --allow-two-primaries
1201           network option changes this; in that case, a mechanism outside of
1202           DRBD needs to coordinate device access.
1203
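           For example, to turn the local node into the primary of a newly
           created (hypothetical) resource r0, despite no node having
           up-to-date data yet:

               drbdsetup primary r0 --force
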
1204       drbdsetup resize minor
1205           Reexamine the size of the lower-level devices of a replicated
1206           device on all nodes. This command is called after the lower-level
1207           devices on all nodes have been grown to adjust the size of the
1208           replicated device. Available options:
1209
1210           --assume-peer-has-space
1211               Resize the device even if some of the peer devices are not
1212               connected at the moment. DRBD will try to resize the peer
1213               devices when they next connect. It will refuse to connect to a
1214               peer device which is too small.
1215
1216           --assume-clean
1217               Do not resynchronize the added disk space; instead, assume that
1218               it is identical on all nodes. This option can be used when the
1219               disk space is uninitialized and differences do not matter, or
1220               when it is known to be identical on all nodes. See the
1221               drbdsetup verify command.
1222
1223           --size val
1224               This option can be used to shrink the usable size of a drbd
1225               device online. It is the user's responsibility to make sure a
1226               file system on the device is not truncated by that operation.
1227
1228           --al-stripes val --al-stripe-size-kB val
1229               These options may be used to change the layout of the activity
1230               log online. In case of internal meta data this may involve
1231               shrinking the user-visible size at the same time (using the
1232               --size option) or increasing the available space on the
1233               backing devices.
1234
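           For example, after growing the lower-level devices of the
           replicated device with minor number 13 (a placeholder) on all
           nodes, while one peer happens to be disconnected:

               drbdsetup resize 13 --assume-peer-has-space
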
1235
1236       drbdsetup resume-io minor
1237           Resume I/O on a replicated device. See the --fencing net option.
1238
1239       drbdsetup resume-sync resource peer_node_id volume
1240           Allow resynchronization to resume by clearing the local sync pause
1241           flag.
1242
1243       drbdsetup role resource
1244           Show the current role of a resource.
1245
1246       drbdsetup secondary resource
1247           Change the role of a node in a resource to secondary. This command
1248           fails if the replicated device is in use.
1249
1250       drbdsetup show {resource | all}
1251           Show the current configuration of a resource, or of all resources.
1252           Available options:
1253
1254           --show-defaults
1255               Show all configuration parameters, even the ones with default
1256               values. Normally, parameters with default values are not shown.
1257
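           For example, to show a (hypothetical) resource r0 including
           parameters that still have their default values:

               drbdsetup show r0 --show-defaults
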
1258
1259       drbdsetup show-gi resource peer_node_id volume
1260           Show the data generation identifiers for a device on a particular
1261           connection. In addition, explain the output. The output otherwise
1262           is the same as in the drbdsetup get-gi command.
1263
1264       drbdsetup state
1265           This is an alias for drbdsetup role. Deprecated.
1266
1267       drbdsetup status {resource | all}
1268           Show the status of a resource, or of all resources. The output
1269           consists of one paragraph for each configured resource. Each
1270           paragraph contains one line for the resource itself, followed by
1271           one line for each device and one line for each connection. The
1272           device and connection lines are indented. The connection lines
1273           are followed by one line for each peer device; these lines are
1274           indented against the connection line.
1275
1276           Long lines are wrapped at terminal width and indented to indicate
1277           how the lines belong together. Available options:
1278
1279           --verbose
1280               Include more information in the output even when it is likely
1281               redundant or irrelevant.
1282
1283           --statistics
1284               Include data transfer statistics in the output.
1285
1286           --color={always | auto | never}
1287               Colorize the output. With --color=auto, drbdsetup emits color
1288               codes only when standard output is connected to a terminal.
1289
1290           For example, the non-verbose output for a resource with only one
1291           connection and only one volume could look like this:
1292
1293               drbd0 role:Primary
1294                 disk:UpToDate
1295                 host2.example.com role:Secondary
1296                   disk:UpToDate
1297
1298
1299           With the --verbose option, the same resource could be reported as:
1300
1301               drbd0 node-id:1 role:Primary suspended:no
1302                 volume:0 minor:1 disk:UpToDate blocked:no
1303                 host2.example.com local:ipv4:192.168.123.4:7788
1304                     peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1305                     role:Secondary congested:no
1306                   volume:0 replication:Connected disk:UpToDate resync-suspended:no
1307
1308
1309
1310       drbdsetup suspend-io minor
1311           Suspend I/O on a replicated device. It is not usually necessary to
1312           use this command.
1313
1314       drbdsetup verify resource peer_node_id volume
1315           Start online verification, change which part of the device will be
1316           verified, or stop online verification. The command requires the
1317           specified peer to be connected.
1318
1319           Online verification compares each disk block on the local and peer
1320           node. Blocks which differ between the nodes are marked as
1321           out-of-sync, but they are not automatically brought back into sync.
1322           To bring them into sync, the resource must be disconnected and
1323           reconnected. Progress can be monitored in the output of drbdsetup
1324           status --statistics. Available options:
1325
1326           --start position
1327               Define where online verification should start. This parameter
1328               is ignored if online verification is already in progress. If
1329               the start parameter is not specified, online verification will
1330               continue where it was interrupted (if the connection to the
1331               peer was lost while verifying), after the previous stop sector
1332               (if the previous online verification has finished), or at the
1333               beginning of the device (if the end of the device was reached,
1334               or online verify has not run before).
1335
1336               The position on disk is specified in disk sectors (512 bytes)
1337               by default.
1338
1339           --stop position
1340               Define where online verification should stop. If online
1341               verification is already in progress, the stop position of the
1342               active online verification process is changed. Use this to stop
1343               online verification.
1344
1345               The position on disk is specified in disk sectors (512 bytes)
1346               by default.
1347
1348           Also see the notes on data integrity in the drbd.conf(5) manual
1349           page.
1350
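           For example, to verify the first gigabyte (2097152 sectors of
           512 bytes) of volume 0 towards peer node 1 of a (hypothetical)
           resource r0, and to watch the progress:

               drbdsetup verify r0 1 0 --start=0 --stop=2097152
               drbdsetup status r0 --statistics
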
1351       drbdsetup wait-connect-volume resource peer_node_id volume,
1352       drbdsetup wait-connect-connection resource peer_node_id,
1353       drbdsetup wait-connect-resource resource,
1354       drbdsetup wait-sync-volume resource peer_node_id volume,
1355       drbdsetup wait-sync-connection resource peer_node_id,
1356       drbdsetup wait-sync-resource resource
1357           The wait-connect-* commands wait until a device on a peer is
1358           visible. The wait-sync-* commands wait until a device on a peer
1359           is up to date. Available options for these commands:
1360
1361           --degr-wfc-timeout timeout
1362               Define how long to wait until all peers are connected in case
1363               the cluster consisted of only a single node when the system
1364               went down. This parameter is usually set to a value smaller
1365               than wfc-timeout. The assumption here is that peers which were
1366               unreachable before a reboot are less likely to be reachable
1367               after the reboot, so waiting is less likely to help.
1368
1369               The timeout is specified in seconds. The default value is 0,
1370               which stands for an infinite timeout. Also see the wfc-timeout
1371               parameter.
1372
1373           --outdated-wfc-timeout timeout
1374               Define how long to wait until all peers are connected if all
1375               peers were outdated when the system went down. This parameter
1376               is usually set to a value smaller than wfc-timeout. The
1377               assumption here is that an outdated peer cannot have become
1378               primary in the meantime, so we don't need to wait for it as
1379               long as for a node which was alive before.
1380
1381               The timeout is specified in seconds. The default value is 0,
1382               which stands for an infinite timeout. Also see the wfc-timeout
1383               parameter.
1384
1385           --wait-after-sb
1386               This parameter causes DRBD to continue waiting in the init
1387               script even when a split-brain situation has been detected, and
1388               the nodes therefore refuse to connect to each other.
1389
1390           --wfc-timeout timeout
1391               Define how long the init script waits until all peers are
1392               connected. This can be useful in combination with a cluster
1393               manager which cannot manage DRBD resources: when the cluster
1394               manager starts, the DRBD resources will already be up and
1395               running. With a more capable cluster manager such as Pacemaker,
1396               it makes more sense to let the cluster manager control DRBD
1397               resources. The timeout is specified in seconds. The default
1398               value is 0, which stands for an infinite timeout. Also see the
1399               degr-wfc-timeout parameter.
1400
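           For example, an init script might wait up to 60 seconds for all
           peers of a (hypothetical) resource r0 to connect:

               drbdsetup wait-connect-resource r0 --wfc-timeout=60
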
1401
1402       drbdsetup forget-peer resource peer_node_id
1403           The forget-peer command removes all traces of a peer node from the
1404           meta-data. It frees a bitmap slot in the meta-data and makes it
1405           available for further bitmap slot allocation in case a so-far
1406           unseen node connects.
1407
1408           The connection must be taken down before this command may be used.
1409           If the peer reconnects at a later point, a bitmap-based resync
1410           will be turned into a full sync.
1411
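           For example, to free the bitmap slot used for peer node 2 of a
           (hypothetical) resource r0 after taking the connection down with
           the disconnect command:

               drbdsetup disconnect r0 2
               drbdsetup forget-peer r0 2
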
1412       drbdsetup rename-resource resource new_name
1413           Change the name of resource to new_name on the local node. Note
1414           that, since there is no concept of resource names in DRBD's network
1415           protocol, it is technically possible to have different names for a
1416           resource on different nodes. However, it is strongly recommended to
1417           issue the same rename-resource command on all nodes to have
1418           consistent naming across the cluster.
1419
1420           A rename event will be issued on the events2 stream to notify users
1421           of the new name.
1422
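           For example, to rename a (hypothetical) resource r0 to r1
           consistently, run the same command on every node:

               drbdsetup rename-resource r0 r1
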

EXAMPLES

1424       Please see the DRBD User's Guide[1] for examples.
1425

VERSION

1427       This document was revised for version 9.0.0 of the DRBD distribution.
1428

AUTHOR

1430       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1431       Ellenberg <lars.ellenberg@linbit.com>.
1432

REPORTING BUGS

1434       Report bugs to <drbd-user@lists.linbit.com>.
1435

COPYRIGHT

1437       Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1438       Lars Ellenberg. This is free software; see the source for copying
1439       conditions. There is NO warranty; not even for MERCHANTABILITY or
1440       FITNESS FOR A PARTICULAR PURPOSE.
1441

SEE ALSO

1443       drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1444       Site[2]
1445

NOTES

1447        1. DRBD User's Guide
1448           http://www.drbd.org/users-guide/
1449
1450        2. DRBD Web Site
1451           http://www.drbd.org/
1452
1453
1454
1455DRBD 9.0.x                      17 January 2018                   DRBDSETUP(8)