1DRBDSETUP(8) System Administration DRBDSETUP(8)
2
3
4
5NAME
6 drbdsetup - Configure the DRBD kernel module
7
8SYNOPSIS
9 drbdsetup command {argument...} [option...]
10
11DESCRIPTION
12 The drbdsetup utility serves to configure the DRBD kernel module and to
13 show its current configuration. Users usually interact with the drbdadm
14 utility, which provides a more high-level interface to DRBD than
15 drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
16 drbdsetup.)
17
18 Some option arguments have a default scale which applies when a plain
19 number is specified (for example Kilo, or 1024 times the numeric
20 value). Such default scales can be overridden by using a suffix (for
21 example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
22 and G = 1024 M are supported.
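
      For example, using the hypothetical resource name r0 and peer node-id 1,
      the following two net-options invocations both request a 2-MiB TCP
      receive buffer (the rcvbuf-size option's default unit is bytes), once as
      a plain number and once with a suffix:

          drbdsetup net-options r0 1 --rcvbuf-size 2097152
          drbdsetup net-options r0 1 --rcvbuf-size 2M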
23
24COMMANDS
25 drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
26 drbdsetup disk-options minor
27 The attach command attaches a lower-level device to an existing
28 replicated device. The disk-options command changes the disk
29 options of an attached lower-level device. In either case, the
30 replicated device must have been created with drbdsetup new-minor.
31
32 Both commands refer to the replicated device by its minor number.
33 lower_dev is the name of the lower-level device. meta_data_dev is
34 the name of the device containing the metadata, and may be the same
35 as lower_dev. meta_data_index is either a numeric metadata index,
36 or the keyword internal for internal metadata, or the keyword
37 flexible for variable-size external metadata. Available options:
38
39 --al-extents extents
40 DRBD automatically maintains a "hot" or "active" disk area
41 likely to be written to again soon based on the recent write
42 activity. The "active" disk area can be written to immediately,
43 while "inactive" disk areas must be "activated" first, which
44 requires a meta-data write. We also refer to this active disk
45 area as the "activity log".
46
47 The activity log saves meta-data writes, but the whole log must
48 be resynced upon recovery of a failed node. The size of the
49 activity log is a major factor of how long a resync will take
50 and how fast a replicated disk will become consistent after a
51 crash.
52
53 The activity log consists of a number of 4-Megabyte segments;
54 the al-extents parameter determines how many of those segments
55 can be active at the same time. The default value for
56 al-extents is 1237, with a minimum of 7 and a maximum of 65536.
57
58 Note that the effective maximum may be smaller, depending on
59 how you created the device meta data; see also drbdmeta(8). The
60 effective maximum is 919 * (available on-disk activity-log
61 ring-buffer area/4kB - 1); the default 32kB ring buffer yields
62 a maximum of 6433 (which covers more than 25 GiB of data). We
63 recommend keeping this well within the amount your backend
64 storage and replication link are able to resync within about
65 5 minutes.
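
      As a sketch (the minor number 0 is only an example), the number of
      active extents of an already attached device could be raised with the
      disk-options command:

          drbdsetup disk-options 0 --al-extents 3389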
66
67 --al-updates {yes | no}
68 With this parameter, the activity log can be turned off
69 entirely (see the al-extents parameter). This will speed up
70 writes because fewer meta-data writes will be necessary, but
71 the entire device needs to be resynchronized upon recovery of a
72 failed primary node. The default value for al-updates is yes.
73
74 --disk-barrier,
75 --disk-flushes,
76 --disk-drain
77 DRBD has three methods of handling the ordering of dependent
78 write requests:
79
80 disk-barrier
81 Use disk barriers to make sure that requests are written to
82 disk in the right order. Barriers ensure that all requests
83 submitted before a barrier make it to the disk before any
84 requests submitted after the barrier. This is implemented
85 using 'tagged command queuing' on SCSI devices and 'native
86 command queuing' on SATA devices. Only some devices and
87 device stacks support this method. The device mapper (LVM)
88 only supports barriers in some configurations.
89
90 Note that on systems which do not support disk barriers,
91 enabling this option can lead to data loss or corruption.
92 Until DRBD 8.4.1, disk-barrier was turned on if the I/O
93 stack below DRBD did support barriers. Kernels since
94 linux-2.6.36 (or 2.6.32 RHEL6) no longer allow detecting whether
95 barriers are supported. Since drbd-8.4.2, this option is
96 off by default and needs to be enabled explicitly.
97
98 disk-flushes
99 Use disk flushes between dependent write requests, also
100 referred to as 'force unit access' by drive vendors. This
101 forces all data to disk. This option is enabled by default.
102
103 disk-drain
104 Wait for the request queue to "drain" (that is, wait for
105 the requests to finish) before submitting a dependent write
106 request. This method requires that requests are stable on
107 disk when they finish. Before DRBD 8.0.9, this was the only
108 method implemented. This option is enabled by default. Do
109 not disable in production environments.
110
111 From these three methods, drbd will use the first that is
112 enabled and supported by the backing storage device. If all
113 three of these options are turned off, DRBD will submit write
114 requests without bothering about dependencies. Depending on the
115 I/O stack, write requests can be reordered, and they can be
116 submitted in a different order on different cluster nodes. This
117 can result in data loss or corruption. Therefore, turning off
118 all three methods of controlling write ordering is strongly
119 discouraged.
120
121 A general guideline for configuring write ordering is to use
122 disk barriers or disk flushes when using ordinary disks (or an
123 ordinary disk array) with a volatile write cache. On storage
124 without cache or with a battery backed write cache, disk
125 draining can be a reasonable choice.
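
      For example, on storage with a battery-backed write cache, barriers and
      flushes could be turned off for a hypothetical minor 0 (only safe if the
      cache really is non-volatile):

          drbdsetup disk-options 0 --disk-barrier=no --disk-flushes=no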
126
127 --disk-timeout
128 If the lower-level device on which a DRBD device stores its
129 data does not finish an I/O request within the defined
130 disk-timeout, DRBD treats this as a failure. The lower-level
131 device is detached, and the device's disk state advances to
132 Diskless. If DRBD is connected to one or more peers, the failed
133 request is passed on to one of them.
134
135 This option is dangerous and may lead to kernel panic!
136
137 "Aborting" requests, or force-detaching the disk, is intended
138 for completely blocked/hung local backing devices which no
139 longer complete requests at all, not even error completions.
140 In this situation, usually a hard-reset and failover is the
141 only way out.
142
143 By "aborting", basically faking a local error-completion, we
144 allow for a more graceful switchover by cleanly migrating
145 services. Still, the affected node has to be rebooted "soon".
146
147 By completing these requests, we allow the upper layers to
148 re-use the associated data pages.
149
150 If later the local backing device "recovers", and now DMAs some
151 data from disk into the original request pages, in the best
152 case it will just put random data into unused pages; but
153 typically it will corrupt meanwhile completely unrelated data,
154 causing all sorts of damage.
155
156 This means that a delayed successful completion, especially of
157 READ requests, is a reason to panic(). We assume that a delayed
158 *error* completion is OK, though we still will complain noisily
159 about it.
160
161 The default value of disk-timeout is 0, which stands for an
162 infinite timeout. Timeouts are specified in units of 0.1
163 seconds. This option is available since DRBD 8.3.12.
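
      For instance, a 60-second timeout would be expressed as 600 (the minor
      number is again hypothetical):

          drbdsetup disk-options 0 --disk-timeout 600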
164
165 --md-flushes
166 Enable disk flushes and disk barriers on the meta-data device.
167 This option is enabled by default. See the disk-flushes
168 parameter.
169
170 --on-io-error handler
171 Configure how DRBD reacts to I/O errors on a lower-level
172 device. The following policies are defined:
173
174 pass_on
175 Change the disk status to Inconsistent, mark the failed
176 block as inconsistent in the bitmap, and retry the I/O
177 operation on a remote cluster node.
178
179 call-local-io-error
180 Call the local-io-error handler (see the handlers section).
181
182 detach
183 Detach the lower-level device and continue in diskless
184 mode.
185
186
187 --read-balancing policy
188 Distribute read requests among cluster nodes as defined by
189 policy. The supported policies are prefer-local (the default),
190 prefer-remote, round-robin, least-pending,
191 when-congested-remote, 32K-striping, 64K-striping,
192 128K-striping, 256K-striping, 512K-striping and 1M-striping.
193
194 This option is available since DRBD 8.4.1.
195
196 --resync-after minor
197 Define that a device should only resynchronize after the
198 specified other device. By default, no order between devices is
199 defined, and all devices will resynchronize in parallel.
200 Depending on the configuration of the lower-level devices, and
201 the available network and disk bandwidth, this can slow down
202 the overall resync process. This option can be used to form a
203 chain or tree of dependencies among devices.
204
205 --size size
206 Specify the size of the lower-level device explicitly instead
207 of determining it automatically. The device size must be
208 determined once and is remembered for the lifetime of the
209 device. In order to determine it automatically, all the
210 lower-level devices on all nodes must be attached, and all
211 nodes must be connected. If the size is specified explicitly,
212 this is not necessary. The size value is assumed to be in units
213 of sectors (512 bytes) by default.
214
215 --discard-zeroes-if-aligned {yes | no}
216 There are several aspects to discard/trim/unmap support on
217 linux block devices. Even if discard is supported in general,
218 it may fail silently, or may partially ignore discard requests.
219 Devices also announce whether reading from unmapped blocks
220 returns defined data (usually zeroes), or undefined data
221 (possibly old data, possibly garbage).
222
223 If on different nodes, DRBD is backed by devices with differing
224 discard characteristics, discards may lead to data divergence
225 (old data or garbage left over on one backend, zeroes due to
226 unmapped areas on the other backend). Online verify would now
227 potentially report tons of spurious differences. While probably
228 harmless for most use cases (fstrim on a file system), DRBD
229 cannot have that.
230
231 To play it safe, we have to disable discard support if our local
232 backend (on a Primary) does not support
233 "discard_zeroes_data=true". We also have to translate discards
234 to explicit zero-out on the receiving side, unless the
235 receiving side (Secondary) supports "discard_zeroes_data=true",
236 thereby allocating areas that were supposed to be unmapped.
237
238 There are some devices (notably the LVM/DM thin provisioning)
239 that are capable of discard, but announce
240 discard_zeroes_data=false. In the case of DM-thin, discards
241 aligned to the chunk size will be unmapped, and reading from
242 unmapped sectors will return zeroes. However, unaligned partial
243 head or tail areas of discard requests will be silently
244 ignored.
245
246 If we now add a helper to explicitly zero-out these unaligned
247 partial areas, while passing on the discard of the aligned full
248 chunks, we effectively achieve discard_zeroes_data=true on such
249 devices.
250
251 Setting discard-zeroes-if-aligned to yes will allow DRBD to use
252 discards, and to announce discard_zeroes_data=true, even on
253 backends that announce discard_zeroes_data=false.
254
255 Setting discard-zeroes-if-aligned to no will cause DRBD to
256 always fall-back to zero-out on the receiving side, and to not
257 even announce discard capabilities on the Primary, if the
258 respective backend announces discard_zeroes_data=false.
259
260 We used to ignore the discard_zeroes_data setting completely.
261 To not break established and expected behaviour, and suddenly
262 cause fstrim on thin-provisioned LVs to run out-of-space
263 instead of freeing up space, the default value is yes.
264
265 This option is available since 8.4.7.
266
267 --rs-discard-granularity byte
268 When rs-discard-granularity is set to a non-zero, positive
269 value then DRBD tries to do a resync operation in requests of
270 this size. In case such a block contains only zero bytes on the
271 sync source node, the sync target node will issue a
272 discard/trim/unmap command for the area.
273
274 The value is constrained by the discard granularity of the
275 backing block device. If rs-discard-granularity is not a
276 multiple of the discard granularity of the backing block
277 device, DRBD rounds it up. The feature only becomes active if the
278 backing block device reads back zeroes after a discard command.
279
280 The default value is 0. This option is available since
281 8.4.7.
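
      As a rough sketch of the attach command with these options, the
      following attaches a hypothetical lower-level device with internal meta
      data to minor 0 and requests 64-KiB resync discards (device name and
      values are examples only):

          drbdsetup attach 0 /dev/sdb1 /dev/sdb1 internal --rs-discard-granularity 65536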
282
283 drbdsetup peer-device-options resource peer_node_id volume
284 These are options that affect the peer's device.
285
286 --c-delay-target delay_target,
287 --c-fill-target fill_target,
288 --c-max-rate max_rate,
289 --c-plan-ahead plan_time
290 Dynamically control the resync speed. This mechanism is enabled
291 by setting the c-plan-ahead parameter to a positive value. The
292 goal is to either fill the buffers along the data path with a
293 defined amount of data if c-fill-target is defined, or to have
294 a defined delay along the path if c-delay-target is defined.
295 The maximum bandwidth is limited by the c-max-rate parameter.
296
297 The c-plan-ahead parameter defines how fast drbd adapts to
298 changes in the resync speed. It should be set to five times the
299 network round-trip time or more. Common values for
300 c-fill-target for "normal" data paths range from 4K to 100K. If
301 drbd-proxy is used, it is advised to use c-delay-target instead
302 of c-fill-target. The c-delay-target parameter is used if the
303 c-fill-target parameter is undefined or set to 0. The
304 c-delay-target parameter should be set to five times the
305 network round-trip time or more. The c-max-rate option should
306 be set to either the bandwidth available between the DRBD-hosts
307 and the machines hosting DRBD-proxy, or to the available disk
308 bandwidth.
309
310 The default values of these parameters are: c-plan-ahead = 20
311 (in units of 0.1 seconds), c-fill-target = 0 (in units of
312 sectors), c-delay-target = 1 (in units of 0.1 seconds), and
313 c-max-rate = 102400 (in units of KiB/s).
314
315 Dynamic resync speed control is available since DRBD 8.3.9.
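
      As an illustration (resource name, peer node-id, and volume number are
      placeholders), the controller could be tuned for a link that should
      resync at up to 100 MiB/s:

          drbdsetup peer-device-options r0 1 0 --c-plan-ahead 20 \
              --c-fill-target 100K --c-max-rate 102400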
316
317 --c-min-rate min_rate
318 A node which is primary and sync-source has to schedule
319 application I/O requests and resync I/O requests. The
320 c-min-rate parameter limits how much bandwidth is available for
321 resync I/O; the remaining bandwidth is used for application
322 I/O.
323
324 A c-min-rate value of 0 means that there is no limit on the
325 resync I/O bandwidth. This can slow down application I/O
326 significantly. Use a value of 1 (1 KiB/s) for the lowest
327 possible resync rate.
328
329 The default value of c-min-rate is 4096, in units of KiB/s.
330
331 --resync-rate rate
332 Define how much bandwidth DRBD may use for resynchronizing.
333 DRBD allows "normal" application I/O even during a resync. If
334 the resync takes up too much bandwidth, application I/O can
335 become very slow. This parameter allows that to be avoided. Please
336 note that this option only works when the dynamic resync
337 controller is disabled.
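
      For a fixed resync rate, the dynamic controller can be disabled by
      setting c-plan-ahead to 0, for example (names and rate are
      illustrative):

          drbdsetup peer-device-options r0 1 0 --c-plan-ahead 0 --resync-rate 10M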
338
339 drbdsetup check-resize minor
340 Remember the current size of the lower-level device of the
341 specified replicated device. Used by drbdadm. The size information
342 is stored in file /var/lib/drbd/drbd-minor-minor.lkbd.
343
344 drbdsetup new-peer resource peer_node_id,
345 drbdsetup net-options resource peer_node_id
346 The new-peer command creates a connection within a resource. The
347 resource must have been created with drbdsetup new-resource. The
348 net-options command changes the network options of an existing
349 connection. Before a connection can be activated with the connect
350 command, at least one path needs to be added with the new-path command.
351 Available options:
352
353 --after-sb-0pri policy
354 Define how to react if a split-brain scenario is detected and
355 none of the two nodes is in primary role. (We detect
356 split-brain scenarios when two nodes connect; split-brain
357 decisions are always between two nodes.) The defined policies
358 are:
359
360 disconnect
361 No automatic resynchronization; simply disconnect.
362
363 discard-younger-primary,
364 discard-older-primary
365 Resynchronize from the node which became primary first
366 (discard-younger-primary) or last (discard-older-primary).
367 If both nodes became primary independently, the
368 discard-least-changes policy is used.
369
370 discard-zero-changes
371 If only one of the nodes wrote data since the split brain
372 situation was detected, resynchronize from this node to the
373 other. If both nodes wrote data, disconnect.
374
375 discard-least-changes
376 Resynchronize from the node with more modified blocks.
377
378 discard-node-nodename
379 Always resynchronize to the named node.
380
381 --after-sb-1pri policy
382 Define how to react if a split-brain scenario is detected, with
383 one node in primary role and one node in secondary role. (We
384 detect split-brain scenarios when two nodes connect, so
385 split-brain decisions are always between two nodes.) The defined
386 policies are:
387
388 disconnect
389 No automatic resynchronization, simply disconnect.
390
391 consensus
392 Discard the data on the secondary node if the after-sb-0pri
393 algorithm would also discard the data on the secondary
394 node. Otherwise, disconnect.
395
396 violently-as0p
397 Always take the decision of the after-sb-0pri algorithm,
398 even if it causes an erratic change of the primary's view
399 of the data. This is only useful if a single-node file
400 system (i.e., not OCFS2 or GFS) with the
401 allow-two-primaries flag is used. This option can cause the
402 primary node to crash, and should not be used.
403
404 discard-secondary
405 Discard the data on the secondary node.
406
407 call-pri-lost-after-sb
408 Always take the decision of the after-sb-0pri algorithm. If
409 the decision is to discard the data on the primary node,
410 call the pri-lost-after-sb handler on the primary node.
411
412 --after-sb-2pri policy
413 Define how to react if a split-brain scenario is detected and
414 both nodes are in primary role. (We detect split-brain
415 scenarios when two nodes connect, so split-brain decisions are
416 always between two nodes.) The defined policies are:
417
418 disconnect
419 No automatic resynchronization, simply disconnect.
420
421 violently-as0p
422 See the violently-as0p policy for after-sb-1pri.
423
424 call-pri-lost-after-sb
425 Call the pri-lost-after-sb helper program on one of the
426 machines unless that machine can demote to secondary. The
427 helper program is expected to reboot the machine, which
428 brings the node into a secondary role. Which machine runs
429 the helper program is determined by the after-sb-0pri
430 strategy.
431
432 --allow-two-primaries
433 The most common way to configure DRBD devices is to allow only
434 one node to be primary (and thus writable) at a time.
435
436 In some scenarios it is preferable to allow two nodes to be
437 primary at once; a mechanism outside of DRBD then must make
438 sure that writes to the shared, replicated device happen in a
439 coordinated way. This can be done with a shared-storage cluster
440 file system like OCFS2 and GFS, or with virtual machine images
441 and a virtual machine manager that can migrate virtual machines
442 between physical machines.
443
444 The allow-two-primaries parameter tells DRBD to allow two nodes
445 to be primary at the same time. Never enable this option when
446 using a non-distributed file system; otherwise, data corruption
447 and node crashes will result!
448
449 --always-asbp
450 Normally the automatic after-split-brain policies are only used
451 if current states of the UUIDs do not indicate the presence of
452 a third node.
453
454 With this option you request that the automatic
455 after-split-brain policies are used as long as the data sets of
456 the nodes are somehow related. This might cause a full sync, if
457 the UUIDs indicate the presence of a third node. (Or double
458 faults led to strange UUID sets.)
459
460 --connect-int time
461 As soon as a connection between two nodes is configured with
462 drbdsetup connect, DRBD immediately tries to establish the
463 connection. If this fails, DRBD waits for connect-int seconds
464 and then repeats. The default value of connect-int is 10
465 seconds.
466
467 --cram-hmac-alg hash-algorithm
468 Configure the hash-based message authentication code (HMAC) or
469 secure hash algorithm to use for peer authentication. The
470 kernel supports a number of different algorithms, some of which
471 may be loadable as kernel modules. See the shash algorithms
472 listed in /proc/crypto. By default, cram-hmac-alg is unset.
473 Peer authentication also requires a shared-secret to be
474 configured.
475
476 --csums-alg hash-algorithm
477 Normally, when two nodes resynchronize, the sync target
478 requests a piece of out-of-sync data from the sync source, and
479 the sync source sends the data. With many usage patterns, a
480 significant number of those blocks will actually be identical.
481
482 When a csums-alg algorithm is specified, when requesting a
483 piece of out-of-sync data, the sync target also sends along a
484 hash of the data it currently has. The sync source compares
485 this hash with its own version of the data. It sends the sync
486 target the new data if the hashes differ, and tells it that the
487 data are the same otherwise. This reduces the network bandwidth
488 required, at the cost of higher cpu utilization and possibly
489 increased I/O on the sync target.
490
491 The csums-alg can be set to one of the secure hash algorithms
492 supported by the kernel; see the shash algorithms listed in
493 /proc/crypto. By default, csums-alg is unset.
494
495 --csums-after-crash-only
496 Enabling this option (and csums-alg, above) makes it possible
497 to use the checksum based resync only for the first resync
498 after a primary crash, but not for later "network hiccups".
499
500 In most cases, blocks that are marked as need-to-be-resynced are
501 in fact changed, so calculating checksums, and both reading and
502 writing the blocks on the resync target is all effective
503 overhead.
504
505 The advantage of checksum based resync is mostly after primary
506 crash recovery, where the recovery marked larger areas (those
507 covered by the activity log) as need-to-be-resynced, just in
508 case. Introduced in 8.4.5.
509
510 --data-integrity-alg alg
511 DRBD normally relies on the data integrity checks built into
512 the TCP/IP protocol, but if a data integrity algorithm is
513 configured, it will additionally use this algorithm to make
514 sure that the data received over the network match what the
515 sender has sent. If a data integrity error is detected, DRBD
516 will close the network connection and reconnect, which will
517 trigger a resync.
518
519 The data-integrity-alg can be set to one of the secure hash
520 algorithms supported by the kernel; see the shash algorithms
521 listed in /proc/crypto. By default, this mechanism is turned
522 off.
523
524 Because of the CPU overhead involved, we recommend not to use
525 this option in production environments. Also see the notes on
526 data integrity below.
527
528 --fencing fencing_policy
529 Fencing is a preventive measure to avoid situations where both
530 nodes are primary and disconnected. This is also known as a
531 split-brain situation. DRBD supports the following fencing
532 policies:
533
534 dont-care
535 No fencing actions are taken. This is the default policy.
536
537 resource-only
538 If a node becomes a disconnected primary, it tries to fence
539 the peer. This is done by calling the fence-peer handler.
540 The handler is supposed to reach the peer over an
541 alternative communication path and call 'drbdadm outdate
542 minor' there.
543
544 resource-and-stonith
545 If a node becomes a disconnected primary, it freezes all
546 its IO operations and calls its fence-peer handler. The
547 fence-peer handler is supposed to reach the peer over an
548 alternative communication path and call 'drbdadm outdate
549 minor' there. In case it cannot do that, it should stonith
550 the peer. IO is resumed as soon as the situation is
551 resolved. In case the fence-peer handler fails, I/O can be
552 resumed manually with 'drbdadm resume-io'.
553
554 --ko-count number
555 If a secondary node fails to complete a write request in
556 ko-count times the timeout parameter, it is excluded from the
557 cluster. The primary node then sets the connection to this
558 secondary node to Standalone. To disable this feature, you
559 should explicitly set it to 0; defaults may change between
560 versions.
561
562 --max-buffers number
563 Limits the memory usage per DRBD minor device on the receiving
564 side, or for internal buffers during resync or online-verify.
565 Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum
566 possible setting is hard coded to 32 (=128 KiB). These buffers
567 are used to hold data blocks while they are written to/read
568 from disk. To avoid possible distributed deadlocks on
569 congestion, this setting is used as a throttle threshold rather
570 than a hard limit. Once more than max-buffers pages are in use,
571 further allocation from this pool is throttled. You want to
572 increase max-buffers if you cannot saturate the IO backend on
573 the receiving side.
574
575 --max-epoch-size number
576 Define the maximum number of write requests DRBD may issue
577 before issuing a write barrier. The default value is 2048, with
578 a minimum of 1 and a maximum of 20000. Setting this parameter
579 to a value below 10 is likely to decrease performance.
580
581 --on-congestion policy,
582 --congestion-fill threshold,
583 --congestion-extents threshold
584 By default, DRBD blocks when the TCP send queue is full. This
585 prevents applications from generating further write requests
586 until more buffer space becomes available again.
587
588 When DRBD is used together with DRBD-proxy, it can be better to
589 use the pull-ahead on-congestion policy, which can switch DRBD
590 into ahead/behind mode before the send queue is full. DRBD then
591 records the differences between itself and the peer in its
592 bitmap, but it no longer replicates them to the peer. When
593 enough buffer space becomes available again, the node
594 resynchronizes with the peer and switches back to normal
595 replication.
596
597 This has the advantage of not blocking application I/O even
598 when the queues fill up, and the disadvantage that peer nodes
599 can fall behind much further. Also, while resynchronizing, peer
600 nodes will become inconsistent.
601
602 The available congestion policies are block (the default) and
603 pull-ahead. The congestion-fill parameter defines how much data
604 is allowed to be "in flight" in this connection. The default
605 value is 0, which disables this mechanism of congestion
606 control, with a maximum of 10 GiBytes. The congestion-extents
607 parameter defines how many bitmap extents may be active before
608 switching into ahead/behind mode, with the same default and
609 limits as the al-extents parameter. The congestion-extents
610 parameter is effective only when set to a value smaller than
611 al-extents.
612
613 Ahead/behind mode is available since DRBD 8.3.10.
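
      A sketch for a DRBD-proxy setup (resource name, node-id, and threshold
      are illustrative) that switches to ahead/behind mode once a large amount
      of data is in flight:

          drbdsetup net-options r0 1 --on-congestion pull-ahead --congestion-fill 1G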
614
615 --ping-int interval
616 When the TCP/IP connection to a peer is idle for more than
617 ping-int seconds, DRBD will send a keep-alive packet to make
618 sure that a failed peer or network connection is detected
619 reasonably soon. The default value is 10 seconds, with a
620 minimum of 1 and a maximum of 120 seconds. The unit is seconds.
621
622 --ping-timeout timeout
623 Define the timeout for replies to keep-alive packets. If the
624 peer does not reply within ping-timeout, DRBD will close and
625 try to reestablish the connection. The default value is 0.5
626 seconds, with a minimum of 0.1 seconds and a maximum of 3
627 seconds. The unit is tenths of a second.
628
629 --socket-check-timeout timeout
630 In setups involving a DRBD-proxy and connections that
631 experience a lot of buffer-bloat, it might be necessary to set
632 ping-timeout to an unusually high value. By default DRBD uses the
633 same value to wait if a newly established TCP-connection is
634 stable. Since the DRBD-proxy is usually located in the same
635 data center such a long wait time may hinder DRBD's connect
636 process.
637
638 In such setups, socket-check-timeout should be set to at least
639 the round-trip time between DRBD and DRBD-proxy, i.e. in
640 most cases to 1.
641
642 The default unit is tenths of a second, the default value is 0
643 (which causes DRBD to use the value of ping-timeout instead).
644 Introduced in 8.4.5.
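
      In such a proxy setup one might, for example, combine a long
      ping-timeout with a short socket check (both in tenths of a second;
      names are hypothetical):

          drbdsetup net-options r0 1 --ping-timeout 30 --socket-check-timeout 1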
645
646 --protocol name
647 Use the specified protocol on this connection. The supported
648 protocols are:
649
650 A
651 Writes to the DRBD device complete as soon as they have
652 reached the local disk and the TCP/IP send buffer.
653
654 B
655 Writes to the DRBD device complete as soon as they have
656 reached the local disk, and all peers have acknowledged the
657 receipt of the write requests.
658
659 C
660 Writes to the DRBD device complete as soon as they have
661 reached the local and all remote disks.
662
663
664 --rcvbuf-size size
665 Configure the size of the TCP/IP receive buffer. A value of 0
666 (the default) causes the buffer size to adjust dynamically.
667 This parameter usually does not need to be set, but it can be
668 set to a value up to 10 MiB. The default unit is bytes.
669
670 --rr-conflict policy
671 This option helps to solve cases in which the outcome of the
672 resync decision is incompatible with the current role
673 assignment in the cluster. The defined policies are:
674
675 disconnect
676 No automatic resynchronization, simply disconnect.
677
678 violently
679 Resync to the primary node is allowed, violating the
680 assumption that data on a block device are stable for one
681 of the nodes. Do not use this option, it is dangerous.
682
683 call-pri-lost
684 Call the pri-lost handler on one of the machines. The
685 handler is expected to reboot the machine, which puts it
686 into secondary role.
687
688 --shared-secret secret
689 Configure the shared secret used for peer authentication. The
690 secret is a string of up to 64 characters. Peer authentication
691 also requires the cram-hmac-alg parameter to be set.
692
693 --sndbuf-size size
694 Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
695 / 8.2.7, a value of 0 (the default) causes the buffer size to
696 adjust dynamically. Values below 32 KiB are harmful to the
697 throughput on this connection. Large buffer sizes can be useful
698 especially when protocol A is used over high-latency networks;
699 the maximum value supported is 10 MiB.
700
701 --tcp-cork
702 By default, DRBD uses the TCP_CORK socket option to prevent the
703 kernel from sending partial messages; this results in fewer and
704 bigger packets on the network. Some network stacks can perform
705 worse with this optimization. On these, the tcp-cork parameter
706 can be used to turn this optimization off.
707
708 --timeout time
709 Define the timeout for replies over the network: if a peer node
710 does not send an expected reply within the specified timeout,
711 it is considered dead and the TCP/IP connection is closed. The
712 timeout value must be lower than connect-int and lower than
713 ping-int. The default is 6 seconds; the value is specified in
714 tenths of a second.
715
716 --use-rle
717 Each replicated device on a cluster node has a separate bitmap
718 for each of its peer devices. The bitmaps are used for tracking
719 the differences between the local and peer device: depending on
720 the cluster state, a disk range can be marked as different from
721 the peer in the device's bitmap, in the peer device's bitmap,
722 or in both bitmaps. When two cluster nodes connect, they
723 exchange each other's bitmaps, and they each compute the union
724 of the local and peer bitmap to determine the overall
725 differences.
726
727 Bitmaps of very large devices are also relatively large, but
728 they usually compress very well using run-length encoding. This
729 can save time and bandwidth for the bitmap transfers.
730
731 The use-rle parameter determines if run-length encoding should
732 be used. It is on by default since DRBD 8.4.0.
733
734 --verify-alg hash-algorithm
735 Online verification (drbdadm verify) computes and compares
736 checksums of disk blocks (i.e., hash values) in order to detect
737 if they differ. The verify-alg parameter determines which
738 algorithm to use for these checksums. It must be set to one of
739 the secure hash algorithms supported by the kernel before
740 online verify can be used; see the shash algorithms listed in
741 /proc/crypto.
742
743 We recommend scheduling online verifications regularly during
744 low-load periods, for example once a month. Also see the notes
745 on data integrity below.
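
      Putting several of these options together, a connection could be created
      with protocol C, peer authentication, and an online-verify algorithm
      roughly like this (resource name, node-id, and secret are placeholders;
      the algorithms must be present in /proc/crypto):

          drbdsetup new-peer r0 1 --protocol C --cram-hmac-alg sha256 \
              --shared-secret secret123 --verify-alg sha256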
746
747 drbdsetup new-path resource peer_node_id local-addr remote-addr
748 The new-path command creates a path within a connection. The
749 connection must have been created with drbdsetup new-peer.
750 The local-addr and remote-addr arguments specify the local and remote
751 protocol, network address, and port in the format
752 [address-family:]address[:port]. The address families ipv4, ipv6,
753 ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
754 (Infiniband Sockets Direct Protocol), and sci are supported (sci is
755 an alias for ssocks). If no address family is specified, ipv4 is
756 assumed. For all address families except ipv6, the address uses
757 IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
758 is enclosed in brackets and uses IPv6 address notation (for
759 example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.
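
      For example, a path between two hypothetical hosts could be added like
      this (the addresses are borrowed from the status example further down):

          drbdsetup new-path r0 1 ipv4:192.168.123.4:7788 ipv4:192.168.123.2:7788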
760
761 drbdsetup connect resource peer_node_id
762 The connect command activates a connection. That means that the
763 DRBD driver will bind and listen on all local addresses of the
764 connection's paths. It will begin to try to establish one or more
765 paths of the connection. Available options:
766
767 --tentative
768 Only determine if a connection to the peer can be established
769 and if a resync is necessary (and in which direction) without
770 actually establishing the connection or starting the resync.
771 Check the system log to see what DRBD would do without the
772 --tentative option.
773
774 --discard-my-data
775 Discard the local data and resynchronize with the peer that has
776 the most up-to-date data. Use this option to manually recover
777 from a split-brain situation.
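
      For instance, after a split brain, the node whose changes are to be
      discarded might reconnect with (resource name and node-id are
      placeholders):

          drbdsetup connect r0 1 --discard-my-data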
778
779 drbdsetup del-peer resource peer_node_id
780 The del-peer command removes a connection from a resource.
781
782 drbdsetup del-path resource peer_node_id local-addr remote-addr
783 The del-path command removes a path from a connection. Please note
784 that it fails if the path is necessary to keep a connected
785 connection intact. In order to remove all paths, disconnect the
786 connection first.
787
788 drbdsetup cstate resource peer_node_id
789 Show the current state of a connection. The connection is
790 identified by the node-id of the peer; see the drbdsetup connect
791 command.
792
793 drbdsetup del-minor minor
794 Remove a replicated device. No lower-level device may be attached;
795 see drbdsetup detach.
796
797 drbdsetup del-resource resource
798 Remove a resource. All volumes and connections must be removed
799 first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
800 drbdsetup down can be used to remove a resource together with all
801 its volumes and connections.
802
803 drbdsetup detach minor
804 Detach the lower-level device of a replicated device. Available
805 options:
806
807 --force
808 Force the detach and return immediately. This puts the
809 lower-level device into failed state until all pending I/O has
810 completed, and then detaches the device. Any I/O not yet
811 submitted to the lower-level device (for example, because I/O
812 on the device was suspended) is assumed to have failed.
813
814
815 drbdsetup disconnect resource peer_node_id
816 Remove a connection to a peer host. The connection is identified by
817 the node-id of the peer; see the drbdsetup connect command.
818
819 drbdsetup down {resource | all}
820 Take a resource down by removing all volumes, connections, and the
821 resource itself.
822
823 drbdsetup dstate minor
824 Show the current disk state of a lower-level device.
825
826 drbdsetup events2 {resource | all}
827 Show the current state of all configured DRBD objects, followed by
828 all changes to the state.
829
830 The output format is meant to be human as well as machine readable.
831 The line starts with a word that indicates the kind of event:
832 exists for an existing object; create, destroy, and change if an
833 object is created, destroyed, or changed; or call or response if an
834 event handler is called or it returns. The second word indicates
835 the object the event applies to: resource, device, connection,
836 peer-device, helper, or a dash (-) to indicate that the current
837 state has been dumped completely.
838
839 The remaining words identify the object and describe the state that
840 the object is in. Available options:
841
842 --now
843 Terminate after reporting the current state. The default is to
844 continuously listen and report state changes.
845
846 --statistics
847 Include statistics in the output.
848
849
850 drbdsetup get-gi resource peer_node_id volume
851 Show the data generation identifiers for a device on a particular
852 connection. The device is identified by its volume number. The
853 connection is identified by its endpoints; see the drbdsetup
854 connect command.
855
856 The output consists of the current UUID, bitmap UUID, and the first
857 two history UUIDs, followed by a set of flags. The current UUID and
858 history UUIDs are device specific; the bitmap UUID and flags are
859 peer device specific. This command only shows the first two history
860 UUIDs. Internally, DRBD maintains one history UUID for each
861 possible peer device.
862
863 drbdsetup invalidate minor
864 Replace the local data of a device with that of a peer. All the
865 local data will be marked out-of-sync, and a resync with the
866 specified peer device will be initiated.
867
868 drbdsetup invalidate-remote resource peer_node_id volume
869 Replace a peer device's data of a resource with the local data. The
870 peer device's data will be marked out-of-sync, and a resync from
871 the local node to the specified peer will be initiated.
872
873 drbdsetup new-current-uuid minor
874 Generate a new current UUID and rotate all other UUID values. This
875 has at least two use cases, namely to skip the initial sync, and to
876 reduce network bandwidth when starting in a single node
877 configuration and then later (re-)integrating a remote site.
878
879 Available option:
880
881 --clear-bitmap
882 Clears the sync bitmap in addition to generating a new current
883 UUID.
884
885 This can be used to skip the initial sync, if you want to start
886 from scratch. This use-case only works on "Just Created" meta
887 data. Necessary steps:
888
889 1. On both nodes, initialize meta data and configure the device.
890
891 drbdadm create-md --force res
892
893 2. They need to do the initial handshake, so they know their
894 sizes.
895
896 drbdadm up res
897
898 3. They are now Connected Secondary/Secondary
899 Inconsistent/Inconsistent. Generate a new current-uuid and
900 clear the dirty bitmap.
901
902 drbdadm --clear-bitmap new-current-uuid res
903
904 4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
905 Make one side primary and create a file system.
906
907 drbdadm primary res
908
909 mkfs -t fs-type $(drbdadm sh-dev res)
910
911 One obvious side-effect is that the replica is full of old garbage
912 (unless you made them identical using other means), so any
913 online-verify is expected to find any number of out-of-sync blocks.
914
915 You must not use this on pre-existing data! Even though it may
916 appear to work at first glance, once you switch to the other node,
917 your data is toast, as it never got replicated. So do not leave out
918 the mkfs (or equivalent).
919
920 This can also be used to shorten the initial resync of a cluster
921 where the second node is added after the first node has gone into
922 production, by means of disk shipping. This use-case works on
923 disconnected devices only; the device may be in primary or
924 secondary role.
925
926 The necessary steps on the current active server are:
927
928 1. drbdsetup new-current-uuid --clear-bitmap minor
929
930 2. Take a copy of the current active server, e.g. by pulling a
931 disk out of the RAID1 controller or by copying with dd. You
932 need to copy the actual data and the meta data.
933
934 3. drbdsetup new-current-uuid minor
935
936 Now add the disk to the new secondary node, and join it to the
937 cluster. You will get a resync of those parts that were changed
938 since the first call to drbdsetup in step 1.
939
940 drbdsetup new-minor resource minor volume
941 Create a new replicated device within a resource. The command
942 creates a block device inode for the replicated device (by default,
943 /dev/drbdminor). The volume number identifies the device within the
944 resource.
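
      As a minimal sketch of how the object-creation commands of this page fit
      together (resource name, node-ids, minor and volume numbers, device
      names, and addresses are all placeholders; drbdadm normally generates
      such a sequence from the configuration file):

          drbdsetup new-resource r0 0
          drbdsetup new-minor r0 0 0
          drbdsetup new-peer r0 1 --protocol C
          drbdsetup new-path r0 1 ipv4:192.168.123.4:7788 ipv4:192.168.123.2:7788
          drbdsetup attach 0 /dev/sdb1 /dev/sdb1 internal
          drbdsetup connect r0 1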
945
946 drbdsetup new-resource resource node_id,
947 drbdsetup resource-options resource
948 The new-resource command creates a new resource. The
949 resource-options command changes the resource options of an
950 existing resource. Available options:
951
952 --auto-promote bool-value
953 A resource must be promoted to primary role before any of its
954 devices can be mounted or opened for writing.
955
956 Before DRBD 9, this could only be done explicitly ("drbdadm
957 primary"). Since DRBD 9, the auto-promote parameter allows to
958 automatically promote a resource to primary role when one of
959 its devices is mounted or opened for writing. As soon as all
960 devices are unmounted or closed with no more remaining users,
961 the role of the resource changes back to secondary.
962
963 Automatic promotion only succeeds if the cluster state allows
964 it (that is, if an explicit drbdadm primary command would
965 succeed). Otherwise, mounting or opening the device fails as it
966 already did before DRBD 9: the mount(2) system call fails with
967 errno set to EROFS (Read-only file system); the open(2) system
968 call fails with errno set to EMEDIUMTYPE (wrong medium type).
969
970 Irrespective of the auto-promote parameter, if a device is
971 promoted explicitly (drbdadm primary), it also needs to be
972 demoted explicitly (drbdadm secondary).
973
974 The auto-promote parameter is available since DRBD 9.0.0, and
975 defaults to yes.
976
977 --cpu-mask cpu-mask
978 Set the cpu affinity mask for DRBD kernel threads. The cpu mask
979 is specified as a hexadecimal number. The default value is 0,
980 which lets the scheduler decide which kernel threads run on
981 which CPUs. CPU numbers in cpu-mask which do not exist in the
982 system are ignored.
983
984 --on-no-data-accessible policy
985 Determine how to deal with I/O requests when the requested data
986 is not available locally or remotely (for example, when all
987 disks have failed). The defined policies are:
988
989 io-error
990 System calls fail with errno set to EIO.
991
992 suspend-io
993 The resource suspends I/O. I/O can be resumed by
994 (re)attaching the lower-level device, by connecting to a
995 peer which has access to the data, or by forcing DRBD to
996 resume I/O with drbdadm resume-io res. When no data is
997 available, forcing I/O to resume will result in the same
998 behavior as the io-error policy.
999
1000 This setting is available since DRBD 8.3.9; the default policy
1001 is io-error.
1002
1003 --peer-ack-window value
1004 On each node and for each device, DRBD maintains a bitmap of
1005 the differences between the local and remote data for each peer
1006 device. For example, in a three-node setup (nodes A, B, C) each
1007 with a single device, every node maintains one bitmap for each
1008 of its peers.
1009
1010 When nodes receive write requests, they know how to update the
1011 bitmaps for the writing node, but not how to update the bitmaps
1012 between themselves. In this example, when a write request
1013 propagates from node A to B and C, nodes B and C know that they
1014 have the same data as node A, but not whether or not they both
1015 have the same data.
1016
1017 As a remedy, the writing node occasionally sends peer-ack
1018 packets to its peers which tell them which state they are in
1019 relative to each other.
1020
1021 The peer-ack-window parameter specifies how much data a primary
1022 node may send before sending a peer-ack packet. A low value
1023 causes increased network traffic; a high value causes less
1024 network traffic but higher memory consumption on secondary
1025 nodes and higher resync times between the secondary nodes after
1026 primary node failures. (Note: peer-ack packets may be sent due
1027 to other reasons as well, e.g. membership changes or expiry of
1028 the peer-ack-delay timer.)
1029
1030 The default value for peer-ack-window is 2 MiB, the default
1031 unit is sectors. This option is available since 9.0.0.
1032
1033 --peer-ack-delay expiry-time
1034 If after the last finished write request no new write request
1035 gets issued for expiry-time, then a peer-ack packet is sent. If
1036 a new write request is issued before the timer expires, the
1037 timer gets reset to expiry-time. (Note: peer-ack packets may be
1038 sent due to other reasons as well, e.g. membership changes or
1039 the peer-ack-window option.)
1040
1041 This parameter may influence resync behavior on remote nodes.
1042 Peer nodes need to wait until they receive a peer-ack before
1043 releasing a lock on an AL-extent. Resync operations between
1044 peers may need to wait for these locks.
1045
1046 The default value for peer-ack-delay is 100 milliseconds, the
1047 default unit is milliseconds. This option is available since
1048 9.0.0.
1049
1050 --quorum value
1051 When activated, a cluster partition requires quorum in order to
1052 modify the replicated data set. That means a node in the
1053 cluster partition can only be promoted to primary if the
1054 cluster partition has quorum. Every node with a disk directly
1055 connected to the node that should be promoted counts. If a
1056 primary node should execute a write request, but the cluster
1057 partition has lost quorum, it will freeze IO or reject the
1058 write request with an error (depending on the on-no-quorum
1059 setting). Upon losing quorum, a primary always invokes the
1060 quorum-lost handler. The handler is intended for notification
1061 purposes; its return code is ignored.
1062
1063 The option's value might be set to off, majority, all or a
1064 numeric value. If you set it to a numeric value, make sure that
1065 the value is greater than half of your number of nodes. Quorum
1066 is a mechanism to avoid data divergence; it may be used
1067 instead of fencing when there are more than two replicas. It
1068 defaults to off.
1069
1070 If all missing nodes are marked as outdated, a partition always
1071 has quorum, no matter how small it is. I.e. if you disconnect
1072 all secondary nodes gracefully, a single primary continues to
1073 operate. The moment a single secondary is lost, it has to be
1074 assumed that it forms a partition with all the missing outdated
1075 nodes. If this partition is smaller than the other partition,
1076 quorum is lost at that moment.
1077
1078 If you want to allow permanently diskless nodes to gain
1079 quorum, it is recommended not to use majority or all. It is
1080 recommended to specify an absolute number, since DRBD's
1081 heuristic to determine the complete number of diskful nodes in
1082 the cluster is unreliable.
1083
1084 The quorum implementation is available starting with the DRBD
1085 kernel driver version 9.0.7.
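
      For example, in a three-node cluster one might enable majority quorum
      and let I/O fail when quorum is lost (resource name hypothetical):

          drbdsetup resource-options r0 --quorum majority --on-no-quorum io-error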
1086
1087 --quorum-minimum-redundancy value
1088 This option sets the minimal required number of nodes with an
1089 UpToDate disk to allow the partition to gain quorum. This is a
1090 different requirement than the plain quorum option expresses.
1091
1092 The option's value might be set to off, majority, all or a
1093 numeric value. If you set it to a numeric value, make sure that
1094 the value is greater than half of your number of nodes.
1095
1096 If you want to allow permanently diskless nodes to gain
1097 quorum, it is recommended not to use majority or all. It is
1098 recommended to specify an absolute number, since DRBD's
1099 heuristic to determine the complete number of diskful nodes in
1100 the cluster is unreliable.
1101
1102 This option is available starting with the DRBD kernel driver
1103 version 9.0.10.
1104
1105 --on-no-quorum {io-error | suspend-io}
1106 By default DRBD freezes IO on a device that has lost quorum. By
1107 setting on-no-quorum to io-error, it completes all IO
1108 operations with an error if quorum is lost.
1109
1110 The on-no-quorum option is available starting with the DRBD
1111 kernel driver version 9.0.8.
1112
1113
1114 drbdsetup outdate minor
1115 Mark the data on a lower-level device as outdated. This is used for
1116 fencing, and prevents the resource the device is part of from
1117 becoming primary in the future. See the --fencing net option.
1118
1119 drbdsetup pause-sync resource peer_node_id volume
1120 Stop resynchronizing between a local and a peer device by setting
1121 the local pause flag. The resync can only resume if the pause flags
1122 on both sides of a connection are cleared.
1123
1124 drbdsetup primary resource
1125 Change the role of a node in a resource to primary. This allows the
1126 replicated devices in this resource to be mounted or opened for
1127 writing. Available options:
1128
1129 --overwrite-data-of-peer
1130 This option is an alias for the --force option.
1131
1132 --force
1133 Force the resource to become primary even if some devices are
1134 not guaranteed to have up-to-date data. This option is used to
1135 turn one of the nodes in a newly created cluster into the
1136 primary node, or when manually recovering from a disaster.
1137
1138 Note that this can lead to split-brain scenarios. Also, when
1139 forcefully turning an inconsistent device into an up-to-date
1140 device, it is highly recommended to use any integrity checks
1141 available (such as a filesystem check) to make sure that the
1142 device can at least be used without crashing the system.
1143
1144 Note that DRBD usually only allows one node in a cluster to be in
1145 primary role at any time; this allows DRBD to coordinate access to
1146 the devices in a resource across nodes. The --allow-two-primaries
1147 network option changes this; in that case, a mechanism outside of
1148 DRBD needs to coordinate device access.
1149
1150 drbdsetup resize minor
1151 Reexamine the size of the lower-level devices of a replicated
1152 device on all nodes. This command is called after the lower-level
1153 devices on all nodes have been grown to adjust the size of the
1154 replicated device. Available options:
1155
1156 --assume-peer-has-space
1157 Resize the device even if some of the peer devices are not
1158 connected at the moment. DRBD will try to resize the peer
1159 devices when they next connect. It will refuse to connect to a
1160 peer device which is too small.
1161
1162 --assume-clean
1163 Do not resynchronize the added disk space; instead, assume that
1164 it is identical on all nodes. This option can be used when the
1165 disk space is uninitialized and differences do not matter, or
1166 when it is known to be identical on all nodes. See the
1167 drbdsetup verify command.
1168
1169 --size val
1170 This option can be used to online shrink the usable size of a
1171 drbd device. It is the user's responsibility to make sure that a
1172 file system on the device is not truncated by that operation.
1173
1174 --al-stripes val --al-stripe-size val
1175 These options may be used to change the layout of the activity
1176 log online. In case of internal meta data this may involve
1177 shrinking the user visible size at the same time (using the
1178 --size option) or increasing the available space on the backing
1179 devices.
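
      For example, after the backing devices on all nodes have been grown, the
      new space could be adopted without resynchronizing it (minor number
      hypothetical; only safe if the added area's content really does not
      matter or is known to be identical):

          drbdsetup resize 0 --assume-clean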
1180
1181
1182 drbdsetup resume-io minor
1183 Resume I/O on a replicated device. See the --fencing net option.
1184
1185 drbdsetup resume-sync resource peer_node_id volume
1186 Allow resynchronization to resume by clearing the local sync pause
1187 flag.
1188
1189 drbdsetup role resource
1190 Show the current role of a resource.
1191
1192 drbdsetup secondary resource
1193 Change the role of a node in a resource to secondary. This command
1194 fails if the replicated device is in use.
1195
1196 drbdsetup show {resource | all}
1197 Show the current configuration of a resource, or of all resources.
1198 Available options:
1199
1200 --show-defaults
1201 Show all configuration parameters, even the ones with default
1202 values. Normally, parameters with default values are not shown.
1203
1204
1205 drbdsetup show-gi resource peer_node_id volume
1206 Show the data generation identifiers for a device on a particular
1207 connection. In addition, explain the output. The output otherwise
1208 is the same as in the drbdsetup get-gi command.
1209
1210 drbdsetup state
1211 This is an alias for drbdsetup role. Deprecated.
1212
1213 drbdsetup status {resource | all}
1214 Show the status of a resource, or of all resources. The output
1215 consists of one paragraph for each configured resource. Each
1216 paragraph contains one line for each resource, followed by one line
1217 for each device, and one line for each connection. The device and
1218 connection lines are indented. The connection lines are followed by
1219 one line for each peer device; these lines are indented against the
1220 connection line.
1221
1222 Long lines are wrapped around at terminal width, and indented to
1223 indicate how the lines belong together. Available options:
1224
1225 --verbose
1226 Include more information in the output even when it is likely
1227 redundant or irrelevant.
1228
1229 --statistics
1230 Include data transfer statistics in the output.
1231
1232 --color={always | auto | never}
1233 Colorize the output. With --color=auto, drbdsetup emits color
1234 codes only when standard output is connected to a terminal.
1235
1236 For example, the non-verbose output for a resource with only one
1237 connection and only one volume could look like this:
1238
1239 drbd0 role:Primary
1240 disk:UpToDate
1241 host2.example.com role:Secondary
1242 disk:UpToDate
1243
1244
1245 With the --verbose option, the same resource could be reported as:
1246
1247 drbd0 node-id:1 role:Primary suspended:no
1248 volume:0 minor:1 disk:UpToDate blocked:no
1249 host2.example.com local:ipv4:192.168.123.4:7788
1250 peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1251 role:Secondary congested:no
1252 volume:0 replication:Connected disk:UpToDate resync-suspended:no
1253
1254
1255
1256 drbdsetup suspend-io minor
1257 Suspend I/O on a replicated device. It is not usually necessary to
1258 use this command.
1259
1260 drbdsetup verify resource peer_node_id volume
1261 Start online verification, change which part of the device will be
1262 verified, or stop online verification. The command requires the
1263 specified peer to be connected.
1264
1265 Online verification compares each disk block on the local and peer
1266 node. Blocks which differ between the nodes are marked as
1267 out-of-sync, but they are not automatically brought back into sync.
1268 To bring them into sync, the resource must be disconnected and
1269 reconnected. Progress can be monitored in the output of drbdsetup
1270 status --statistics. Available options:
1271
1272 --start position
1273 Define where online verification should start. This parameter
1274 is ignored if online verification is already in progress. If
1275 the start parameter is not specified, online verification will
1276 continue where it was interrupted (if the connection to the
1277 peer was lost while verifying), after the previous stop sector
1278 (if the previous online verification has finished), or at the
1279 beginning of the device (if the end of the device was reached,
1280 or online verify has not run before).
1281
1282 The position on disk is specified in disk sectors (512 bytes)
1283 by default.
1284
1285 --stop position
1286 Define where online verification should stop. If online
1287 verification is already in progress, the stop position of the
1288 active online verification process is changed. Use this to stop
1289 online verification.
1290
1291 The position on disk is specified in disk sectors (512 bytes)
1292 by default.
1293
1294 Also see the notes on data integrity in the drbd.conf(5) manual
1295 page.
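
      For example, to verify only the first GiB of volume 0 against the peer
      with node-id 1 (names are placeholders; positions are in 512-byte
      sectors):

          drbdsetup verify r0 1 0 --start 0 --stop 2097152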
1296
1297 drbdsetup wait-connect-volume resource peer_node_id volume,
1298 drbdsetup wait-connect-connection resource peer_node_id,
1299 drbdsetup wait-connect-resource resource,
1300 drbdsetup wait-sync-volume resource peer_node_id volume,
1301 drbdsetup wait-sync-connection resource peer_node_id,
1302 drbdsetup wait-sync-resource resource
1303 The wait-connect-* commands wait until a device on a peer is
1304 visible. The wait-sync-* commands wait until a device on a peer is
1305 up to date. Available options for both commands:
1306
1307 --degr-wfc-timeout timeout
1308 Define how long to wait until all peers are connected in case
1309 the cluster consisted of a single node only when the system
1310 went down. This parameter is usually set to a value smaller
1311 than wfc-timeout. The assumption here is that peers which were
1312 unreachable before a reboot are less likely to be reachable
1313 after the reboot, so waiting is less likely to help.
1314
1315 The timeout is specified in seconds. The default value is 0,
1316 which stands for an infinite timeout. Also see the wfc-timeout
1317 parameter.
1318
1319 --outdated-wfc-timeout timeout
1320 Define how long to wait until all peers are connected if all
1321 peers were outdated when the system went down. This parameter
1322 is usually set to a value smaller than wfc-timeout. The
1323 assumption here is that an outdated peer cannot have become
1324 primary in the meantime, so we don't need to wait for it as
1325 long as for a node which was alive before.
1326
1327 The timeout is specified in seconds. The default value is 0,
1328 which stands for an infinite timeout. Also see the wfc-timeout
1329 parameter.
1330
1331 --wait-after-sb
1332 This parameter causes DRBD to continue waiting in the init
1333 script even when a split-brain situation has been detected, and
1334 the nodes therefore refuse to connect to each other.
1335
1336 --wfc-timeout timeout
1337 Define how long the init script waits until all peers are
1338 connected. This can be useful in combination with a cluster
1339 manager which cannot manage DRBD resources: when the cluster
1340 manager starts, the DRBD resources will already be up and
1341 running. With a more capable cluster manager such as Pacemaker,
1342 it makes more sense to let the cluster manager control DRBD
1343 resources. The timeout is specified in seconds. The default
1344 value is 0, which stands for an infinite timeout. Also see the
1345 degr-wfc-timeout parameter.
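
      For example, an init script might wait at most 60 seconds for all peers
      of a hypothetical resource r0 to connect:

          drbdsetup wait-connect-resource r0 --wfc-timeout 60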
1346
1347
1348 drbdsetup forget-peer resource peer_node_id
1349 The forget-peer command removes all traces of a peer node from the
1350 meta-data. It frees a bitmap slot in the meta-data and makes it
1351 available for further bitmap slot allocation in case a so-far never
1352 seen node connects.
1353
1354 The connection must be taken down before this command may be used.
1355 If the peer re-connects at a later point, a bitmap-based
1356 resync will be turned into a full sync.
1357
1358EXAMPLES
1359 Please see the DRBD User's Guide[1] for examples.
1360
1361VERSION
1362 This document was revised for version 9.0.0 of the DRBD distribution.
1363
1364AUTHOR
1365 Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1366 Ellenberg <lars.ellenberg@linbit.com>.
1367
1368REPORTING BUGS
1369 Report bugs to <drbd-user@lists.linbit.com>.
1370
1371COPYRIGHT
1372 Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1373 Lars Ellenberg. This is free software; see the source for copying
1374 conditions. There is NO warranty; not even for MERCHANTABILITY or
1375 FITNESS FOR A PARTICULAR PURPOSE.
1376
1377SEE ALSO
1378 drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1379 Site[2]
1380
1381NOTES
1382 1. DRBD User's Guide
1383 http://www.drbd.org/users-guide/
1384
1385 2. DRBD Web Site
1386 http://www.drbd.org/
1387
1388
1389
1390DRBD 9.0.x 17 January 2018 DRBDSETUP(8)