DRBDSETUP(8)                 System Administration                DRBDSETUP(8)

NAME
       drbdsetup - Configure the DRBD kernel module

SYNOPSIS
       drbdsetup command {argument...} [option...]

DESCRIPTION
       The drbdsetup utility serves to configure the DRBD kernel module and
       to show its current configuration. Users usually interact with the
       drbdadm utility, which provides a higher-level interface to DRBD than
       drbdsetup. (See drbdadm's --dry-run option to see how drbdadm uses
       drbdsetup.)

       Some option arguments have a default scale which applies when a plain
       number is specified (for example Kilo, or 1024 times the numeric
       value). Such default scales can be overridden by using a suffix (for
       example, M for Mega). The common suffixes K = 2^10 = 1024, M = 1024 K,
       and G = 1024 M are supported.

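The scale rules above can be sketched as follows (an illustrative helper, not part of drbdsetup; the default scale of Kilo is only an example, as each option defines its own):

```python
# Hypothetical sketch of the suffix rules described above: a plain
# number is multiplied by the option's default scale, while a K/M/G
# suffix overrides that default.
SUFFIXES = {"K": 2**10, "M": 2**20, "G": 2**30}

def parse_size(value, default_scale=2**10):
    """Resolve a drbdsetup-style size argument to a plain number."""
    value = value.strip()
    if value and value[-1].upper() in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1].upper()]
    return int(value) * default_scale
```

For example, with a default scale of Kilo, "4" resolves to 4096 while "4M" resolves to 4 * 2^20.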
COMMANDS
   drbdsetup attach minor lower_dev meta_data_dev meta_data_index,
   drbdsetup disk-options minor
       The attach command attaches a lower-level device to an existing
       replicated device. The disk-options command changes the disk
       options of an attached lower-level device. In either case, the
       replicated device must have been created with drbdsetup new-minor.

       Both commands refer to the replicated device by its minor number.
       lower_dev is the name of the lower-level device. meta_data_dev is
       the name of the device containing the metadata, and may be the same
       as lower_dev. meta_data_index is either a numeric metadata index,
       or the keyword internal for internal metadata, or the keyword
       flexible for variable-size external metadata. Available options:

       --al-extents extents
           DRBD automatically maintains a "hot" or "active" disk area
           likely to be written to again soon, based on recent write
           activity. The "active" disk area can be written to immediately,
           while "inactive" disk areas must be "activated" first, which
           requires a meta-data write. We also refer to this active disk
           area as the "activity log".

           The activity log saves meta-data writes, but the whole log must
           be resynced upon recovery of a failed node. The size of the
           activity log is a major factor in how long a resync will take
           and how fast a replicated disk will become consistent after a
           crash.

           The activity log consists of a number of 4-Megabyte segments;
           the al-extents parameter determines how many of those segments
           can be active at the same time. The default value for
           al-extents is 1237, with a minimum of 7 and a maximum of 65536.
           Note that the effective maximum may be smaller, depending on
           how you created the device meta data; see also drbdmeta(8). The
           effective maximum is 919 * (available on-disk activity-log
           ring-buffer area / 4 kB - 1); the default 32 kB ring-buffer
           yields a maximum of 6433 (which covers more than 25 GiB of
           data). We recommend keeping this well within the amount your
           backend storage and replication link are able to resync within
           about 5 minutes.

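The arithmetic above can be checked directly (an illustrative sketch of the stated formula, not DRBD code):

```python
# Effective al-extents maximum for a given on-disk activity-log
# ring-buffer size, per the formula above: 919 * (area / 4 kB - 1).
def effective_al_max(ring_buffer_kib):
    return 919 * (ring_buffer_kib // 4 - 1)

# How much data the activity log then covers, in GiB, given that
# each activity-log extent corresponds to a 4-MiB segment.
def al_coverage_gib(extents, segment_mib=4):
    return extents * segment_mib / 1024
```

With the default 32 kB ring buffer this gives 919 * 7 = 6433 extents, covering roughly 25.1 GiB, matching the numbers quoted above.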
       --al-updates {yes | no}
           With this parameter, the activity log can be turned off
           entirely (see the al-extents parameter). This will speed up
           writes because fewer meta-data writes will be necessary, but
           the entire device needs to be resynchronized upon recovery of a
           failed primary node. The default value for al-updates is yes.

       --disk-barrier,
       --disk-flushes,
       --disk-drain
           DRBD has three methods of handling the ordering of dependent
           write requests:

           disk-barrier
               Use disk barriers to make sure that requests are written to
               disk in the right order. Barriers ensure that all requests
               submitted before a barrier make it to the disk before any
               requests submitted after the barrier. This is implemented
               using 'tagged command queuing' on SCSI devices and 'native
               command queuing' on SATA devices. Only some devices and
               device stacks support this method. The device mapper (LVM)
               only supports barriers in some configurations.

               Note that on systems which do not support disk barriers,
               enabling this option can lead to data loss or corruption.
               Until DRBD 8.4.1, disk-barrier was turned on if the I/O
               stack below DRBD did support barriers. Kernels since
               linux-2.6.36 (or 2.6.32 RHEL6) no longer allow DRBD to
               detect whether barriers are supported. Since drbd-8.4.2,
               this option is off by default and needs to be enabled
               explicitly.

           disk-flushes
               Use disk flushes between dependent write requests, also
               referred to as 'force unit access' by drive vendors. This
               forces all data to disk. This option is enabled by default.

           disk-drain
               Wait for the request queue to "drain" (that is, wait for
               the requests to finish) before submitting a dependent write
               request. This method requires that requests are stable on
               disk when they finish. Before DRBD 8.0.9, this was the only
               method implemented. This option is enabled by default. Do
               not disable it in production environments.

           Of these three methods, DRBD will use the first that is
           enabled and supported by the backing storage device. If all
           three of these options are turned off, DRBD will submit write
           requests without bothering about dependencies. Depending on the
           I/O stack, write requests can be reordered, and they can be
           submitted in a different order on different cluster nodes. This
           can result in data loss or corruption. Therefore, turning off
           all three methods of controlling write ordering is strongly
           discouraged.

           A general guideline for configuring write ordering is to use
           disk barriers or disk flushes when using ordinary disks (or an
           ordinary disk array) with a volatile write cache. On storage
           without a cache, or with a battery-backed write cache, disk
           draining can be a reasonable choice.

       --disk-timeout
           If the lower-level device on which a DRBD device stores its
           data does not finish an I/O request within the defined
           disk-timeout, DRBD treats this as a failure. The lower-level
           device is detached, and the device's disk state advances to
           Diskless. If DRBD is connected to one or more peers, the failed
           request is passed on to one of them.

           This option is dangerous and may lead to kernel panic!

           "Aborting" requests, or force-detaching the disk, is intended
           for completely blocked/hung local backing devices which no
           longer complete requests at all, not even with error
           completions. In this situation, usually a hard-reset and
           failover is the only way out.

           By "aborting", basically faking a local error completion, we
           allow for a more graceful switchover by cleanly migrating
           services. Still, the affected node has to be rebooted "soon".

           By completing these requests, we allow the upper layers to
           re-use the associated data pages.

           If the local backing device later "recovers", and now DMAs some
           data from disk into the original request pages, in the best
           case it will just put random data into unused pages; but
           typically it will corrupt meanwhile completely unrelated data,
           causing all sorts of damage.

           This means that delayed successful completion, especially for
           READ requests, is a reason to panic(). We assume that a delayed
           *error* completion is OK, though we still will complain noisily
           about it.

           The default value of disk-timeout is 0, which stands for an
           infinite timeout. Timeouts are specified in units of 0.1
           seconds. This option is available since DRBD 8.3.12.

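The unit convention above is easy to get wrong; this tiny sketch (illustrative only) shows how a configured disk-timeout value maps to wall-clock time:

```python
# disk-timeout is given in tenths of a second; 0 means "never time
# out" (the default).
def disk_timeout_seconds(value_tenths):
    if value_tenths == 0:
        return float("inf")
    return value_tenths / 10.0
```

So a configured value of 50 means DRBD waits 5 seconds before treating the request as failed.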
       --md-flushes
           Enable disk flushes and disk barriers on the meta-data device.
           This option is enabled by default. See the disk-flushes
           parameter.

       --on-io-error handler
           Configure how DRBD reacts to I/O errors on a lower-level
           device. The following policies are defined:

           pass_on
               Change the disk status to Inconsistent, mark the failed
               block as inconsistent in the bitmap, and retry the I/O
               operation on a remote cluster node.

           call-local-io-error
               Call the local-io-error handler (see the handlers section).

           detach
               Detach the lower-level device and continue in diskless
               mode.

       --read-balancing policy
           Distribute read requests among cluster nodes as defined by
           policy. The supported policies are prefer-local (the default),
           prefer-remote, round-robin, least-pending,
           when-congested-remote, 32K-striping, 64K-striping,
           128K-striping, 256K-striping, 512K-striping and 1M-striping.

           This option is available since DRBD 8.4.1.

       --resync-after minor
           Define that a device should only resynchronize after the
           specified other device. By default, no order between devices is
           defined, and all devices will resynchronize in parallel.
           Depending on the configuration of the lower-level devices, and
           the available network and disk bandwidth, this can slow down
           the overall resync process. This option can be used to form a
           chain or tree of dependencies among devices.

       --size size
           Specify the size of the lower-level device explicitly instead
           of determining it automatically. The device size must be
           determined once and is remembered for the lifetime of the
           device. In order to determine it automatically, all the
           lower-level devices on all nodes must be attached, and all
           nodes must be connected. If the size is specified explicitly,
           this is not necessary. The size value is assumed to be in units
           of sectors (512 bytes) by default.

       --discard-zeroes-if-aligned {yes | no}
           There are several aspects to discard/trim/unmap support on
           Linux block devices. Even if discard is supported in general,
           it may fail silently, or may partially ignore discard requests.
           Devices also announce whether reading from unmapped blocks
           returns defined data (usually zeroes), or undefined data
           (possibly old data, possibly garbage).

           If, on different nodes, DRBD is backed by devices with
           differing discard characteristics, discards may lead to data
           divergence (old data or garbage left over on one backend,
           zeroes due to unmapped areas on the other backend). Online
           verify would then potentially report tons of spurious
           differences. While probably harmless for most use cases (fstrim
           on a file system), DRBD cannot have that.

           To play it safe, we have to disable discard support if our
           local backend (on a Primary) does not support
           "discard_zeroes_data=true". We also have to translate discards
           to explicit zero-out on the receiving side, unless the
           receiving side (Secondary) supports "discard_zeroes_data=true",
           thereby allocating areas that were supposed to be unmapped.

           There are some devices (notably LVM/DM thin provisioning)
           that are capable of discard, but announce
           discard_zeroes_data=false. In the case of DM-thin, discards
           aligned to the chunk size will be unmapped, and reading from
           unmapped sectors will return zeroes. However, unaligned partial
           head or tail areas of discard requests will be silently
           ignored.

           If we now add a helper to explicitly zero out these unaligned
           partial areas, while passing on the discard of the aligned full
           chunks, we effectively achieve discard_zeroes_data=true on such
           devices.

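The splitting described above can be sketched as follows (a hypothetical helper for illustration, not DRBD code): the aligned middle of a discard request can be passed down as a real discard, while the unaligned head and tail must be zeroed out explicitly.

```python
# Split a discard request into (head, aligned, tail) byte ranges,
# given the chunk size the thin-provisioned backend aligns to.
# head/tail must be zeroed explicitly; aligned can be discarded.
def split_discard(start, length, chunk):
    first_aligned = -(-start // chunk) * chunk        # round start up
    last_aligned = (start + length) // chunk * chunk  # round end down
    if first_aligned >= last_aligned:
        return (start, length), None, None            # nothing aligned
    head = (start, first_aligned - start)
    aligned = (first_aligned, last_aligned - first_aligned)
    tail = (last_aligned, start + length - last_aligned)
    return head, aligned, tail
```

For instance, a discard of 2000 bytes starting at offset 100 with a 1024-byte chunk yields a 924-byte head to zero, one aligned 1024-byte chunk to discard, and a 52-byte tail to zero.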
           Setting discard-zeroes-if-aligned to yes will allow DRBD to use
           discards, and to announce discard_zeroes_data=true, even on
           backends that announce discard_zeroes_data=false.

           Setting discard-zeroes-if-aligned to no will cause DRBD to
           always fall back to zero-out on the receiving side, and to not
           even announce discard capabilities on the Primary, if the
           respective backend announces discard_zeroes_data=false.

           We used to ignore the discard_zeroes_data setting completely.
           To not break established and expected behaviour, and suddenly
           cause fstrim on thin-provisioned LVs to run out of space
           instead of freeing up space, the default value is yes.

           This option is available since 8.4.7.

       --disable-write-same {yes | no}
           Some disks announce WRITE_SAME support to the kernel but fail
           with an I/O error upon actually receiving such a request. This
           mostly happens when using virtualized disks -- notably, this
           behavior has been observed with VMware's virtual disks.

           When disable-write-same is set to yes, WRITE_SAME detection is
           manually overridden and support is disabled.

           The default value of disable-write-same is no. This option is
           available since 8.4.7.

       --rs-discard-granularity byte
           When rs-discard-granularity is set to a positive value, DRBD
           tries to perform resync operations in requests of this size.
           If such a block contains only zero bytes on the sync source
           node, the sync target node will issue a discard/trim/unmap
           command for the area.

           The value is constrained by the discard granularity of the
           backing block device. If rs-discard-granularity is not a
           multiple of the discard granularity of the backing block
           device, DRBD rounds it up. The feature only becomes active if
           the backing block device reads back zeroes after a discard
           command.

           The default value of rs-discard-granularity is 0. This option
           is available since 8.4.7.

   drbdsetup peer-device-options resource peer_node_id volume
       These are options that affect the peer's device.

       --c-delay-target delay_target,
       --c-fill-target fill_target,
       --c-max-rate max_rate,
       --c-plan-ahead plan_time
           Dynamically control the resync speed. The following modes are
           available:

           •   Dynamic control with fill target (default). Enabled when
               c-plan-ahead is non-zero and c-fill-target is non-zero. The
               goal is to fill the buffers along the data path with a
               defined amount of data. This mode is recommended when
               DRBD-proxy is used. Configured with c-plan-ahead,
               c-fill-target and c-max-rate.

           •   Dynamic control with delay target. Enabled when
               c-plan-ahead is non-zero (default) and c-fill-target is
               zero. The goal is to have a defined delay along the path.
               Configured with c-plan-ahead, c-delay-target and
               c-max-rate.

           •   Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD
               will try to perform resync I/O at a fixed rate. Configured
               with resync-rate.

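The mode selection above can be summarized as a small decision function (an illustrative sketch, not DRBD code; the defaults shown match the parameter defaults documented below):

```python
# Which resync controller mode is active for given c-plan-ahead /
# c-fill-target values. Defaults (20 and 100) select fill-target mode.
def resync_mode(c_plan_ahead=20, c_fill_target=100):
    if c_plan_ahead == 0:
        return "fixed resync rate"      # governed by resync-rate
    if c_fill_target != 0:
        return "dynamic, fill target"   # recommended with DRBD-proxy
    return "dynamic, delay target"
```

Note that switching from the default fill-target mode to delay-target mode is done by setting c-fill-target to zero, not by touching c-plan-ahead.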
           The c-plan-ahead parameter defines how fast DRBD adapts to
           changes in the resync speed. It should be set to five times the
           network round-trip time or more. The default value of
           c-plan-ahead is 20, in units of 0.1 seconds.

           The c-fill-target parameter defines how much resync data
           DRBD should aim to have in flight at all times. Common values
           for "normal" data paths range from 4K to 100K. The default
           value of c-fill-target is 100, in units of sectors.

           The c-delay-target parameter defines the delay in the resync
           path that DRBD should aim for. This should be set to five times
           the network round-trip time or more. The default value of
           c-delay-target is 10, in units of 0.1 seconds.

           The c-max-rate parameter limits the maximum bandwidth used by
           dynamically controlled resyncs. Setting this to zero removes
           the limitation (since DRBD 9.0.28). It should be set to either
           the bandwidth available between the DRBD hosts and the machines
           hosting DRBD-proxy, or to the available disk bandwidth. The
           default value of c-max-rate is 102400, in units of KiB/s.

           Dynamic resync speed control is available since DRBD 8.3.9.

       --c-min-rate min_rate
           A node which is primary and sync-source has to schedule
           application I/O requests and resync I/O requests. The
           c-min-rate parameter limits how much bandwidth is available for
           resync I/O; the remaining bandwidth is used for application
           I/O.

           A c-min-rate value of 0 means that there is no limit on the
           resync I/O bandwidth. This can slow down application I/O
           significantly. Use a value of 1 (1 KiB/s) for the lowest
           possible resync rate.

           The default value of c-min-rate is 250, in units of KiB/s.

       --resync-rate rate
           Define how much bandwidth DRBD may use for resynchronizing.
           DRBD allows "normal" application I/O even during a resync. If
           the resync takes up too much bandwidth, application I/O can
           become very slow. This parameter allows avoiding that. Note
           that this option only works when the dynamic resync controller
           is disabled.

   drbdsetup check-resize minor
       Remember the current size of the lower-level device of the
       specified replicated device. Used by drbdadm. The size information
       is stored in the file /var/lib/drbd/drbd-minor-minor.lkbd.

   drbdsetup new-peer resource peer_node_id,
   drbdsetup net-options resource peer_node_id
       The new-peer command creates a connection within a resource. The
       resource must have been created with drbdsetup new-resource. The
       net-options command changes the network options of an existing
       connection. Before a connection can be activated with the connect
       command, at least one path needs to be added with the new-path
       command. Available options:

       --after-sb-0pri policy
           Define how to react if a split-brain scenario is detected and
           neither of the two nodes is in primary role. (We detect
           split-brain scenarios when two nodes connect; split-brain
           decisions are always between two nodes.) The defined policies
           are:

           disconnect
               No automatic resynchronization; simply disconnect.

           discard-younger-primary,
           discard-older-primary
               Resynchronize from the node which became primary first
               (discard-younger-primary) or last (discard-older-primary).
               If both nodes became primary independently, the
               discard-least-changes policy is used.

           discard-zero-changes
               If only one of the nodes wrote data since the split brain
               situation was detected, resynchronize from this node to
               the other. If both nodes wrote data, disconnect.

           discard-least-changes
               Resynchronize from the node with more modified blocks.

           discard-node-nodename
               Always resynchronize to the named node.

       --after-sb-1pri policy
           Define how to react if a split-brain scenario is detected, with
           one node in primary role and one node in secondary role. (We
           detect split-brain scenarios when two nodes connect, so
           split-brain decisions are always between two nodes.) The
           defined policies are:

           disconnect
               No automatic resynchronization; simply disconnect.

           consensus
               Discard the data on the secondary node if the after-sb-0pri
               algorithm would also discard the data on the secondary
               node. Otherwise, disconnect.

           violently-as0p
               Always take the decision of the after-sb-0pri algorithm,
               even if it causes an erratic change of the primary's view
               of the data. This is only useful if a single-node file
               system (i.e., not OCFS2 or GFS) with the
               allow-two-primaries flag is used. This option can cause the
               primary node to crash, and should not be used.

           discard-secondary
               Discard the data on the secondary node.

           call-pri-lost-after-sb
               Always take the decision of the after-sb-0pri algorithm. If
               the decision is to discard the data on the primary node,
               call the pri-lost-after-sb handler on the primary node.

       --after-sb-2pri policy
           Define how to react if a split-brain scenario is detected and
           both nodes are in primary role. (We detect split-brain
           scenarios when two nodes connect, so split-brain decisions are
           always between two nodes.) The defined policies are:

           disconnect
               No automatic resynchronization; simply disconnect.

           violently-as0p
               See the violently-as0p policy for after-sb-1pri.

           call-pri-lost-after-sb
               Call the pri-lost-after-sb helper program on one of the
               machines unless that machine can demote to secondary. The
               helper program is expected to reboot the machine, which
               brings the node into a secondary role. Which machine runs
               the helper program is determined by the after-sb-0pri
               strategy.

       --allow-two-primaries
           The most common way to configure DRBD devices is to allow only
           one node to be primary (and thus writable) at a time.

           In some scenarios it is preferable to allow two nodes to be
           primary at once; a mechanism outside of DRBD then must make
           sure that writes to the shared, replicated device happen in a
           coordinated way. This can be done with a shared-storage cluster
           file system like OCFS2 and GFS, or with virtual machine images
           and a virtual machine manager that can migrate virtual machines
           between physical machines.

           The allow-two-primaries parameter tells DRBD to allow two nodes
           to be primary at the same time. Never enable this option when
           using a non-distributed file system; otherwise, data corruption
           and node crashes will result!

       --always-asbp
           Normally the automatic after-split-brain policies are only used
           if the current states of the UUIDs do not indicate the presence
           of a third node.

           With this option you request that the automatic
           after-split-brain policies are used as long as the data sets of
           the nodes are somehow related. This might cause a full sync if
           the UUIDs indicate the presence of a third node (or if double
           faults led to strange UUID sets).

       --connect-int time
           As soon as a connection between two nodes is configured with
           drbdsetup connect, DRBD immediately tries to establish the
           connection. If this fails, DRBD waits for connect-int seconds
           and then repeats. The default value of connect-int is 10
           seconds.

       --cram-hmac-alg hash-algorithm
           Configure the hash-based message authentication code (HMAC) or
           secure hash algorithm to use for peer authentication. The
           kernel supports a number of different algorithms, some of which
           may be loadable as kernel modules. See the shash algorithms
           listed in /proc/crypto. By default, cram-hmac-alg is unset.
           Peer authentication also requires a shared-secret to be
           configured.

       --csums-alg hash-algorithm
           Normally, when two nodes resynchronize, the sync target
           requests a piece of out-of-sync data from the sync source, and
           the sync source sends the data. With many usage patterns, a
           significant number of those blocks will actually be identical.

           When a csums-alg algorithm is specified, when requesting a
           piece of out-of-sync data, the sync target also sends along a
           hash of the data it currently has. The sync source compares
           this hash with its own version of the data. It sends the sync
           target the new data if the hashes differ, and tells it that the
           data are the same otherwise. This reduces the network bandwidth
           required, at the cost of higher cpu utilization and possibly
           increased I/O on the sync target.

           The csums-alg can be set to one of the secure hash algorithms
           supported by the kernel; see the shash algorithms listed in
           /proc/crypto. By default, csums-alg is unset.

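The exchange described above can be sketched as follows (illustrative only: DRBD uses a kernel shash algorithm chosen via csums-alg, while SHA-256 here merely stands in for it):

```python
import hashlib

# What the sync source sends back for one out-of-sync block: the data
# itself if the target's copy differs, or just an "in-sync" note if
# the hashes match (saving the data transfer).
def source_reply(source_block, target_hash):
    if hashlib.sha256(source_block).digest() == target_hash:
        return ("in-sync", None)
    return ("data", source_block)
```

Usage: the target hashes its possibly-stale copy and sends that hash along with the request; only genuinely different blocks travel over the network.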
       --csums-after-crash-only
           Enabling this option (and csums-alg, above) makes it possible
           to use checksum-based resync only for the first resync after a
           primary crash, but not for later "network hiccups".

           In most cases, blocks that are marked as need-to-be-resynced
           have in fact changed, so calculating checksums, and both
           reading and writing the blocks on the resync target, is pure
           overhead.

           The advantage of checksum-based resync shows mostly after
           primary crash recovery, where the recovery marked larger areas
           (those covered by the activity log) as need-to-be-resynced,
           just in case. Introduced in 8.4.5.

       --data-integrity-alg alg
           DRBD normally relies on the data integrity checks built into
           the TCP/IP protocol, but if a data integrity algorithm is
           configured, it will additionally use this algorithm to make
           sure that the data received over the network match what the
           sender has sent. If a data integrity error is detected, DRBD
           will close the network connection and reconnect, which will
           trigger a resync.

           The data-integrity-alg can be set to one of the secure hash
           algorithms supported by the kernel; see the shash algorithms
           listed in /proc/crypto. By default, this mechanism is turned
           off.

           Because of the CPU overhead involved, we recommend not using
           this option in production environments. Also see the notes on
           data integrity below.

       --fencing fencing_policy
           Fencing is a preventive measure to avoid situations where both
           nodes are primary and disconnected. This is also known as a
           split-brain situation. DRBD supports the following fencing
           policies:

           dont-care
               No fencing actions are taken. This is the default policy.

           resource-only
               If a node becomes a disconnected primary, it tries to fence
               the peer. This is done by calling the fence-peer handler.
               The handler is supposed to reach the peer over an
               alternative communication path and call 'drbdadm outdate
               minor' there.

           resource-and-stonith
               If a node becomes a disconnected primary, it freezes all
               its I/O operations and calls its fence-peer handler. The
               fence-peer handler is supposed to reach the peer over an
               alternative communication path and call 'drbdadm outdate
               minor' there. If it cannot do that, it should stonith the
               peer. I/O is resumed as soon as the situation is resolved.
               If the fence-peer handler fails, I/O can be resumed
               manually with 'drbdadm resume-io'.

       --ko-count number
           If a secondary node fails to complete a write request within
           ko-count times the timeout parameter, it is excluded from the
           cluster. The primary node then sets the connection to this
           secondary node to Standalone. To disable this feature, you
           should explicitly set it to 0; defaults may change between
           versions.

       --max-buffers number
           Limits the memory usage per DRBD minor device on the receiving
           side, or for internal buffers during resync or online-verify.
           The unit is PAGE_SIZE, which is 4 KiB on most systems. The
           minimum possible setting is hard-coded to 32 (=128 KiB). These
           buffers are used to hold data blocks while they are written
           to/read from disk. To avoid possible distributed deadlocks on
           congestion, this setting is used as a throttle threshold rather
           than a hard limit. Once more than max-buffers pages are in use,
           further allocation from this pool is throttled. You want to
           increase max-buffers if you cannot saturate the I/O backend on
           the receiving side.

       --max-epoch-size number
           Define the maximum number of write requests DRBD may issue
           before issuing a write barrier. The default value is 2048, with
           a minimum of 1 and a maximum of 20000. Setting this parameter
           to a value below 10 is likely to decrease performance.

       --on-congestion policy,
       --congestion-fill threshold,
       --congestion-extents threshold
           By default, DRBD blocks when the TCP send queue is full. This
           prevents applications from generating further write requests
           until more buffer space becomes available again.

           When DRBD is used together with DRBD-proxy, it can be better to
           use the pull-ahead on-congestion policy, which can switch DRBD
           into ahead/behind mode before the send queue is full. DRBD then
           records the differences between itself and the peer in its
           bitmap, but it no longer replicates them to the peer. When
           enough buffer space becomes available again, the node
           resynchronizes with the peer and switches back to normal
           replication.

           This has the advantage of not blocking application I/O even
           when the queues fill up, and the disadvantage that peer nodes
           can fall behind much further. Also, while resynchronizing, peer
           nodes will become inconsistent.

           The available congestion policies are block (the default) and
           pull-ahead. The congestion-fill parameter defines how much data
           is allowed to be "in flight" in this connection. The default
           value is 0, which disables this mechanism of congestion
           control, with a maximum of 10 GiBytes. The congestion-extents
           parameter defines how many bitmap extents may be active before
           switching into ahead/behind mode, with the same default and
           limits as the al-extents parameter. The congestion-extents
           parameter is effective only when set to a value smaller than
           al-extents.

           Ahead/behind mode is available since DRBD 8.3.10.

       --ping-int interval
           When the TCP/IP connection to a peer is idle for more than
           ping-int seconds, DRBD will send a keep-alive packet to make
           sure that a failed peer or network connection is detected
           reasonably soon. The default value is 10 seconds, with a
           minimum of 1 and a maximum of 120 seconds. The unit is seconds.

       --ping-timeout timeout
           Define the timeout for replies to keep-alive packets. If the
           peer does not reply within ping-timeout, DRBD will close and
           try to reestablish the connection. The default value is 0.5
           seconds, with a minimum of 0.1 seconds and a maximum of 30
           seconds. The unit is tenths of a second.

       --socket-check-timeout timeout
           In setups involving a DRBD-proxy and connections that
           experience a lot of buffer-bloat, it might be necessary to set
           ping-timeout to an unusually high value. By default, DRBD uses
           the same value to wait for a newly established TCP connection
           to become stable. Since the DRBD-proxy is usually located in
           the same data center, such a long wait time may hinder DRBD's
           connect process.

           In such setups, socket-check-timeout should be set to at least
           the round-trip time between DRBD and DRBD-proxy, i.e. in most
           cases to 1.

           The default unit is tenths of a second; the default value is 0
           (which causes DRBD to use the value of ping-timeout instead).
           Introduced in 8.4.5.

       --protocol name
           Use the specified protocol on this connection. The supported
           protocols are:

           A
               Writes to the DRBD device complete as soon as they have
               reached the local disk and the TCP/IP send buffer.

           B
               Writes to the DRBD device complete as soon as they have
               reached the local disk, and all peers have acknowledged the
               receipt of the write requests.

           C
               Writes to the DRBD device complete as soon as they have
               reached the local and all remote disks.

       --rcvbuf-size size
           Configure the size of the TCP/IP receive buffer. A value of 0
           (the default) causes the buffer size to adjust dynamically.
           This parameter usually does not need to be set, but it can be
           set to a value up to 10 MiB. The default unit is bytes.

699 --rr-conflict policy
700 This option helps to solve the cases when the outcome of the
701 resync decision is incompatible with the current role
702 assignment in the cluster. The defined policies are:
703
704 disconnect
705 No automatic resynchronization, simply disconnect.
706
707 retry-connect
708 Disconnect now, and retry to connect immediatly afterwards.
709
710 violently
711 Resync to the primary node is allowed, violating the
712 assumption that data on a block device are stable for one
713 of the nodes. Do not use this option, it is dangerous.
714
715 call-pri-lost
716 Call the pri-lost handler on one of the machines. The
717 handler is expected to reboot the machine, which puts it
718 into secondary role.
719
720 auto-discard
721 Auto-discard reverses the resync direction, so that DRBD
722 resyncs the current primary to the current secondary.
723 Auto-discard only applies when protocol A is in use and the
724 resync decision is based on the principle that a crashed
725 primary should be the source of a resync. When a primary
726 node crashes, it might have written some last updates to
727 its disk, which were not received by a protocol A
728               secondary. By promoting the secondary in the meantime, the
729               user has accepted that those last updates were lost. By
730               using auto-discard, you consent that the last updates (made
731               before the crash of the primary) are rolled back
732               automatically.
733
734 --shared-secret secret
735 Configure the shared secret used for peer authentication. The
736 secret is a string of up to 64 characters. Peer authentication
737 also requires the cram-hmac-alg parameter to be set.
738
739 --sndbuf-size size
740 Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13
741 / 8.2.7, a value of 0 (the default) causes the buffer size to
742 adjust dynamically. Values below 32 KiB are harmful to the
743 throughput on this connection. Large buffer sizes can be useful
744 especially when protocol A is used over high-latency networks;
745 the maximum value supported is 10 MiB.
746
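           As described at the top of this page, size arguments such as
           sndbuf-size and rcvbuf-size accept the suffixes K, M, and G
           (powers of 1024). A minimal sketch of such suffix handling,
           as a hypothetical helper rather than drbdsetup's actual parser:

```python
# Sketch: parse a size argument with the binary K/M/G suffixes described
# in this manual page. Hypothetical helper, not drbdsetup code.
SUFFIXES = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

def parse_size(value, default_scale=1):
    """Return the size in bytes; a plain number uses default_scale."""
    value = value.strip()
    if value and value[-1].upper() in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1].upper()]
    return int(value) * default_scale

print(parse_size("32K"))  # 32768
print(parse_size("10M"))  # 10485760
```

           For example, a sndbuf-size of 10M corresponds to the documented
           10 MiB maximum.
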
747 --tcp-cork
748 By default, DRBD uses the TCP_CORK socket option to prevent the
749 kernel from sending partial messages; this results in fewer and
750 bigger packets on the network. Some network stacks can perform
751 worse with this optimization. On these, the tcp-cork parameter
752 can be used to turn this optimization off.
753
754 --timeout time
755 Define the timeout for replies over the network: if a peer node
756 does not send an expected reply within the specified timeout,
757 it is considered dead and the TCP/IP connection is closed. The
758 timeout value must be lower than connect-int and lower than
759 ping-int. The default is 6 seconds; the value is specified in
760 tenths of a second.
761
762 --use-rle
763 Each replicated device on a cluster node has a separate bitmap
764 for each of its peer devices. The bitmaps are used for tracking
765 the differences between the local and peer device: depending on
766 the cluster state, a disk range can be marked as different from
767 the peer in the device's bitmap, in the peer device's bitmap,
768 or in both bitmaps. When two cluster nodes connect, they
769 exchange each other's bitmaps, and they each compute the union
770 of the local and peer bitmap to determine the overall
771 differences.
772
773 Bitmaps of very large devices are also relatively large, but
774 they usually compress very well using run-length encoding. This
775 can save time and bandwidth for the bitmap transfers.
776
777 The use-rle parameter determines if run-length encoding should
778 be used. It is on by default since DRBD 8.4.0.
779
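           Why long runs compress well can be illustrated with a small
           sketch of run-length encoding. This is illustrative only; DRBD's
           actual on-the-wire bitmap encoding differs.

```python
# Illustrative run-length encoding of a sync bitmap, viewed as a sequence
# of bits (0 = in sync, 1 = out of sync). DRBD's wire format differs; this
# only shows why a mostly-clean bitmap shrinks to a handful of runs.
def rle(bits):
    runs = []
    count = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    if bits:
        runs.append((bits[-1], count))
    return runs

bitmap = [0] * 1000 + [1] * 8 + [0] * 1000
print(rle(bitmap))  # [(0, 1000), (1, 8), (0, 1000)]
```
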
780 --verify-alg hash-algorithm
781 Online verification (drbdadm verify) computes and compares
782 checksums of disk blocks (i.e., hash values) in order to detect
783 if they differ. The verify-alg parameter determines which
784 algorithm to use for these checksums. It must be set to one of
785 the secure hash algorithms supported by the kernel before
786 online verify can be used; see the shash algorithms listed in
787 /proc/crypto.
788
789           We recommend scheduling online verifications regularly during
790 low-load periods, for example once a month. Also see the notes
791 on data integrity below.
792
793 drbdsetup new-path resource peer_node_id local-addr remote-addr
794 The new-path command creates a path within a connection. The
795 connection must have been created with drbdsetup new-peer.
796       local-addr and remote-addr refer to the local and remote protocol,
797 network address, and port in the format
798 [address-family:]address[:port]. The address families ipv4, ipv6,
799 ssocks (Dolphin Interconnect Solutions' "super sockets"), sdp
800 (Infiniband Sockets Direct Protocol), and sci are supported (sci is
801 an alias for ssocks). If no address family is specified, ipv4 is
802 assumed. For all address families except ipv6, the address uses
803 IPv4 address notation (for example, 1.2.3.4). For ipv6, the address
804 is enclosed in brackets and uses IPv6 address notation (for
805 example, [fd01:2345:6789:abcd::1]). The port defaults to 7788.
806
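       The [address-family:]address[:port] format described above can be
       sketched with a small parser. This is illustrative only and not
       drbdsetup's actual implementation.

```python
# Illustrative parser for the [address-family:]address[:port] format
# described above (not drbdsetup's actual implementation).
FAMILIES = {"ipv4", "ipv6", "ssocks", "sdp", "sci"}

def parse_addr(spec, default_port=7788):
    family = "ipv4"                    # assumed when no family is given
    rest = spec
    head, sep, tail = spec.partition(":")
    if sep and head in FAMILIES:
        family, rest = head, tail
    if rest.startswith("["):           # IPv6 notation: [addr][:port]
        addr, _, port = rest[1:].partition("]")
        port = port.lstrip(":") or str(default_port)
    else:                              # IPv4 notation: addr[:port]
        addr, _, port = rest.partition(":")
        port = port or str(default_port)
    return family, addr, int(port)

print(parse_addr("1.2.3.4"))              # ('ipv4', '1.2.3.4', 7788)
print(parse_addr("ipv6:[fd01::1]:7789"))  # ('ipv6', 'fd01::1', 7789)
```
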
807 drbdsetup connect resource peer_node_id
808 The connect command activates a connection. That means that the
809 DRBD driver will bind and listen on all local addresses of the
810       connection's paths. It will begin to try to establish one or more
811 paths of the connection. Available options:
812
813 --tentative
814 Only determine if a connection to the peer can be established
815 and if a resync is necessary (and in which direction) without
816 actually establishing the connection or starting the resync.
817 Check the system log to see what DRBD would do without the
818 --tentative option.
819
820 --discard-my-data
821 Discard the local data and resynchronize with the peer that has
822           the most up-to-date data. Use this option to manually recover
823 from a split-brain situation.
824
825 drbdsetup del-peer resource peer_node_id
826 The del-peer command removes a connection from a resource.
827
828 drbdsetup del-path resource peer_node_id local-addr remote-addr
829       The del-path command removes a path from a connection. Please note
830       that it fails if the path is necessary to keep an established
831       connection intact. In order to remove all paths, disconnect the
832       connection first.
833
834 drbdsetup cstate resource peer_node_id
835 Show the current state of a connection. The connection is
836 identified by the node-id of the peer; see the drbdsetup connect
837 command.
838
839 drbdsetup del-minor minor
840 Remove a replicated device. No lower-level device may be attached;
841 see drbdsetup detach.
842
843 drbdsetup del-resource resource
844 Remove a resource. All volumes and connections must be removed
845 first (drbdsetup del-minor, drbdsetup disconnect). Alternatively,
846 drbdsetup down can be used to remove a resource together with all
847 its volumes and connections.
848
849 drbdsetup detach minor
850 Detach the lower-level device of a replicated device. Available
851 options:
852
853 --force
854 Force the detach and return immediately. This puts the
855 lower-level device into failed state until all pending I/O has
856 completed, and then detaches the device. Any I/O not yet
857 submitted to the lower-level device (for example, because I/O
858 on the device was suspended) is assumed to have failed.
859
860
861 drbdsetup disconnect resource peer_node_id
862 Remove a connection to a peer host. The connection is identified by
863 the node-id of the peer; see the drbdsetup connect command.
864
865 drbdsetup down {resource | all}
866 Take a resource down by removing all volumes, connections, and the
867 resource itself.
868
869 drbdsetup dstate minor
870 Show the current disk state of a lower-level device.
871
872 drbdsetup events2 {resource | all}
873 Show the current state of all configured DRBD objects, followed by
874 all changes to the state.
875
876 The output format is meant to be human as well as machine readable.
877 The line starts with a word that indicates the kind of event:
878 exists for an existing object; create, destroy, and change if an
879 object is created, destroyed, or changed; call or response if an
880 event handler is called or it returns; or rename when the name of
881 an object is changed. The second word indicates the object the
882 event applies to: resource, device, connection, peer-device, path,
883 helper, or a dash (-) to indicate that the current state has been
884 dumped completely.
885
886 The remaining words identify the object and describe the state that
887 the object is in. Some special keys are worth mentioning:
888
889 resource may_promote:{yes|no}
890 Whether promoting to primary is expected to succeed. When
891 quorum is enabled, this can be used to trigger failover. When
892 may_promote:yes is reported on this node, then no writes are
893 possible on any other node, which generally means that the
894 application can be started on this node, even when it has been
895 running on another.
896
897 resource promotion_score:score
898 An integer heuristic indicating the relative preference for
899 promoting this resource. A higher score is better in terms of
900 having local disks and having access to up-to-date data. The
901 score may be positive even when some node is primary. It will
902 be zero when promotion is impossible due to quorum or lack of
903 any access to up-to-date data.
904
905 Available options:
906
907 --now
908 Terminate after reporting the current state. The default is to
909 continuously listen and report state changes.
910
911 --poll
912 Read from stdin and update when n is read. Newlines are
913 ignored. Every other input terminates the command.
914
915 Without --now, changes are printed as usual. On each n the
916 current state is fetched, but only changed objects are printed.
917 This is useful with --statistics or --full because DRBD does
918 not otherwise send updates when only the statistics change.
919
920 In combination with --now the full state is printed on each n.
921 No other changes are printed.
922
923 --statistics
924 Include statistics in the output.
925
926 --diff
927 Write information in form of a diff between old and new state.
928 This helps simple tools to avoid (old) state tracking on their
929 own.
930
931 --full
932 Write complete state information, especially on change events.
933 This enables --statistics and --verbose.
934
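       The word-based line format described above makes events2 output
       easy to consume from scripts. An illustrative parser for a single
       line (the sample field names are taken from this page; this is a
       sketch, not DRBD tooling):

```python
# Illustrative parser for one line of `drbdsetup events2` output: the
# first word is the event kind, the second the object type, and the
# remaining words are key:value pairs describing the object's state.
def parse_event(line):
    words = line.split()
    kind, obj = words[0], words[1]
    fields = dict(w.split(":", 1) for w in words[2:] if ":" in w)
    return kind, obj, fields

line = "exists resource name:drbd0 role:Primary may_promote:yes"
kind, obj, fields = parse_event(line)
print(kind, obj, fields["may_promote"])  # exists resource yes
```

       A failover script could, for example, watch for may_promote:yes on
       resource lines before attempting a promotion.
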
935
936 drbdsetup get-gi resource peer_node_id volume
937 Show the data generation identifiers for a device on a particular
938 connection. The device is identified by its volume number. The
939 connection is identified by its endpoints; see the drbdsetup
940 connect command.
941
942 The output consists of the current UUID, bitmap UUID, and the first
943       two history UUIDs, followed by a set of flags. The current UUID and
944 history UUIDs are device specific; the bitmap UUID and flags are
945 peer device specific. This command only shows the first two history
946 UUIDs. Internally, DRBD maintains one history UUID for each
947 possible peer device.
948
949 drbdsetup invalidate minor
950 Replace the local data of a device with that of a peer. All the
951 local data will be marked out-of-sync, and a resync with the
952       specified peer device will be initiated.
953
954 Available options:
955
956 --reset-bitmap=no
957 Usually an invalidate operation sets all bits in the bitmap to
958 out-of-sync before beginning the resync from the peer. By
959 giving --reset-bitmap=no DRBD will use the bitmap as it is.
960 Usually this is used after an online verify operation found
961 differences in the backing devices.
962
963 The --reset-bitmap option is available since DRBD kernel driver
964 9.0.29 and drbd-utils 9.17.
965
966 --sync-from-peer-node-id
967 This option allows the caller to select the node to resync
968           from. If it is not given, DRBD selects a suitable source node
969 itself.
970
971
972 drbdsetup invalidate-remote resource peer_node_id volume
973 Replace a peer device's data of a resource with the local data. The
974 peer device's data will be marked out-of-sync, and a resync from
975 the local node to the specified peer will be initiated.
976
977 Available options:
978
979 --reset-bitmap=no
980 Usually an invalidate remote operation sets all bits in the
981 bitmap to out-of-sync before beginning the resync to the peer.
982 By giving --reset-bitmap=no DRBD will use the bitmap as it is.
983 Usually this is used after an online verify operation found
984 differences in the backing devices.
985
986 The --reset-bitmap option is available since DRBD kernel driver
987 9.0.29 and drbd-utils 9.17.
988
989
990 drbdsetup new-current-uuid minor
991       Generate a new current UUID and rotate all other UUID values. This
992 has at least two use cases, namely to skip the initial sync, and to
993 reduce network bandwidth when starting in a single node
994 configuration and then later (re-)integrating a remote site.
995
996 Available option:
997
998 --clear-bitmap
999 Clears the sync bitmap in addition to generating a new current
1000 UUID.
1001
1002       This can be used to skip the initial sync if you want to start
1003       from scratch. This use case only works on "Just Created" meta
1004 data. Necessary steps:
1005
1006 1. On both nodes, initialize meta data and configure the device.
1007
1008 drbdadm create-md --force res/volume-number
1009
1010 2. They need to do the initial handshake, so they know their
1011 sizes.
1012
1013 drbdadm up res
1014
1015 3. They are now Connected Secondary/Secondary
1016 Inconsistent/Inconsistent. Generate a new current-uuid and
1017 clear the dirty bitmap.
1018
1019 drbdadm --clear-bitmap new-current-uuid res
1020
1021 4. They are now Connected Secondary/Secondary UpToDate/UpToDate.
1022 Make one side primary and create a file system.
1023
1024 drbdadm primary res
1025
1026 mkfs -t fs-type $(drbdadm sh-dev res)
1027
1028 One obvious side-effect is that the replica is full of old garbage
1029 (unless you made them identical using other means), so any
1030 online-verify is expected to find any number of out-of-sync blocks.
1031
1032 You must not use this on pre-existing data! Even though it may
1033 appear to work at first glance, once you switch to the other node,
1034 your data is toast, as it never got replicated. So do not leave out
1035 the mkfs (or equivalent).
1036
1037       This can also be used to shorten the initial resync of a cluster
1038       where the second node is added after the first node has gone into
1039       production, by means of disk shipping. This use case works on
1040       disconnected devices only; the device may be in primary or
1041       secondary role.
1042
1043 The necessary steps on the current active server are:
1044
1045 1. drbdsetup new-current-uuid --clear-bitmap minor
1046
1047       2. Take a copy of the currently active server, e.g. by pulling a
1048          disk out of the RAID1 controller, or by copying with dd. You
1049          need to copy both the actual data and the meta data.
1050
1051 3. drbdsetup new-current-uuid minor
1052
1053 Now add the disk to the new secondary node, and join it to the
1054       cluster. You will get a resync of those parts that were changed
1055 since the first call to drbdsetup in step 1.
1056
1057 drbdsetup new-minor resource minor volume
1058 Create a new replicated device within a resource. The command
1059 creates a block device inode for the replicated device (by default,
1060 /dev/drbdminor). The volume number identifies the device within the
1061 resource.
1062
1063 drbdsetup new-resource resource node_id,
1064 drbdsetup resource-options resource
1065 The new-resource command creates a new resource. The
1066 resource-options command changes the resource options of an
1067 existing resource. Available options:
1068
1069 --auto-promote bool-value
1070 A resource must be promoted to primary role before any of its
1071 devices can be mounted or opened for writing.
1072
1073 Before DRBD 9, this could only be done explicitly ("drbdadm
1074           primary"). Since DRBD 9, the auto-promote parameter allows a
1075           resource to be promoted to primary role automatically when one of
1076 its devices is mounted or opened for writing. As soon as all
1077 devices are unmounted or closed with no more remaining users,
1078 the role of the resource changes back to secondary.
1079
1080 Automatic promotion only succeeds if the cluster state allows
1081 it (that is, if an explicit drbdadm primary command would
1082 succeed). Otherwise, mounting or opening the device fails as it
1083 already did before DRBD 9: the mount(2) system call fails with
1084 errno set to EROFS (Read-only file system); the open(2) system
1085 call fails with errno set to EMEDIUMTYPE (wrong medium type).
1086
1087 Irrespective of the auto-promote parameter, if a device is
1088 promoted explicitly (drbdadm primary), it also needs to be
1089 demoted explicitly (drbdadm secondary).
1090
1091 The auto-promote parameter is available since DRBD 9.0.0, and
1092 defaults to yes.
1093
1094 --cpu-mask cpu-mask
1095 Set the cpu affinity mask for DRBD kernel threads. The cpu mask
1096 is specified as a hexadecimal number. The default value is 0,
1097 which lets the scheduler decide which kernel threads run on
1098 which CPUs. CPU numbers in cpu-mask which do not exist in the
1099 system are ignored.
1100
1101 --on-no-data-accessible policy
1102 Determine how to deal with I/O requests when the requested data
1103 is not available locally or remotely (for example, when all
1104 disks have failed). When quorum is enabled,
1105 on-no-data-accessible should be set to the same value as
1106 on-no-quorum. The defined policies are:
1107
1108 io-error
1109 System calls fail with errno set to EIO.
1110
1111 suspend-io
1112 The resource suspends I/O. I/O can be resumed by
1113 (re)attaching the lower-level device, by connecting to a
1114 peer which has access to the data, or by forcing DRBD to
1115 resume I/O with drbdadm resume-io res. When no data is
1116 available, forcing I/O to resume will result in the same
1117 behavior as the io-error policy.
1118
1119 This setting is available since DRBD 8.3.9; the default policy
1120 is io-error.
1121
1122 --peer-ack-window value
1123 On each node and for each device, DRBD maintains a bitmap of
1124 the differences between the local and remote data for each peer
1125 device. For example, in a three-node setup (nodes A, B, C) each
1126 with a single device, every node maintains one bitmap for each
1127 of its peers.
1128
1129 When nodes receive write requests, they know how to update the
1130 bitmaps for the writing node, but not how to update the bitmaps
1131 between themselves. In this example, when a write request
1132 propagates from node A to B and C, nodes B and C know that they
1133 have the same data as node A, but not whether or not they both
1134 have the same data.
1135
1136 As a remedy, the writing node occasionally sends peer-ack
1137 packets to its peers which tell them which state they are in
1138 relative to each other.
1139
1140 The peer-ack-window parameter specifies how much data a primary
1141 node may send before sending a peer-ack packet. A low value
1142 causes increased network traffic; a high value causes less
1143 network traffic but higher memory consumption on secondary
1144 nodes and higher resync times between the secondary nodes after
1145 primary node failures. (Note: peer-ack packets may be sent due
1146 to other reasons as well, e.g. membership changes or expiry of
1147 the peer-ack-delay timer.)
1148
1149 The default value for peer-ack-window is 2 MiB, the default
1150 unit is sectors. This option is available since 9.0.0.
1151
1152 --peer-ack-delay expiry-time
1153           If no new write request gets issued for expiry-time after the
1154           last finished write request, a peer-ack packet is sent. If
1155 a new write request is issued before the timer expires, the
1156 timer gets reset to expiry-time. (Note: peer-ack packets may be
1157 sent due to other reasons as well, e.g. membership changes or
1158 the peer-ack-window option.)
1159
1160 This parameter may influence resync behavior on remote nodes.
1161           Peer nodes need to wait until they receive a peer-ack before
1162           releasing a lock on an AL-extent. Resync operations between
1163           peers may need to wait for these locks.
1164
1165 The default value for peer-ack-delay is 100 milliseconds, the
1166 default unit is milliseconds. This option is available since
1167 9.0.0.
1168
1169 --quorum value
1170 When activated, a cluster partition requires quorum in order to
1171 modify the replicated data set. That means a node in the
1172 cluster partition can only be promoted to primary if the
1173 cluster partition has quorum. Every node with a disk directly
1174 connected to the node that should be promoted counts. If a
1175 primary node should execute a write request, but the cluster
1176 partition has lost quorum, it will freeze IO or reject the
1177 write request with an error (depending on the on-no-quorum
1178           setting). Upon losing quorum, a primary always invokes the
1179           quorum-lost handler. The handler is intended for notification
1180           purposes; its return code is ignored.
1181
1182 The option's value might be set to off, majority, all or a
1183 numeric value. If you set it to a numeric value, make sure that
1184           the value is greater than half of your number of nodes. Quorum
1185           is a mechanism to avoid data divergence; it can be used
1186           instead of fencing when there are more than two replicas. It
1187           defaults to off.
1188
1189           If all missing nodes are marked as outdated, a partition always
1190           has quorum, no matter how small it is. That is, if you disconnect
1191           all secondary nodes gracefully, a single primary continues to
1192           operate. The moment a single secondary is lost, it has to be
1193           assumed that it forms a partition with all the missing outdated
1194           nodes. If your partition might then be smaller than the other,
1195           quorum is lost at that moment.
1196
1197           In case you want to allow permanently diskless nodes to gain
1198           quorum, it is recommended not to use majority or all. It is
1199           recommended to specify an absolute number, since DRBD's
1200           heuristic to determine the complete number of diskful nodes in
1201           the cluster is unreliable.
1202
1203 The quorum implementation is available starting with the DRBD
1204 kernel driver version 9.0.7.
1205
1206 --quorum-minimum-redundancy value
1207           This option sets the minimum required number of nodes with an
1208 UpToDate disk to allow the partition to gain quorum. This is a
1209 different requirement than the plain quorum option expresses.
1210
1211 The option's value might be set to off, majority, all or a
1212 numeric value. If you set it to a numeric value, make sure that
1213 the value is greater than half of your number of nodes.
1214
1215           In case you want to allow permanently diskless nodes to gain
1216           quorum, it is recommended not to use majority or all. It is
1217           recommended to specify an absolute number, since DRBD's
1218           heuristic to determine the complete number of diskful nodes in
1219           the cluster is unreliable.
1220
1221 This option is available starting with the DRBD kernel driver
1222 version 9.0.10.
1223
1224 --on-no-quorum {io-error | suspend-io}
1225           By default, DRBD freezes I/O on a device that has lost quorum.
1226           By setting on-no-quorum to io-error, DRBD instead completes all
1227           I/O operations with an error if quorum is lost.
1228
1229           Usually, on-no-data-accessible should be set to the same
1230           value as on-no-quorum, as on-no-quorum takes precedence.
1231
1232           The on-no-quorum option is available starting with the DRBD
1233           kernel driver version 9.0.8.
1234
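   For a numeric quorum value, "greater than half of your number of
   nodes" is the usual majority rule. The arithmetic can be sketched as
   follows; this is an illustration of the rule, not DRBD's
   implementation:

```python
# Illustrative quorum arithmetic: with `majority`, a partition of a
# cluster with n_nodes voting members has quorum only when it contains
# strictly more than half of them. Not DRBD's implementation.
def has_quorum(partition_size, n_nodes):
    return partition_size > n_nodes // 2

print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False
print(has_quorum(2, 4))  # False  (exactly half is not a majority)
```

   This is why an even number of replicas is a poor fit for majority
   quorum: a 2-of-4 split leaves neither side with quorum.
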
1235
1236 drbdsetup outdate minor
1237 Mark the data on a lower-level device as outdated. This is used for
1238 fencing, and prevents the resource the device is part of from
1239 becoming primary in the future. See the --fencing disk option.
1240
1241 drbdsetup pause-sync resource peer_node_id volume
1242 Stop resynchronizing between a local and a peer device by setting
1243 the local pause flag. The resync can only resume if the pause flags
1244 on both sides of a connection are cleared.
1245
1246 drbdsetup primary resource
1247 Change the role of a node in a resource to primary. This allows the
1248 replicated devices in this resource to be mounted or opened for
1249 writing. Available options:
1250
1251 --overwrite-data-of-peer
1252 This option is an alias for the --force option.
1253
1254 --force
1255 Force the resource to become primary even if some devices are
1256 not guaranteed to have up-to-date data. This option is used to
1257 turn one of the nodes in a newly created cluster into the
1258 primary node, or when manually recovering from a disaster.
1259
1260 Note that this can lead to split-brain scenarios. Also, when
1261 forcefully turning an inconsistent device into an up-to-date
1262 device, it is highly recommended to use any integrity checks
1263 available (such as a filesystem check) to make sure that the
1264 device can at least be used without crashing the system.
1265
1266 Note that DRBD usually only allows one node in a cluster to be in
1267 primary role at any time; this allows DRBD to coordinate access to
1268 the devices in a resource across nodes. The --allow-two-primaries
1269 network option changes this; in that case, a mechanism outside of
1270 DRBD needs to coordinate device access.
1271
1272 drbdsetup resize minor
1273 Reexamine the size of the lower-level devices of a replicated
1274 device on all nodes. This command is called after the lower-level
1275 devices on all nodes have been grown to adjust the size of the
1276 replicated device. Available options:
1277
1278 --assume-peer-has-space
1279 Resize the device even if some of the peer devices are not
1280 connected at the moment. DRBD will try to resize the peer
1281 devices when they next connect. It will refuse to connect to a
1282 peer device which is too small.
1283
1284 --assume-clean
1285 Do not resynchronize the added disk space; instead, assume that
1286 it is identical on all nodes. This option can be used when the
1287 disk space is uninitialized and differences do not matter, or
1288 when it is known to be identical on all nodes. See the
1289 drbdsetup verify command.
1290
1291 --size val
1292 This option can be used to online shrink the usable size of a
1293           drbd device. It is the user's responsibility to make sure that a
1294 file system on the device is not truncated by that operation.
1295
1296       --al-stripes val --al-stripe-size val
1297           These options may be used to change the layout of the activity
1298           log online. In case of internal meta data this may involve
1299           shrinking the user-visible size at the same time (using the
1300           --size option) or increasing the available space on the backing
1301           devices.
1302
1303
1304 drbdsetup resume-io minor
1305 Resume I/O on a replicated device. See the --fencing net option.
1306
1307 drbdsetup resume-sync resource peer_node_id volume
1308 Allow resynchronization to resume by clearing the local sync pause
1309 flag.
1310
1311 drbdsetup role resource
1312 Show the current role of a resource.
1313
1314 drbdsetup secondary resource
1315 Change the role of a node in a resource to secondary. This command
1316 fails if the replicated device is in use.
1317
1318 drbdsetup show {resource | all}
1319 Show the current configuration of a resource, or of all resources.
1320 Available options:
1321
1322 --show-defaults
1323 Show all configuration parameters, even the ones with default
1324 values. Normally, parameters with default values are not shown.
1325
1326
1327 drbdsetup show-gi resource peer_node_id volume
1328 Show the data generation identifiers for a device on a particular
1329 connection. In addition, explain the output. The output otherwise
1330 is the same as in the drbdsetup get-gi command.
1331
1332 drbdsetup state
1333 This is an alias for drbdsetup role. Deprecated.
1334
1335 drbdsetup status {resource | all}
1336 Show the status of a resource, or of all resources. The output
1337 consists of one paragraph for each configured resource. Each
1338 paragraph contains one line for each resource, followed by one line
1339 for each device, and one line for each connection. The device and
1340 connection lines are indented. The connection lines are followed by
1341 one line for each peer device; these lines are indented against the
1342 connection line.
1343
1344 Long lines are wrapped around at terminal width, and indented to
1345       indicate how the lines belong together. Available options:
1346
1347 --verbose
1348 Include more information in the output even when it is likely
1349 redundant or irrelevant.
1350
1351 --statistics
1352 Include data transfer statistics in the output.
1353
1354 --color={always | auto | never}
1355 Colorize the output. With --color=auto, drbdsetup emits color
1356 codes only when standard output is connected to a terminal.
1357
1358 For example, the non-verbose output for a resource with only one
1359 connection and only one volume could look like this:
1360
1361 drbd0 role:Primary
1362 disk:UpToDate
1363 host2.example.com role:Secondary
1364 disk:UpToDate
1365
1366
1367 With the --verbose option, the same resource could be reported as:
1368
1369 drbd0 node-id:1 role:Primary suspended:no
1370 volume:0 minor:1 disk:UpToDate blocked:no
1371 host2.example.com local:ipv4:192.168.123.4:7788
1372 peer:ipv4:192.168.123.2:7788 node-id:0 connection:WFReportParams
1373 role:Secondary congested:no
1374 volume:0 replication:Connected disk:UpToDate resync-suspended:no
1375
1376
1377
1378 drbdsetup suspend-io minor
1379 Suspend I/O on a replicated device. It is not usually necessary to
1380 use this command.
1381
1382 drbdsetup verify resource peer_node_id volume
1383 Start online verification, change which part of the device will be
1384 verified, or stop online verification. The command requires the
1385 specified peer to be connected.
1386
1387 Online verification compares each disk block on the local and peer
1388 node. Blocks which differ between the nodes are marked as
1389 out-of-sync, but they are not automatically brought back into sync.
1390 To bring them into sync, the drbdsetup invalidate or drbdsetup
1391 invalidate-remote with the --reset-bitmap=no option can be used.
1392 Progress can be monitored in the output of drbdsetup status
1393 --statistics. Available options:
1394
1395 --start position
1396 Define where online verification should start. This parameter
1397 is ignored if online verification is already in progress. If
1398 the start parameter is not specified, online verification will
1399 continue where it was interrupted (if the connection to the
1400 peer was lost while verifying), after the previous stop sector
1401 (if the previous online verification has finished), or at the
1402 beginning of the device (if the end of the device was reached,
1403 or online verify has not run before).
1404
1405 The position on disk is specified in disk sectors (512 bytes)
1406 by default.
1407
1408 --stop position
1409 Define where online verification should stop. If online
1410 verification is already in progress, the stop position of the
1411 active online verification process is changed. Use this to stop
1412 online verification.
1413
1414 The position on disk is specified in disk sectors (512 bytes)
1415 by default.
1416
1417 Also see the notes on data integrity in the drbd.conf(5) manual
1418 page.
1419
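       Since the --start and --stop positions are specified in 512-byte
       sectors by default, byte offsets must be converted. A small
       conversion sketch (a hypothetical helper, not part of drbdsetup):

```python
# Illustrative unit conversion for the --start/--stop positions, which
# are given in 512-byte disk sectors by default. Hypothetical helper,
# not part of drbdsetup.
SECTOR_SIZE = 512

def bytes_to_sectors(n_bytes):
    """Convert a byte offset to whole 512-byte sectors (must be aligned)."""
    assert n_bytes % SECTOR_SIZE == 0, "offset not sector-aligned"
    return n_bytes // SECTOR_SIZE

# Verify only the first 100 MiB of the device:
stop = bytes_to_sectors(100 * 1024 * 1024)
print(stop)  # 204800
```

       Passing the resulting value as --stop would then limit verification
       to the first 100 MiB of the device.
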
1420 drbdsetup wait-connect-volume resource peer_node_id volume,
1421 drbdsetup wait-connect-connection resource peer_node_id,
1422 drbdsetup wait-connect-resource resource,
1423 drbdsetup wait-sync-volume resource peer_node_id volume,
1424 drbdsetup wait-sync-connection resource peer_node_id,
1425 drbdsetup wait-sync-resource resource
1426       The wait-connect-* commands wait until a device on a peer is
1427       visible. The wait-sync-* commands wait until a device on a peer is
1428 up to date. Available options for both commands:
1429
1430 --degr-wfc-timeout timeout
1431 Define how long to wait until all peers are connected in case
1432 the cluster consisted of a single node only when the system
1433 went down. This parameter is usually set to a value smaller
1434 than wfc-timeout. The assumption here is that peers which were
1435 unreachable before a reboot are less likely to be reachable
1436 after the reboot, so waiting is less likely to help.
1437
1438 The timeout is specified in seconds. The default value is 0,
1439 which stands for an infinite timeout. Also see the wfc-timeout
1440 parameter.
1441
1442 --outdated-wfc-timeout timeout
1443 Define how long to wait until all peers are connected if all
1444 peers were outdated when the system went down. This parameter
1445 is usually set to a value smaller than wfc-timeout. The
1446 assumption here is that an outdated peer cannot have become
1447 primary in the meantime, so we don't need to wait for it as
1448 long as for a node which was alive before.
1449
1450 The timeout is specified in seconds. The default value is 0,
1451 which stands for an infinite timeout. Also see the wfc-timeout
1452 parameter.
1453
1454 --wait-after-sb
1455 This parameter causes DRBD to continue waiting in the init
1456 script even when a split-brain situation has been detected, and
1457 the nodes therefore refuse to connect to each other.
1458
1459 --wfc-timeout timeout
1460 Define how long the init script waits until all peers are
1461 connected. This can be useful in combination with a cluster
1462 manager which cannot manage DRBD resources: when the cluster
1463 manager starts, the DRBD resources will already be up and
1464 running. With a more capable cluster manager such as Pacemaker,
1465 it makes more sense to let the cluster manager control DRBD
1466 resources. The timeout is specified in seconds. The default
1467 value is 0, which stands for an infinite timeout. Also see the
1468 degr-wfc-timeout parameter.
1469
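
As a minimal init-script sketch combining these timeouts (the resource name r0 and the timeout values are placeholder assumptions; the command line is only printed here, since running it requires the DRBD kernel module):

```shell
# Wait at most 300 s for peers normally, but only 60 s if the node
# went down while it was the sole cluster member. Placeholder resource: r0.
WFC_TIMEOUT=300
DEGR_WFC_TIMEOUT=60
cmd="drbdsetup wait-connect-resource r0 --wfc-timeout=$WFC_TIMEOUT --degr-wfc-timeout=$DEGR_WFC_TIMEOUT"
echo "$cmd"
# eval "$cmd"   # run on a node with the DRBD module loaded
```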
1470
1471 drbdsetup forget-peer resource peer_node_id
1472      The forget-peer command removes all traces of a peer node from the
1473      meta-data. It frees a bitmap slot in the meta-data and makes it
1474      available for further bitmap slot allocation in case a previously
1475      unseen node connects.
1476
1477      The connection must be taken down before this command may be used.
1478      If the peer reconnects at a later point, a bitmap-based resync will
1479      be turned into a full sync.
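
A sketch of the sequence (resource r0 and peer node id 1 are placeholders; the commands are only printed here, since executing them requires the DRBD kernel module, and using disconnect to take the connection down is an assumption):

```shell
# Placeholders: resource r0, peer node id 1.
res=r0
peer=1
down="drbdsetup disconnect $res $peer"     # take the connection down first
forget="drbdsetup forget-peer $res $peer"  # then free the peer's bitmap slot
echo "$down"
echo "$forget"
```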
1480
1481 drbdsetup rename-resource resource new_name
1482 Change the name of resource to new_name on the local node. Note
1483 that, since there is no concept of resource names in DRBD's network
1484 protocol, it is technically possible to have different names for a
1485 resource on different nodes. However, it is strongly recommended to
1486 issue the same rename-resource command on all nodes to have
1487 consistent naming across the cluster.
1488
1489 A rename event will be issued on the events2 stream to notify users
1490 of the new name.
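
A sketch of renaming consistently across a cluster (the resource name r0, the new name web, and the node list are all placeholder assumptions; the ssh command lines are only printed, not executed):

```shell
# Issue the same rename on every node so the name stays consistent
# cluster-wide. Placeholders: nodes alpha/bravo/charlie, r0 -> web.
last=""
for node in alpha bravo charlie; do
    cmd="ssh root@$node drbdsetup rename-resource r0 web"
    echo "$cmd"
    last=$cmd
done
```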
1491
1492EXAMPLES
1493 Please see the DRBD User's Guide[1] for examples.
1494
1495VERSION
1496 This document was revised for version 9.0.0 of the DRBD distribution.
1497
1498AUTHOR
1499 Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
1500 Ellenberg <lars.ellenberg@linbit.com>.
1501
1502REPORTING BUGS
1503 Report bugs to <drbd-user@lists.linbit.com>.
1504
1505COPYRIGHT
1506 Copyright 2001-2018 LINBIT Information Technologies, Philipp Reisner,
1507 Lars Ellenberg. This is free software; see the source for copying
1508 conditions. There is NO warranty; not even for MERCHANTABILITY or
1509 FITNESS FOR A PARTICULAR PURPOSE.
1510
1511SEE ALSO
1512 drbd.conf(5), drbd(8), drbdadm(8), DRBD User's Guide[1], DRBD Web
1513 Site[2]
1514
1515NOTES
1516 1. DRBD User's Guide
1517 http://www.drbd.org/users-guide/
1518
1519 2. DRBD Web Site
1520 http://www.drbd.org/
1521
1522
1523
1524DRBD 9.0.x 17 January 2018 DRBDSETUP(8)