DRBDSETUP(8)                 System Administration                DRBDSETUP(8)

NAME

       drbdsetup - Setup tool for DRBD

SYNOPSIS

       drbdsetup {device} disk {lower_dev} {meta_data_dev} {meta_data_index}
                 [-d {size}] [-e {err_handler}] [-f {fencing_policy}] [-b]
                 [-t {disk_timeout}]

       drbdsetup {device} net [af:] {local_addr} [:port] [af:] {remote_addr}
                 [:port] {protocol} [-c {time}] [-i {time}] [-t {val}]
                 [-S {size}] [-r {size}] [-k {count}] [-e {max_epoch_size}]
                 [-b {max_buffers}] [-m] [-a {hash_alg}] [-x {shared_secret}]
                 [-A {asb-0p-policy}] [-B {asb-1p-policy}]
                 [-C {asb-2p-policy}] [-D] [-R {role-resync-conflict-policy}]
                 [-p {ping_timeout}] [-u {val}] [-d {hash_alg}] [-o] [-n]
                 [-g {congestion_policy}] [-f {val}] [-h {val}]

       drbdsetup {device} syncer [-a {dev_minor}] [-r {rate}] [-e {extents}]
                 [-v {verify-hash-alg}] [-c {cpu-mask}] [-C {csums-hash-alg}]
                 [-R] [-p {plan_time}] [-s {fill_target}] [-d {delay_target}]
                 [-m {max_rate}] [-n {ond-policy}]

       drbdsetup {device} disconnect

       drbdsetup {device} detach [-f]

       drbdsetup {device} down

       drbdsetup {device} primary [-f] [-o]

       drbdsetup {device} secondary

       drbdsetup {device} verify [-s {start-position}] [-S {stop-position}]

       drbdsetup {device} invalidate

       drbdsetup {device} invalidate-remote

       drbdsetup {device} wait-connect [-t {wfc_timeout}]
                 [-d {degr_wfc_timeout}] [-o {outdated_wfc_timeout}] [-w]

       drbdsetup {device} wait-sync [-t {wfc_timeout}] [-d {degr_wfc_timeout}]
                 [-o {outdated_wfc_timeout}] [-w]

       drbdsetup {device} role

       drbdsetup {device} cstate

       drbdsetup {device} dstate

       drbdsetup {device} status

       drbdsetup {device} resize [-d {size}] [-f {assume-peer-has-space}]
                 [-c {assume-clean}]

       drbdsetup {device} check-resize

       drbdsetup {device} pause-sync

       drbdsetup {device} resume-sync

       drbdsetup {device} outdate

       drbdsetup {device} show-gi

       drbdsetup {device} get-gi

       drbdsetup {device} show

       drbdsetup {device} suspend-io

       drbdsetup {device} resume-io

       drbdsetup {device} events [-u] [-a]

       drbdsetup {device} new-current-uuid [-c]

DESCRIPTION

       drbdsetup is used to associate DRBD devices with their backing block
       devices, to set up DRBD device pairs to mirror their backing block
       devices, and to inspect the configuration of running DRBD devices.

NOTE

       drbdsetup is a low level tool of the DRBD program suite. It is used by
       the drbddisk and drbd scripts to communicate with the device driver.

COMMANDS

       Each drbdsetup sub-command might require arguments and bring its own
       set of options. All values have default units which might be overruled
       by K, M or G. These units are defined in the usual way (e.g. K = 2^10 =
       1024).

   Common options
       All drbdsetup sub-commands accept these two options:

       --create-device
           In case the specified DRBD device (minor number) does not exist
           yet, create it implicitly.

       --set-defaults
           When --set-defaults is given on the command line, all options of
           the invoked sub-command that are not explicitly set are reset to
           their default values.

   disk
       Associates device with lower_device to store its data blocks on. The -d
       (or --disk-size) option should only be used if you do not wish to use
       as much space as possible from the backing block device. If you do not
       use -d, the device is only ready for use as soon as it has been
       connected to its peer once. (See the net command.)

       -d, --disk-size size
           You can override DRBD's size determination method with this option.
           If you need to use the device before it was ever connected to its
           peer, use this option to pass the size of the DRBD device to the
           driver. Default unit is sectors (1s = 512 bytes).

           If you use the size parameter in drbd.conf, we strongly recommend
           adding an explicit unit postfix. drbdadm and drbdsetup used to have
           mismatching default units.

       -e, --on-io-error err_handler
           If the driver of the lower_device reports an error to DRBD, DRBD
           will mark the disk as inconsistent, call a helper program, or
           detach the device from its backing storage and perform all further
           IO by requesting it from the peer. The valid err_handlers are:
           pass_on, call-local-io-error and detach.

       -f, --fencing fencing_policy
           Fencing refers to preventive measures taken to avoid situations
           where both nodes are primary and disconnected (also known as split
           brain).

           Valid fencing policies are:

           dont-care
               This is the default policy. No fencing actions are taken.

           resource-only
               If a node becomes a disconnected primary, it tries to outdate
               the peer's disk. This is done by calling the fence-peer
               handler. The handler is supposed to reach the other node over
               alternative communication paths and call 'drbdadm outdate res'
               there.

           resource-and-stonith
               If a node becomes a disconnected primary, it freezes all its IO
               operations and calls its fence-peer handler. The fence-peer
               handler is supposed to reach the peer over alternative
               communication paths and call 'drbdadm outdate res' there. In
               case it cannot reach the peer, it should stonith the peer. IO
               is resumed as soon as the situation is resolved. In case your
               handler fails, you can resume IO with the resume-io command.

       -b, --use-bmbv
           In case the backing storage's driver has a merge_bvec_fn()
           function, DRBD has to pretend that it can only process IO requests
           in units not larger than 4 KiB. (At the time of writing the only
           known drivers which have such a function are: md (software raid
           driver), dm (device mapper - LVM) and DRBD itself.)

           To get the best performance out of DRBD on top of software raid (or
           any other driver with a merge_bvec_fn() function) you might enable
           this option, if you know for sure that the merge_bvec_fn() function
           will deliver the same results on all nodes of your cluster, i.e.
           the physical disks of the software raid are exactly of the same
           type. USE THIS OPTION ONLY IF YOU KNOW WHAT YOU ARE DOING.

       -a, --no-disk-barrier, -i, --no-disk-flushes, -D, --no-disk-drain
           DRBD has four implementations to express write-after-write
           dependencies to its backing storage device. DRBD will use the first
           method that is supported by the backing storage device and that is
           not disabled by the user.

           When selecting the method you should not base your decision solely
           on the measurable performance. In case your backing storage device
           has a volatile write cache (plain disks, RAID of plain disks) you
           should use one of the first two. In case your backing storage
           device has a battery-backed write cache you may go with option 3.
           Option 4 (disable everything, use "none") is dangerous on most IO
           stacks, may result in write-reordering, and if so, can
           theoretically be the reason for data corruption, or disturb the
           DRBD protocol, causing spurious disconnect/reconnect cycles. Do
           not use no-disk-drain.

           Unfortunately device mapper (LVM) might not support barriers.

           The letter after "wo:" in /proc/drbd indicates which method is
           currently in use for a device: b, f, d, n. The implementations:

           barrier
               The first requires that the driver of the backing storage
               device support barriers (called 'tagged command queuing' in
               SCSI and 'native command queuing' in SATA speak). The use of
               this method can be disabled by the --no-disk-barrier option.
               Note: Since Linux-2.6.36 (or RHEL's 2.6.32) this method is
               disabled.

           flush
               The second requires that the backing device support disk
               flushes (called 'force unit access' in the drive vendors
               speak). The use of this method can be disabled using the
               --no-disk-flushes option.

           drain
               The third method is simply to let write requests drain before
               write requests of a new reordering domain are issued. That was
               the only implementation before 8.0.9.

           none
               The fourth method is to not express write-after-write
               dependencies to the backing store at all, by also specifying
               --no-disk-drain. This is dangerous on most IO stacks, may
               result in write-reordering, and if so, can theoretically be the
               reason for data corruption, or disturb the DRBD protocol,
               causing spurious disconnect/reconnect cycles. Do not use
               --no-disk-drain.

       -m, --no-md-flushes
           Disables the use of disk flushes and barrier BIOs when accessing
           the meta data device. See the notes on --no-disk-flushes.

       -s, --max-bio-bvecs
           In some special circumstances the device mapper stack manages to
           pass BIOs to DRBD that violate the constraints that are set forth
           by DRBD's merge_bvec() function and which have more than one bvec.
           A known example is: phys-disk -> DRBD -> LVM -> Xen -> misaligned
           partition (63) -> DomU FS. Then you might see "bio would need to,
           but cannot, be split:" in the Dom0's kernel log.

           The best workaround is to properly align the partition within the
           VM (e.g. start it at sector 1024). That costs 480 KiB of storage.
           Unfortunately the default of most Linux partitioning tools is to
           start the first partition at an odd number (63), so the
           installation helpers of most distributions will create virtual
           Linux machines with misaligned partitions. The second best
           workaround is to limit DRBD's max bvecs per BIO (i.e., the
           max-bio-bvecs option) to 1, but that might cost performance.

           The default value of max-bio-bvecs is 0, which means that there is
           no user imposed limitation.

       -t, --disk-timeout disk_timeout
           If the driver of the lower_device does not finish an IO request
           within disk_timeout, DRBD considers the disk as failed. If DRBD is
           connected to a remote host, it will reissue local pending IO
           requests to the peer, and ship all new IO requests to the peer
           only. The disk state advances to diskless, as soon as the backing
           block device has finished all IO requests.

           The default value of disk-timeout is 0, which means that no timeout
           is enforced. The default unit is 100ms. This option is available
           since 8.3.12.

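       The following sketch attaches a backing device with an external meta
       data device at index 0, detaching on IO errors and using the
       resource-only fencing policy. The device and partition names are
       placeholders, not a recommendation:

           drbdsetup /dev/drbd0 disk /dev/sdb1 /dev/sdc1 0 -e detach \
                     -f resource-only
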
   net
       Sets up the device to listen on af:local_addr:port for incoming
       connections and to try to connect to af:remote_addr:port. If port is
       omitted, 7788 is used as default. If af is omitted, ipv4 is used.
       Other supported address families are ipv6, ssocks for Dolphin
       Interconnect Solutions' "super sockets" and sdp for Sockets Direct
       Protocol (Infiniband).

       On the TCP/IP link the specified protocol is used. Valid protocol
       specifiers are A, B, and C.

       Protocol A: write IO is reported as completed as soon as it has reached
       the local disk and the local TCP send buffer.

       Protocol B: write IO is reported as completed as soon as it has reached
       the local disk and the remote buffer cache.

       Protocol C: write IO is reported as completed as soon as it has reached
       both the local and the remote disk.

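       For example, a device pair replicating with protocol C on the default
       port could be configured as shown below. The addresses and the device
       name are placeholders:

           drbdsetup /dev/drbd0 net ipv4:10.0.0.1:7788 ipv4:10.0.0.2:7788 C
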
       -c, --connect-int time
           In case it is not possible to connect to the remote DRBD device
           immediately, DRBD keeps on trying to connect. With this option you
           can set the time between two retries. The default value is 10
           seconds, the unit is 1 second.

       -i, --ping-int time
           If the TCP/IP connection linking a DRBD device pair is idle for
           more than time seconds, DRBD will generate a keep-alive packet to
           check if its partner is still alive. The default value is 10
           seconds, the unit is 1 second.

       -t, --timeout val
           If the partner node fails to send an expected response packet
           within val tenths of a second, the partner node is considered dead
           and therefore the TCP/IP connection is abandoned. The default value
           is 60 (= 6 seconds).

       -S, --sndbuf-size size
           The socket send buffer is used to store packets sent to the
           secondary node, which are not yet acknowledged (from a network
           point of view) by the secondary node. When using protocol A, it
           might be necessary to increase the size of this data structure in
           order to increase asynchronicity between primary and secondary
           nodes. But keep in mind that more asynchronicity is synonymous with
           more data loss in the case of a primary node failure. Since 8.0.13
           and 8.2.7, respectively, setting the size value to 0 means that the
           kernel should autotune this. The default size is 0, i.e. autotune.

       -r, --rcvbuf-size size
           Packets received from the network are stored in the socket receive
           buffer first. From there they are consumed by DRBD. Before 8.3.2
           the receive buffer's size was always set to the size of the socket
           send buffer. Since 8.3.2 they can be tuned independently. A value
           of 0 means that the kernel should autotune this. The default size
           is 0, i.e. autotune.

       -k, --ko-count count
           In case the secondary node fails to complete a single write request
           for count times the timeout, it is expelled from the cluster, i.e.
           the primary node goes into StandAlone mode. To disable this
           feature, explicitly set it to 0; defaults may change between
           versions.

       -e, --max-epoch-size val
           With this option the maximal number of write requests between two
           barriers is limited. Typically set to the same as --max-buffers, or
           the allowed maximum. Values smaller than 10 can lead to degraded
           performance. The default value is 2048.

       -b, --max-buffers val
           With this option the maximal number of buffer pages allocated by
           DRBD's receiver thread is limited. Typically set to the same as
           --max-epoch-size. Small values could lead to degraded performance.
           The default value is 2048, the minimum 32. Increase this if you
           cannot saturate the IO backend of the receiving side during linear
           write or during resync while otherwise idle.

           See also drbd.conf(5).

       -u, --unplug-watermark val
           This setting has no effect with recent kernels that use explicit
           on-stack plugging (upstream Linux kernel 2.6.39; distributions may
           have backported it).

           When the number of pending write requests on the standby
           (secondary) node exceeds the unplug-watermark, we trigger the
           request processing of our backing storage device. Some storage
           controllers deliver better performance with small values, others
           deliver best performance when the value is set to the same value as
           max-buffers, yet others don't feel much effect at all. Minimum 16,
           default 128, maximum 131072.

       -m, --allow-two-primaries
           With this option set you may assign the primary role to both nodes.
           You should only use this option if you use a shared storage file
           system on top of DRBD. At the time of writing the only ones are
           OCFS2 and GFS. If you use this option with any other file system,
           you are going to crash your nodes and to corrupt your data!

       -a, --cram-hmac-alg alg
           You need to specify the HMAC algorithm to enable peer
           authentication at all. You are strongly encouraged to use peer
           authentication. The HMAC algorithm will be used for the challenge
           response authentication of the peer. You may specify any digest
           algorithm that is named in /proc/crypto.

       -x, --shared-secret secret
           The shared secret used in peer authentication. May be up to 64
           characters.

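           For instance, peer authentication could be enabled by adding both
           options to the net command. The secret and the algorithm below are
           merely illustrative:

               drbdsetup /dev/drbd0 net 10.0.0.1:7788 10.0.0.2:7788 C \
                         -a sha1 -x mysecret
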
       -A, --after-sb-0pri asb-0p-policy
           Possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           discard-younger-primary
               Auto sync from the node that was primary before the split-brain
               situation occurred.

           discard-older-primary
               Auto sync from the node that became primary second during the
               split-brain situation.

           discard-zero-changes
               In case one node did not write anything since the split brain
               became evident, sync from the node that wrote something to the
               node that did not write anything. In case neither wrote
               anything, this policy uses a random decision to perform a
               "resync" of 0 blocks. In case both have written something, this
               policy disconnects the nodes.

           discard-least-changes
               Auto sync from the node that touched more blocks during the
               split brain situation.

           discard-node-NODENAME
               Auto sync to the named node.

       -B, --after-sb-1pri asb-1p-policy
           Possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           consensus
               Discard the version of the secondary if the outcome of the
               after-sb-0pri algorithm would also destroy the current
               secondary's data. Otherwise disconnect.

           discard-secondary
               Discard the secondary's version.

           call-pri-lost-after-sb
               Always honor the outcome of the after-sb-0pri algorithm. In
               case it decides the current secondary has the correct data,
               call the pri-lost-after-sb handler on the current primary.

           violently-as0p
               Always honor the outcome of the after-sb-0pri algorithm. In
               case it decides the current secondary has the correct data,
               accept a possible instantaneous change of the primary's data.

       -C, --after-sb-2pri asb-2p-policy
           Possible policies are:

           disconnect
               No automatic resynchronization, simply disconnect.

           call-pri-lost-after-sb
               Always honor the outcome of the after-sb-0pri algorithm. In
               case it decides the current secondary has the right data, call
               the pri-lost-after-sb handler on the current primary.

           violently-as0p
               Always honor the outcome of the after-sb-0pri algorithm. In
               case it decides the current secondary has the right data,
               accept a possible instantaneous change of the primary's data.

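           As a purely illustrative combination, automatic recovery could be
           limited to the safe cases while still disconnecting whenever data
           would have to be thrown away:

               drbdsetup /dev/drbd0 net 10.0.0.1:7788 10.0.0.2:7788 C \
                         -A discard-zero-changes -B disconnect -C disconnect
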
       -P, --always-asbp
           Normally the automatic after-split-brain policies are only used if
           the current states of the UUIDs do not indicate the presence of a
           third node.

           With this option you request that the automatic after-split-brain
           policies are used as long as the data sets of the nodes are somehow
           related. This might cause a full sync, if the UUIDs indicate the
           presence of a third node. (Or double faults have led to strange
           UUID sets.)

       -R, --rr-conflict role-resync-conflict-policy
           This option sets DRBD's behavior when DRBD deduces from its meta
           data that a resynchronization is needed, and the SyncTarget node is
           already primary. The possible settings are: disconnect,
           call-pri-lost and violently. While disconnect speaks for itself,
           with the call-pri-lost setting the pri-lost handler is called which
           is expected to either change the role of the node to secondary, or
           remove the node from the cluster. The default is disconnect.

           With the violently setting you allow DRBD to force a primary node
           into SyncTarget state. This means that the data exposed by DRBD
           changes to the SyncSource's version of the data instantaneously.
           USE THIS OPTION ONLY IF YOU KNOW WHAT YOU ARE DOING.

       -d, --data-integrity-alg hash_alg
           DRBD can ensure the data integrity of the user's data on the
           network by comparing hash values. Normally this is ensured by the
           16 bit checksums in the headers of TCP/IP packets. This option can
           be set to any of the kernel's data digest algorithms. In a typical
           kernel configuration you should have at least one of md5, sha1, and
           crc32c available. By default this is not enabled.

           See also the notes on data integrity on the drbd.conf manpage.

       -o, --no-tcp-cork
           DRBD usually uses the TCP socket option TCP_CORK to hint to the
           network stack when it can expect more data, and when it should
           flush out what it has in its send queue. There is at least one
           network stack that performs worse when one uses this hinting
           method. Therefore we introduced this option, which disables the
           setting and clearing of the TCP_CORK socket option by DRBD.

       -p, --ping-timeout ping_timeout
           The time the peer has to answer a keep-alive packet. In case the
           peer's reply is not received within this time period, it is
           considered dead. The default unit is tenths of a second, the
           default value is 5 (for half a second).

       -D, --discard-my-data
           Use this option to manually recover from a split-brain situation.
           In case you do not have any automatic after-split-brain policies
           selected, the nodes refuse to connect. By passing this option you
           make this node a sync target immediately after a successful
           connect.

       -n, --dry-run
           Causes DRBD to abort the connection process after the resync
           handshake, i.e. no resync gets performed. You can find out which
           resync DRBD would perform by looking at the kernel's log file.

       -g, --on-congestion congestion_policy, -f, --congestion-fill
       fill_threshold, -h, --congestion-extents active_extents_threshold
           By default DRBD blocks when the available TCP send queue becomes
           full. That means it will slow down the application that generates
           the write requests that cause DRBD to send more data down that TCP
           connection.

           When DRBD is deployed with DRBD-proxy it might be more desirable
           that DRBD goes into AHEAD/BEHIND mode shortly before the send queue
           becomes full. In AHEAD/BEHIND mode DRBD no longer replicates data,
           but still keeps the connection open.

           The advantage of the AHEAD/BEHIND mode is that the application is
           not slowed down, even if DRBD-proxy's buffer is not sufficient to
           buffer all write requests. The downside is that the peer node falls
           behind, and that a resync will be necessary to bring it back into
           sync. During that resync the peer node will have an inconsistent
           disk.

           Available congestion policies are block and pull-ahead. The default
           is block.  Fill_threshold may be in the range of 0 to 10 GiB. The
           default is 0, which disables the check.
           Active_extents_threshold has the same limits as al-extents.

           The AHEAD/BEHIND mode and its settings are available since DRBD
           8.3.10.

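           A sketch for a DRBD-proxy deployment (the threshold is an arbitrary
           example) that switches to AHEAD/BEHIND mode once roughly 1G of data
           is queued:

               drbdsetup /dev/drbd0 net 10.0.0.1:7788 10.0.0.2:7788 A \
                         -g pull-ahead -f 1G
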
   syncer
       Changes the synchronization daemon parameters of device at runtime.

       -r, --rate rate
           To ensure smooth operation of the application on top of DRBD, it is
           possible to limit the bandwidth that may be used by background
           synchronization. The default is 250 KiB/sec, the default unit is
           KiB/sec.

       -a, --after minor
           Start resync on this device only if the device with minor is
           already in connected state. Otherwise this device waits in
           SyncPause state.

       -e, --al-extents extents
           DRBD automatically performs hot area detection. With this parameter
           you control how big the hot area (=active set) can get. Each extent
           marks 4M of the backing storage. In case a primary node leaves the
           cluster unexpectedly, the areas covered by the active set must be
           resynced upon rejoining of the failed node. The data structure is
           stored in the meta-data area, therefore each change of the active
           set is a write operation to the meta-data device. A higher number
           of extents gives longer resync times but fewer updates to the
           meta-data. The default number of extents is 127. (Minimum: 7,
           Maximum: 3843)

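           As a worked example, the default of 127 extents covers 127 * 4 MiB
           = 508 MiB of active data, so at most about half a gigabyte has to
           be resynced after a primary crash; raising the value to 1021
           extents would cover roughly 4 GiB and reduce meta-data updates at
           the cost of a longer post-crash resync.
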
       -v, --verify-alg hash-alg
           During online verification (as initiated by the verify
           sub-command), rather than doing a bit-wise comparison, DRBD applies
           a hash function to the contents of every block being verified, and
           compares that hash with the peer. This option defines the hash
           algorithm being used for that purpose. It can be set to any of the
           kernel's data digest algorithms. In a typical kernel configuration
           you should have at least one of md5, sha1, and crc32c available. By
           default this is not enabled; you must set this option explicitly in
           order to be able to use on-line device verification.

           See also the notes on data integrity on the drbd.conf manpage.

       -c, --cpu-mask cpu-mask
           Sets the cpu-affinity-mask for DRBD's kernel threads of this
           device. The default value of cpu-mask is 0, which means that DRBD's
           kernel threads should be spread over all CPUs of the machine. This
           value must be given in hexadecimal notation. If it is too big it
           will be truncated.

       -C, --csums-alg hash-alg
           A resync process sends all marked data blocks from the source to
           the destination node, as long as no csums-alg is given. When one is
           specified, the resync process exchanges hash values of all marked
           blocks first, and sends over only those data blocks that have
           different hash values.

           This setting is useful for DRBD setups with low bandwidth links.
           During the restart of a crashed primary node, all blocks covered by
           the activity log are marked for resync. But a large part of those
           will actually still be in sync, therefore using csums-alg will
           lower the required bandwidth in exchange for CPU cycles.

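           For illustration (the values and algorithms are examples, not
           recommendations), a bandwidth-limited link might combine a modest
           resync rate with checksum-based resync and online verification:

               drbdsetup /dev/drbd0 syncer -r 10M -v sha1 -C sha1
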
       -R, --use-rle
           During the resync handshake, the dirty bitmaps of the nodes are
           exchanged and merged (using bit-or), so the nodes will have the
           same understanding of which blocks are dirty. On large devices, the
           fine grained dirty bitmap can become large as well, and the bitmap
           exchange can take quite some time on low-bandwidth links.

           Because the bitmap typically contains compact areas where all bits
           are unset (clean) or set (dirty), a simple run-length encoding
           scheme can considerably reduce the network traffic necessary for
           the bitmap exchange.

           For backward compatibility reasons, and because on fast links this
           possibly does not improve transfer time but consumes CPU cycles,
           this defaults to off.

           Introduced in 8.3.2.

       -p, --c-plan-ahead plan_time, -s, --c-fill-target fill_target, -d,
       --c-delay-target delay_target, -M, --c-max-rate max_rate
           The dynamic resync speed controller gets enabled by setting
           plan_time to a positive value. It aims to fill the buffers along
           the data path with either a constant amount of data, fill_target,
           or aims to have a constant delay time of delay_target along the
           path. The controller has an upper bound of max_rate.

           plan_time configures the agility of the controller. Higher values
           yield slower responses of the controller to deviations from the
           target value. It should be at least 5 times the RTT. For regular
           data paths a fill_target in the area of 4k to 100k is appropriate.
           For a setup that contains drbd-proxy it is advisable to use
           delay_target instead. Only when fill_target is set to 0 will the
           controller use delay_target; 5 times the RTT is a reasonable
           starting value.  Max_rate should be set to the bandwidth available
           between the DRBD hosts and the machines hosting DRBD-proxy, or to
           the available disk bandwidth.

           The default value of plan_time is 0, its default unit is 0.1
           seconds.  Fill_target defaults to 0, with sectors as default unit.
           Delay_target defaults to 1 (100ms), with 0.1 seconds as default
           unit.  Max_rate defaults to 10240 (100MiB/s), with KiB/s as default
           unit.

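           A sketch for a drbd-proxy link (all numbers are illustrative, not
           tuning advice): enable the controller with a plan time of 20
           (2 seconds), steer by delay rather than fill level, and cap the
           resync at about 100 MiB/s:

               drbdsetup /dev/drbd0 syncer --c-plan-ahead 20 \
                         --c-fill-target 0 --c-delay-target 10 \
                         --c-max-rate 102400
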
       -m, --c-min-rate min_rate
           We track the disk IO rate caused by the resync, so we can detect
           non-resync IO on the lower level device. If the lower level device
           seems to be busy, and the current resync rate is above min_rate, we
           throttle the resync.

           The default value of min_rate is 4M, the default unit is k. If you
           do not want to throttle at all, set it to zero; if you want to
           throttle always, set it to one.

       -n, --on-no-data-accessible ond-policy
           This setting controls what happens to IO requests on a degraded,
           diskless node (i.e. no data store is reachable). The available
           policies are io-error and suspend-io.

           If ond-policy is set to suspend-io you can either resume IO by
           attaching/connecting the last lost data storage, or by the drbdadm
           resume-io res command. The latter will result in IO errors of
           course.

           The default is io-error. This setting is available since DRBD
           8.3.9.

   primary
       Sets the device into primary role. This means that applications (e.g. a
       file system) may open the device for read and write access. Data
       written to the device in primary role is mirrored to the device in
       secondary role.

       Normally it is not possible to set both devices of a connected DRBD
       device pair to primary role. By using the --allow-two-primaries option,
       you override this behavior and instruct DRBD to allow two primaries.

       -o, --overwrite-data-of-peer
           Alias for --force.

       -f, --force
           Becoming primary fails if the local replica is not up-to-date, i.e.
           when it is inconsistent, outdated or consistent. By using this
           option you can force it into primary role anyway. USE THIS OPTION
           ONLY IF YOU KNOW WHAT YOU ARE DOING.

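       For example, to promote a node whose peer is known to be lost and whose
       local data, although perhaps only Consistent, is the best copy
       available, the role change can be forced (subject to the warning
       above):

           drbdsetup /dev/drbd0 primary -f
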
   secondary
       Brings the device into secondary role. This operation fails as long as
       at least one application (or file system) has opened the device.

       It is possible that both devices of a connected DRBD device pair are
       secondary.

   verify
       This initiates on-line device verification. During on-line
       verification, the contents of every block on the local node are
       compared to those on the peer node. Device verification progress can be
       monitored via /proc/drbd. Any blocks whose content differs from that of
       the corresponding block on the peer node will be marked out-of-sync in
       DRBD's on-disk bitmap; they are not brought back in sync automatically.
       To do that, simply disconnect and reconnect the resource.

       If on-line verification is already in progress (and this node is
       "VerifyS"), this command silently "succeeds". In this case, any
       start-sector (see below) will be ignored, and any stop-sector (see
       below) will be honored. This can be used to stop a running verify, or
       to update/shorten/extend the coverage of the currently running verify.

       This command will fail if the device is not part of a connected device
       pair.

       See also the notes on data integrity on the drbd.conf manpage.

       -s, --start start-sector
           Since version 8.3.2, on-line verification should resume from the
           last position after connection loss. It may also be started from an
           arbitrary position by setting this option. If you had reached some
           stop-sector before, and you do not specify an explicit
           start-sector, verify should resume from the previous stop-sector.

           Default unit is sectors. You may also specify a unit explicitly.
           The start-sector will be rounded down to a multiple of 8 sectors
           (4kB).

       -S, --stop stop-sector
           Since version 8.3.14, on-line verification can be stopped before it
           reaches end-of-device.

           Default unit is sectors. You may also specify a unit explicitly.
           The stop-sector may be updated by issuing an additional drbdsetup
           verify command on the same node while the verify is running.

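       As an illustration, the following would verify only the first gigabyte
       of a (hypothetical) device, and the second command would shorten the
       run while it is still in progress:

           drbdsetup /dev/drbd0 verify -s 0 -S 1G
           drbdsetup /dev/drbd0 verify -S 512M
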
   invalidate
       This forces the local device of a pair of connected DRBD devices into
       SyncTarget state, which means that all data blocks of the device are
       copied over from the peer.

       This command will fail unless the device is either part of a connected
       device pair or a disconnected Secondary.

   invalidate-remote
       This forces the local device of a pair of connected DRBD devices into
       SyncSource state, which means that all data blocks of the device are
       copied to the peer.

       On a disconnected Primary device, this will set all bits in the out of
       sync bitmap. As a side effect this suspends updates to the on disk
       activity log. Updates to the on disk activity log resume automatically
       when necessary.

   wait-connect
       Returns as soon as the device can communicate with its partner device.

       -t, --wfc-timeout wfc_timeout, -d, --degr-wfc-timeout degr_wfc_timeout,
       -o, --outdated-wfc-timeout outdated_wfc_timeout, -w, --wait-after-sb
           This command will fail if the device cannot communicate with its
           partner for timeout seconds. If the peer was working before this
           node was rebooted, the wfc_timeout is used. If the peer was already
           down before this node was rebooted, the degr_wfc_timeout is used.
           If the peer was successfully outdated before this node was rebooted
           the outdated_wfc_timeout is used. The default value for all those
           timeout values is 0 which means to wait forever. In case the
           connection status goes down to StandAlone because the peer appeared
           but the devices had a split brain situation, the default for the
           command is to terminate. You can change this behavior with the
           --wait-after-sb option.

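       For example, a boot script might wait at most two minutes for the peer,
       and only 30 seconds if the peer was already dead before this node
       rebooted, rather than waiting forever:

           drbdsetup /dev/drbd0 wait-connect -t 120 -d 30
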
   wait-sync
       Returns as soon as the device leaves any synchronization into connected
       state. The options are the same as with the wait-connect command.

   disconnect
       Removes the information set by the net command from the device. This
       means that the device goes into unconnected state and will no longer
       listen for incoming connections.

   detach
       Removes the information set by the disk command from the device. This
       means that the device is detached from its backing storage device.

       -f, --force
           A regular detach returns after the disk state has finally reached
           diskless. As a consequence, detaching from a frozen backing block
           device never terminates.

           A forced detach, on the other hand, returns immediately. It allows
           you to detach DRBD from a frozen backing block device. Please note
           that the disk will be marked as failed until all pending IO
           requests have been finished by the backing block device.

   down
       Removes all configuration information from the device and forces it
       back to unconfigured state.

   role
       Shows the current roles of the device and its peer, as local/peer.

   state
       Deprecated alias for "role".

   cstate
       Shows the current connection state of the device.

   dstate
       Shows the current states of the backing storage devices, as local/peer.

   status
       Shows the current status of the device in XML-like format. Example
       output:

           <resource minor="0" name="s0" cs="SyncTarget" st1="Secondary" st2="Secondary"
                    ds1="Inconsistent" ds2="UpToDate" resynced_percent="5.9" />

   resize
       This causes DRBD to reexamine the size of the device's backing storage
       device. To actually do online growing you need to extend the backing
       storages on both devices and call the resize command on one of your
       nodes.

       The --assume-peer-has-space option allows you to resize a device which
       is currently not connected to the peer. Use with care, since if you do
       not resize the peer's disk as well, further connect attempts of the two
       will fail.

       When the --assume-clean option is given DRBD will skip the resync of
       the new storage. Only do this if you know that the new storage was
       initialized to the same content by other means.

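       A typical online grow, sketched here with hypothetical LVM volumes,
       first extends the backing device on both nodes and then triggers the
       resize on one of them:

           lvextend -L +10G /dev/vg0/r0       # on both nodes
           drbdsetup /dev/drbd0 resize        # on one node
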
   check-resize
       To enable DRBD to detect offline resizing of backing devices, this
       command may be used to record the current size of backing devices. The
       size is stored in files in /var/lib/drbd/ named drbd-minor-??.lkbd.

       This command is called by drbdadm resize res after drbdsetup device
       resize returned.

   pause-sync
       Temporarily suspend an ongoing resynchronization by setting the local
       pause flag. Resync only progresses if neither the local nor the remote
       pause flag is set. It might be desirable to postpone DRBD's
       resynchronization until after any resynchronization of the backing
       storage's RAID setup.

   resume-sync
       Unset the local sync pause flag.

   outdate
       Mark the data on the local backing storage as outdated. An outdated
       device refuses to become primary. This is used in conjunction with
       fencing and by the peer's fence-peer handler.

   show-gi
       Displays the device's data generation identifiers verbosely.

   get-gi
       Displays the device's data generation identifiers.

   show
       Shows all available configuration information of the device.

   suspend-io
       This command is of no apparent use and just provided for the sake of
       completeness.

   resume-io
       If the fence-peer handler fails to stonith the peer node, and your
       fencing policy is set to resource-and-stonith, you can unfreeze IO
       operations with this command.

   events
       Displays every state change of DRBD and all calls to helper programs.
       This might be used to get notified of DRBD's state changes by piping
       the output to another program.

       -a, --all-devices
           Display the events of all DRBD minors.

       -u, --unfiltered
           This is a debugging aid that displays the content of all received
           netlink messages.

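       For example, the event stream can be fed to another program to trigger
       notifications; the logger destination below is only an illustration:

           drbdsetup /dev/drbd0 events -a | logger -t drbd-events
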
   new-current-uuid
       Generates a new current UUID and rotates all other UUID values. This
       has at least two use cases, namely to skip the initial sync, and to
       reduce network bandwidth when starting in a single node configuration
       and then later (re-)integrating a remote site.

       Available option:

       -c, --clear-bitmap
           Clears the sync bitmap in addition to generating a new current
           UUID.

       This can be used to skip the initial sync, if you want to start from
       scratch. This use case only works on "just created" meta data.
       Necessary steps:

        1. On both nodes, initialize meta data and configure the device.

           drbdadm -- --force create-md res

        2. They need to do the initial handshake, so they know their sizes.

           drbdadm up res

        3. They are now Connected Secondary/Secondary
           Inconsistent/Inconsistent. Generate a new current-uuid and clear
           the dirty bitmap.

           drbdadm -- --clear-bitmap new-current-uuid res

        4. They are now Connected Secondary/Secondary UpToDate/UpToDate. Make
           one side primary and create a file system.

           drbdadm primary res

           mkfs -t fs-type $(drbdadm sh-dev res)

       One obvious side-effect is that the replica is full of old garbage
       (unless you made them identical using other means), so any
       online-verify is expected to find any number of out-of-sync blocks.

       You must not use this on pre-existing data!  Even though it may appear
       to work at first glance, once you switch to the other node, your data
       is toast, as it never got replicated. So do not leave out the mkfs (or
       equivalent).

       This can also be used to shorten the initial resync of a cluster where
       the second node is added after the first node has gone into production,
       by means of disk shipping. This use case works on disconnected devices
       only; the device may be in primary or secondary role.

       The necessary steps on the current active server are:

        1. drbdsetup device new-current-uuid --clear-bitmap

        2. Take a copy of the current active server, e.g. by pulling a disk
           out of the RAID1 controller, or by copying with dd (see the sketch
           after this list). You need to copy the actual data, and the meta
           data.

        3. drbdsetup device new-current-uuid

       Now add the disk to the new secondary node, and join it to the cluster.
       You will get a resync of those parts that were changed since the first
       call to drbdsetup in step 1.

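       As a sketch of step 2 above (the device names are invented, and
       internal meta data is assumed, which lives at the end of the backing
       device and is therefore included in a full copy of it), the shipped
       disk could be created with:

           dd if=/dev/sdb1 of=/dev/sdc1 bs=1M
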

EXAMPLES

       For examples, please have a look at the DRBD User's Guide[1].

VERSION

       This document was revised for version 8.3.2 of the DRBD distribution.

AUTHOR

       Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars
       Ellenberg <lars.ellenberg@linbit.com>

REPORTING BUGS

       Report bugs to <drbd-user@lists.linbit.com>.

COPYRIGHT
       Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner,
       Lars Ellenberg. This is free software; see the source for copying
       conditions. There is NO warranty; not even for MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.

SEE ALSO

       drbd.conf(5), drbd(8), drbddisk(8), drbdadm(8), DRBD User's Guide[1],
       DRBD web site[2]

NOTES

        1. DRBD User's Guide
           http://www.drbd.org/users-guide/

        2. DRBD web site
           http://www.drbd.org/


DRBD 8.3.2                        5 Dec 2008                      DRBDSETUP(8)