RBD-NBD(8)                               Ceph                               RBD-NBD(8)
NAME
       rbd-nbd - map rbd images to nbd device

SYNOPSIS
       rbd-nbd [-c conf] [--read-only] [--device nbd device] [--nbds_max limit]
               [--max_part limit] [--exclusive] [--notrim]
               [--encryption-format format]
               [--encryption-passphrase-file passphrase-file]
               [--io-timeout seconds] [--reattach-timeout seconds]
               map image-spec | snap-spec

       rbd-nbd unmap nbd device | image-spec | snap-spec

       rbd-nbd list-mapped

       rbd-nbd attach --device nbd device image-spec | snap-spec

       rbd-nbd detach nbd device | image-spec | snap-spec

DESCRIPTION
       rbd-nbd is a client for RADOS block device (rbd) images, similar to
       the rbd kernel module. It maps an rbd image to an nbd (Network Block
       Device) device, allowing it to be accessed as a regular local block
       device.
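
       For example, mapping an image and later unmapping the resulting
       device (mypool/myimage is a placeholder image-spec; map prints the
       allocated device path, which may differ):

          rbd-nbd map mypool/myimage
          rbd-nbd unmap /dev/nbd0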

OPTIONS
       -c ceph.conf
              Use ceph.conf configuration file instead of the default
              /etc/ceph/ceph.conf to determine monitor addresses during
              startup.

       --read-only
              Map read-only.

       --nbds_max *limit*
              Override the nbds_max parameter of the NBD kernel module when
              loading it with modprobe; used to limit the number of nbd
              devices.

       --max_part *limit*
              Override for the module parameter max_part.

       --exclusive
              Forbid writes by other clients.

       --notrim
              Turn off trim/discard.

       --encryption-format *format*
              Image encryption format. Possible values: luks1, luks2

       --encryption-passphrase-file *passphrase-file*
              Path of a file containing the passphrase for unlocking image
              encryption.

       --io-timeout *seconds*
              Override the device timeout. The Linux kernel defaults to a
              30-second request timeout; this option lets the user specify
              an alternate timeout.

       --reattach-timeout *seconds*
              Specify how long the kernel waits for a new rbd-nbd process to
              attach after the old process has detached. The default is 30
              seconds.

IMAGE AND SNAP SPECS
       image-spec is [pool-name]/image-name

       snap-spec  is [pool-name]/image-name@snap-name

       The default for pool-name is "rbd". If an image name contains a slash
       character ('/'), pool-name is required.
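
       For example (myimage, mypool, and mysnap are placeholder names):

          rbd-nbd map myimage                  # image in the default rbd pool
          rbd-nbd map mypool/myimage@mysnap    # snapshot in an explicit pool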

AVAILABILITY
       rbd-nbd is part of Ceph, a massively scalable, open-source, distributed
       storage system. Please refer to the Ceph documentation at
       https://docs.ceph.com/ for more information.

SEE ALSO
       rbd(8)

COPYRIGHT
       2010-2022, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev                                 Oct 18, 2022                            RBD-NBD(8)