CEPH-DEPLOY(8)                       Ceph                       CEPH-DEPLOY(8)

NAME

       ceph-deploy - Ceph deployment tool

SYNOPSIS

       ceph-deploy new [initial-monitor-node(s)]

       ceph-deploy install [ceph-node] [ceph-node...]

       ceph-deploy mon create-initial

       ceph-deploy osd create --data device ceph-node

       ceph-deploy admin [admin-node] [ceph-node...]

       ceph-deploy purgedata [ceph-node] [ceph-node...]

       ceph-deploy forgetkeys

DESCRIPTION

       ceph-deploy is a tool which allows easy and quick deployment of a Ceph
       cluster without involving complex and detailed manual configuration.
       It uses ssh to gain access to other Ceph nodes from the admin node and
       sudo to obtain administrator privileges on them, and its underlying
       Python scripts automate the manual process of installing Ceph on each
       node from the admin node itself. It can easily be run on a workstation
       and doesn't require servers, databases or any other automated tools.
       With ceph-deploy, it is really easy to set up and take down a cluster.
       However, it is not a generic deployment tool. It is a specific tool
       designed for those who want to get Ceph up and running quickly, with
       only the unavoidable initial configuration settings and without the
       overhead of installing other tools like Chef, Puppet or Juju. Those
       who want to customize security settings, partitions or directory
       locations, and want to set up a cluster following detailed manual
       steps, should use other tools such as Chef, Puppet, Juju or Crowbar.

       With ceph-deploy, you can install Ceph packages on remote nodes,
       create a cluster, add monitors, gather/forget keys, add OSDs and
       metadata servers, configure admin hosts, or take down the cluster.
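
       For illustration only, a minimal end-to-end run on hypothetical hosts
       node1, node2 and node3 (with node1 acting as the admin and initial
       monitor node, and /dev/sdb as a spare data device) might look like:

          # hostnames and the device path below are placeholders
          ceph-deploy new node1
          ceph-deploy install node1 node2 node3
          ceph-deploy mon create-initial
          ceph-deploy osd create --data /dev/sdb node2
          ceph-deploy admin node1 node2 node3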

COMMANDS

   new
       Start deploying a new cluster and write a configuration file and
       keyring for it. It tries to copy ssh keys from the admin node to gain
       passwordless ssh access to the monitor node(s), validates the host IP,
       and creates a cluster with a new initial monitor node or nodes for
       monitor quorum, a ceph configuration file, a monitor secret keyring
       and a log file for the new cluster. It populates the newly created
       Ceph configuration file with the fsid of the cluster and the hostnames
       and IP addresses of the initial monitor members under the [global]
       section.

       Usage:

          ceph-deploy new [MON] [MON...]

       Here, [MON] is the initial monitor hostname (the short hostname, i.e.
       the output of hostname -s).

       Other options like --no-ssh-copykey, --fsid, --cluster-network and
       --public-network can also be used with this command.

       If more than one network interface is used, the public network setting
       has to be added under the [global] section of the Ceph configuration
       file. If the public subnet is given, the new command will choose the
       one IP from the remote host that exists within the subnet range. The
       public network can also be set at runtime using the --public-network
       option with the command as mentioned above.
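
       For example, to declare three initial monitors on an illustrative
       public subnet (the hostnames and the subnet are placeholders):

          # 192.0.2.0/24, mon1, mon2 and mon3 are placeholders
          ceph-deploy new --public-network 192.0.2.0/24 mon1 mon2 mon3
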
   install
       Install Ceph packages on remote hosts. As a first step it installs
       yum-plugin-priorities on the admin and other nodes using passwordless
       ssh and sudo so that Ceph packages from the upstream repository get
       higher priority. It then detects the platform and distribution of the
       hosts and installs Ceph normally by downloading distro-compatible
       packages, provided an adequate repo for Ceph has already been added.
       The --release flag is used to select a particular release for
       installation. During detection of platform and distribution before
       installation, if it finds the distro.init to be sysvinit (Fedora,
       CentOS/RHEL etc.), it doesn't allow installation with a custom cluster
       name and uses the default name ceph for the cluster.

       If the user explicitly specifies a custom repo url with --repo-url for
       installation, anything detected from the configuration will be
       overridden and the custom repository location will be used for
       installation of Ceph packages. If required, valid custom repositories
       are also detected and installed. In case of installation from a custom
       repo, a boolean is used to determine the logic needed to proceed with
       a custom repo installation. A custom repo install helper is used that
       goes through config checks to retrieve repos (and any extra repos
       defined) and installs them. cd_conf is the object built from argparse
       that holds the flags and information needed to determine what metadata
       from the configuration is to be used.

       A user can also opt to install only the repository without installing
       Ceph and its dependencies by using the --repo option.

       Usage:

          ceph-deploy install [HOST] [HOST...]

       Here, [HOST] is/are the host node(s) where Ceph is to be installed.

       An option --release is used to install a release known by its CODENAME
       (default: firefly).

       Other options like --testing, --dev, --adjust-repos,
       --no-adjust-repos, --repo, --local-mirror, --repo-url and --gpg-url
       can also be used with this command.
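
       For example (the hostnames, release codename and repository urls are
       placeholders):

          # install the named release on two nodes
          ceph-deploy install --release firefly node1 node2

          # install from a custom mirror together with its GPG key
          ceph-deploy install --repo-url https://mirror.example.com/ceph \
              --gpg-url https://mirror.example.com/release.asc node1
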
   mds
       Deploy Ceph mds on remote hosts. A metadata server is needed to use
       CephFS and the mds command is used to create one on the desired host
       node. It uses the subcommand create to do so. create first gets the
       hostname and distro information of the desired mds host. It then tries
       to read the bootstrap-mds key for the cluster and deploy it on the
       desired host. The key generally has a format of
       {cluster}.bootstrap-mds.keyring. If it doesn't find a keyring, it runs
       gatherkeys to get the keyring. It then creates an mds on the desired
       host under the path /var/lib/ceph/mds/ in
       /var/lib/ceph/mds/{cluster}-{name} format and a bootstrap keyring
       under /var/lib/ceph/bootstrap-mds/ in
       /var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs the
       appropriate commands based on distro.init to start the mds.

       Usage:

          ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

       The [DAEMON-NAME] is optional.
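
       For example (the hostnames and the daemon name are placeholders):

          # create an mds on node1 using the default daemon name
          ceph-deploy mds create node1

          # create an mds on node2 with an explicit daemon name
          ceph-deploy mds create node2:mds-a
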
   mon
       Deploy Ceph monitors on remote hosts. mon makes use of certain
       subcommands to deploy Ceph monitors on other nodes.

       Subcommand create-initial deploys monitors for the hosts defined in
       mon initial members under the [global] section of the Ceph
       configuration file, waits until they form a quorum and then runs
       gatherkeys, reporting the monitor status along the way. If the
       monitors don't form a quorum the command will eventually time out.

       Usage:

          ceph-deploy mon create-initial

       Subcommand create is used to deploy Ceph monitors by explicitly
       specifying the hosts which are desired to be made monitors. If no
       hosts are specified it will default to the mon initial members defined
       under the [global] section of the Ceph configuration file. create
       first detects the platform and distro of the desired hosts and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring initially created using the new command and deploys
       the monitor on the desired host. If multiple hosts were specified
       during the new command, i.e. if there are multiple hosts in mon
       initial members and multiple keyrings were created, then a
       concatenated keyring is used for deployment of monitors. In this
       process a keyring parser is used which looks for [entity] sections in
       monitor keyrings and returns a list of those sections. A helper is
       then used to collect all keyrings into a single blob that will be
       injected into the monitors with --mkfs on the remote nodes. All
       keyring files are concatenated in a directory ending with .keyring.
       During this process the helper uses the list of sections returned by
       the keyring parser to check whether an entity is already present in a
       keyring and, if not, adds it. The concatenated keyring is used for
       deployment of monitors to the desired multiple hosts.

       Usage:

          ceph-deploy mon create [HOST] [HOST...]

       Here, [HOST] is the hostname of the desired monitor host(s).
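
       For example, to deploy three monitors so they can form a quorum (the
       hostnames are placeholders):

          # mon1, mon2 and mon3 are placeholders
          ceph-deploy mon create mon1 mon2 mon3
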
       Subcommand add is used to add a monitor to an existing cluster. It
       first detects the platform and distro of the desired host and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring, ensures configuration for the new monitor host and
       adds the monitor to the cluster. If the section for the monitor exists
       and defines a monitor address, that address will be used; otherwise it
       will fall back to resolving the hostname to an IP. If --address is
       used it will override all other options. After adding the monitor to
       the cluster, it gives it some time to start. It then looks for any
       monitor errors and checks the monitor status. Monitor errors arise if
       the monitor is not added to mon initial members, if it doesn't exist
       in the monmap, or if neither public_addr nor public_network keys were
       defined for monitors. Under such conditions, monitors may not be able
       to form a quorum. Monitor status tells whether the monitor is up and
       running normally. The status is checked by running ceph daemon
       mon.hostname mon_status on the remote end, which provides the output
       and returns a boolean status. False means the monitor is not healthy
       even if it is up and running, while True means the monitor is up and
       running correctly.

       Usage:

          ceph-deploy mon add [HOST]

          ceph-deploy mon add [HOST] --address [IP]

       Here, [HOST] is the hostname and [IP] is the IP address of the desired
       monitor node. Please note, unlike other mon subcommands, only one node
       can be specified at a time.
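
       For example (the hostname and IP address are placeholders):

          # add a monitor and bind it to an explicit address
          ceph-deploy mon add mon4 --address 192.0.2.14
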
       Subcommand destroy is used to completely remove monitors on remote
       hosts. It takes hostnames as arguments. It stops the monitor, verifies
       that the ceph-mon daemon has really stopped, creates an archive
       directory mon-remove under /var/lib/ceph/, archives the old monitor
       directory in {cluster}-{hostname}-{stamp} format in it and removes the
       monitor from the cluster by running the ceph remove... command.

       Usage:

          ceph-deploy mon destroy [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor that is to be removed.

   gatherkeys
       Gather authentication keys for provisioning new nodes. It takes
       hostnames as arguments. It checks for and fetches the client.admin
       keyring, the monitor keyring and the bootstrap-mds/bootstrap-osd
       keyrings from the monitor host. These authentication keys are used
       when new monitors/OSDs/MDSes are added to the cluster.

       Usage:

          ceph-deploy gatherkeys [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor from where the keys are to
       be pulled.

   disk
       Manage disks on a remote host. It actually triggers the ceph-volume
       utility and its subcommands to manage disks.

       Subcommand list lists disk partitions and Ceph OSDs.

       Usage:

          ceph-deploy disk list HOST

       Subcommand zap zaps/erases/destroys a device's partition table and
       contents. It actually uses ceph-volume lvm zap remotely, alternatively
       allowing someone to remove the Ceph metadata from the logical volume.
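
       For example, assuming the host-then-device argument order used by the
       ceph-volume based releases (the hostname and device path are
       placeholders; zapping is destructive):

          # wipe the partition table and contents of /dev/sdb on node1
          ceph-deploy disk zap node1 /dev/sdb
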
   osd
       Manage OSDs by preparing a data disk on a remote host. osd makes use
       of certain subcommands for managing OSDs.

       Subcommand create prepares a device for a Ceph OSD. It first checks
       whether multiple OSDs are getting created and warns about the
       possibility of more than the recommended number, which would cause
       issues with the maximum allowed PIDs on a system. It then reads the
       bootstrap-osd key for the cluster, or writes the bootstrap key if it
       is not found. It then uses the ceph-volume utility's lvm create
       subcommand to prepare the disk (and journal if using filestore) and
       deploy the OSD on the desired host. Once prepared, it gives the OSD
       some time to start, checks for any possible errors and, if found,
       reports them to the user.

       Bluestore Usage:

          ceph-deploy osd create --data DISK HOST

       Filestore Usage:

          ceph-deploy osd create --data DISK --journal JOURNAL HOST

       NOTE:
          For other flags available, please see the man page or the --help
          menu of ceph-deploy osd create
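
       For example (the hostname and device paths are placeholders):

          # bluestore OSD backed by a whole data device
          ceph-deploy osd create --data /dev/sdb node2

          # filestore OSD with a separate journal device
          ceph-deploy osd create --data /dev/sdb --journal /dev/sdc node2
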
       Subcommand list lists devices associated with Ceph as part of an OSD.
       It uses the ceph-volume lvm list output, which is a rich output
       mapping OSDs to devices and other interesting information about the
       OSD setup.

       Usage:

          ceph-deploy osd list HOST

   admin
       Push the configuration and the client.admin key to a remote host. It
       takes the {cluster}.client.admin.keyring from the admin node and
       writes it under the /etc/ceph directory of the desired node.

       Usage:

          ceph-deploy admin [HOST] [HOST...]

       Here, [HOST] is the desired host to be configured for Ceph
       administration.

   config
       Push/pull a configuration file to/from a remote host. The push
       subcommand takes the configuration file from the admin host and writes
       it to the remote host under the /etc/ceph directory. The pull
       subcommand does the opposite, i.e. it pulls the configuration file
       under the /etc/ceph directory of the remote host to the admin node.

       Usage:

          ceph-deploy config push [HOST] [HOST...]

          ceph-deploy config pull [HOST] [HOST...]

       Here, [HOST] is the hostname of the node where the config file will be
       pushed to or pulled from.

   uninstall
       Remove Ceph packages from remote hosts. It detects the platform and
       distro of the selected host and uninstalls the Ceph packages from it.
       However, some dependencies like librbd1 and librados2 will not be
       removed because removing them can cause issues with qemu-kvm.

       Usage:

          ceph-deploy uninstall [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       uninstalled.

   purge
       Remove Ceph packages from remote hosts and purge all data. It detects
       the platform and distro of the selected host, uninstalls the Ceph
       packages and purges all data. However, some dependencies like librbd1
       and librados2 will not be removed because removing them can cause
       issues with qemu-kvm.

       Usage:

          ceph-deploy purge [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       purged.

   purgedata
       Purge (delete, destroy, discard, shred) any Ceph data from
       /var/lib/ceph. Once it detects the platform and distro of the desired
       host, it first checks whether Ceph is still installed on the selected
       host; if it is installed, it won't purge data from it. If Ceph has
       already been uninstalled from the host, it tries to remove the
       contents of /var/lib/ceph. If that fails, OSDs are probably still
       mounted and need to be unmounted to continue. It unmounts the OSDs,
       tries to remove the contents of /var/lib/ceph again and checks for
       errors. It also removes the contents of /etc/ceph. Once all steps are
       successfully completed, all the Ceph data on the selected host has
       been removed.

       Usage:

          ceph-deploy purgedata [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where the Ceph data will
       be purged.

   forgetkeys
       Remove authentication keys from the local directory. It removes all
       the authentication keys, i.e. the monitor keyring, the client.admin
       keyring, and the bootstrap-osd and bootstrap-mds keyrings, from the
       node.

       Usage:

          ceph-deploy forgetkeys

   pkg
       Manage packages on remote hosts. It is used for installing or removing
       packages from remote hosts. The package names for installation or
       removal are to be specified after the command. Two options, --install
       and --remove, are used for this purpose.

       Usage:

          ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

          ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

       Here, [PKGs] is a comma-separated list of package names and [HOST] is
       the hostname of the remote node where the packages are to be installed
       or removed from.

OPTIONS

       --address
              IP address of the host node to be added to the cluster.

       --adjust-repos
              Install packages modifying source repos.

       --ceph-conf
              Use (or reuse) a given ceph.conf file.

       --cluster
              Name of the cluster.

       --dev  Install a bleeding-edge build from a Git branch or tag
              (default: master).

       --cluster-network
              Specify the (internal) cluster network.

       --dmcrypt
              Encrypt [data-path] and/or journal devices with dm-crypt.

       --dmcrypt-key-dir
              Directory where dm-crypt keys are stored.

       --install
              Comma-separated package(s) to install on remote hosts.

       --fs-type
              Filesystem to use to format the disk (xfs, btrfs or ext4).
              Note that support for btrfs and ext4 is no longer tested or
              recommended; please use xfs.

       --fsid Provide an alternate FSID for ceph.conf generation.

       --gpg-url
              Specify a GPG key url to be used with custom repos (defaults to
              ceph.com).

       --keyrings
              Concatenate multiple keyrings to be seeded on new monitors.

       --local-mirror
              Fetch packages and push them to hosts for a local repo mirror.

       --mkfs Inject keys to MONs on remote nodes.

       --no-adjust-repos
              Install packages without modifying source repos.

       --no-ssh-copykey
              Do not attempt to copy ssh keys.

       --overwrite-conf
              Overwrite an existing conf file on the remote host (if
              present).

       --public-network
              Specify the public network for a cluster.

       --remove
              Comma-separated package(s) to remove from remote hosts.

       --repo Install repo files only (skips package installation).

       --repo-url
              Specify a repo url that mirrors/contains Ceph packages.

       --testing
              Install the latest development release.

       --username
              The username to connect to the remote host.

       --version
              The current installed version of ceph-deploy.

       --zap-disk
              Destroy the partition table and content of a disk.

AVAILABILITY

       ceph-deploy is part of Ceph, a massively scalable, open-source,
       distributed storage system. Please refer to the documentation at
       https://ceph.com/ceph-deploy/docs for more information.

SEE ALSO

       ceph-mon(8), ceph-osd(8), ceph-volume(8), ceph-mds(8)

COPYRIGHT
       2010-2020, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev                              Apr 21, 2020                   CEPH-DEPLOY(8)