CEPH-DEPLOY(8)                       Ceph                       CEPH-DEPLOY(8)

NAME

       ceph-deploy - Ceph deployment tool

SYNOPSIS

       ceph-deploy new [initial-monitor-node(s)]

       ceph-deploy install [ceph-node] [ceph-node...]

       ceph-deploy mon create-initial

       ceph-deploy osd prepare [ceph-node]:[dir-path]

       ceph-deploy osd activate [ceph-node]:[dir-path]

       ceph-deploy osd create [ceph-node]:[dir-path]

       ceph-deploy admin [admin-node] [ceph-node...]

       ceph-deploy purgedata [ceph-node] [ceph-node...]

       ceph-deploy forgetkeys


DESCRIPTION

       ceph-deploy is a tool for quick and easy deployment of a Ceph cluster
       without complex and detailed manual configuration. It uses ssh to gain
       access to other Ceph nodes from the admin node and sudo to obtain
       administrator privileges on them; its underlying Python scripts
       automate the manual process of installing Ceph on each node from the
       admin node itself. It can easily be run on a workstation and doesn't
       require servers, databases or any other automated tools. With
       ceph-deploy, it is easy to set up and take down a cluster. However, it
       is not a generic deployment tool. It is a specific tool designed for
       those who want to get Ceph up and running quickly with only the
       unavoidable initial configuration settings and without the overhead of
       installing other tools like Chef, Puppet or Juju. Those who want to
       customize security settings, partitions or directory locations, or who
       want to set up a cluster following detailed manual steps, should use
       other tools, i.e., Chef, Puppet, Juju or Crowbar.

       With ceph-deploy, you can install Ceph packages on remote nodes,
       create a cluster, add monitors, gather/forget keys, add OSDs and
       metadata servers, configure admin hosts or take down the cluster.


COMMANDS

   new
       Start deploying a new cluster and write a configuration file and
       keyring for it. It tries to copy ssh keys from the admin node to gain
       passwordless ssh access to the monitor node(s), validates the host IP,
       and creates a cluster with a new initial monitor node or nodes for
       monitor quorum, a Ceph configuration file, a monitor secret keyring
       and a log file for the new cluster. It populates the newly created
       Ceph configuration file with the fsid of the cluster and the hostnames
       and IP addresses of the initial monitor members under the [global]
       section.

       Usage:

          ceph-deploy new [MON][MON...]

       Here, [MON] is the initial monitor hostname (short hostname, i.e.,
       hostname -s).

       Other options like --no-ssh-copykey, --fsid, --cluster-network and
       --public-network can also be used with this command.

       If more than one network interface is used, the public network setting
       has to be added under the [global] section of the Ceph configuration
       file. If the public subnet is given, the new command will choose the
       one IP from the remote host that exists within the subnet range. The
       public network can also be set at runtime using the --public-network
       option with the command, as mentioned above.

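       For example, to bootstrap a cluster with three initial monitors on
       hosts named mon1, mon2 and mon3 (example hostnames) and set the public
       network at the same time, one might run:

          ceph-deploy new --public-network 192.0.2.0/24 mon1 mon2 mon3
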
   install
       Install Ceph packages on remote hosts. As a first step it installs
       yum-plugin-priorities on the admin and other nodes using passwordless
       ssh and sudo so that Ceph packages from the upstream repository get
       higher priority. It then detects the platform and distribution of the
       hosts and installs Ceph normally by downloading distro-compatible
       packages, provided an adequate repo for Ceph has already been added.
       The --release flag is used to get the latest release for installation.
       During detection of platform and distribution before installation, if
       it finds the distro.init to be sysvinit (Fedora, CentOS/RHEL etc.), it
       doesn't allow installation with a custom cluster name and uses the
       default name ceph for the cluster.

       If the user explicitly specifies a custom repo url with --repo-url for
       installation, anything detected from the configuration will be
       overridden and the custom repository location will be used for
       installation of Ceph packages. If required, valid custom repositories
       are also detected and installed. In case of installation from a custom
       repo, a boolean is used to determine the logic needed to proceed with
       a custom repo installation. A custom repo install helper is used that
       goes through config checks to retrieve repos (and any extra repos
       defined) and installs them. cd_conf is the object built from argparse
       that holds the flags and information needed to determine what metadata
       from the configuration is to be used.

       A user can also opt to install only the repository, without installing
       Ceph and its dependencies, by using the --repo option.

       Usage:

          ceph-deploy install [HOST][HOST...]

       Here, [HOST] is/are the host node(s) where Ceph is to be installed.

       An option --release is used to install a release known as CODENAME
       (default: firefly).

       Other options like --testing, --dev, --adjust-repos, --no-adjust-repos,
       --repo, --local-mirror, --repo-url and --gpg-url can also be used with
       this command.

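       For example, to install a specific release on three example hosts
       named node1, node2 and node3, one might run:

          ceph-deploy install --release firefly node1 node2 node3
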
   mds
       Deploy Ceph mds on remote hosts. A metadata server is needed to use
       CephFS and the mds command is used to create one on the desired host
       node. It uses the subcommand create to do so. create first gets the
       hostname and distro information of the desired mds host. It then tries
       to read the bootstrap-mds key for the cluster and deploy it on the
       desired host. The key generally has a format of
       {cluster}.bootstrap-mds.keyring. If it doesn't find a keyring, it runs
       gatherkeys to get the keyring. It then creates an mds on the desired
       host under the path /var/lib/ceph/mds/ in
       /var/lib/ceph/mds/{cluster}-{name} format and a bootstrap keyring
       under /var/lib/ceph/bootstrap-mds/ in
       /var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs the
       appropriate commands based on distro.init to start the mds.

       Usage:

          ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

       The [DAEMON-NAME] is optional.

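       For example, to create a metadata server with the daemon name mds1 on
       an example host named node1, one might run:

          ceph-deploy mds create node1:mds1
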
   mon
       Deploy Ceph monitors on remote hosts. mon makes use of certain
       subcommands to deploy Ceph monitors on other nodes.

       Subcommand create-initial deploys monitors defined in mon initial
       members under the [global] section of the Ceph configuration file,
       waits until they form quorum and then runs gatherkeys, reporting the
       monitor status along the way. If the monitors don't form quorum the
       command will eventually time out.

       Usage:

          ceph-deploy mon create-initial

       Subcommand create is used to deploy Ceph monitors by explicitly
       specifying the hosts which are desired to be made monitors. If no
       hosts are specified it will default to the mon initial members defined
       under the [global] section of the Ceph configuration file. create
       first detects the platform and distro of the desired hosts and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring initially created by the new command and deploys the
       monitor on the desired host. If multiple hosts were specified during
       the new command, i.e., if there are multiple hosts in mon initial
       members and multiple keyrings were created, then a concatenated
       keyring is used for deployment of monitors. In this process a keyring
       parser is used which looks for [entity] sections in monitor keyrings
       and returns a list of those sections. A helper is then used to collect
       all keyrings into a single blob that will be injected into the
       monitors with --mkfs on the remote nodes. All keyring files are
       concatenated to be in a directory ending with .keyring. During this
       process the helper uses the list of sections returned by the keyring
       parser to check if an entity is already present in a keyring and, if
       not, adds it. The concatenated keyring is then used to deploy monitors
       to the desired hosts.

       Usage:

          ceph-deploy mon create [HOST] [HOST...]

       Here, [HOST] is the hostname of the desired monitor host(s).

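       For example, to deploy monitors on example hosts named mon1 and mon2,
       one might run:

          ceph-deploy mon create mon1 mon2
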
       Subcommand add is used to add a monitor to an existing cluster. It
       first detects the platform and distro of the desired host and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring, ensures configuration for the new monitor host and
       adds the monitor to the cluster. If the section for the monitor exists
       and defines a mon addr, that address will be used; otherwise it falls
       back to resolving the hostname to an IP. If --address is used it will
       override all other options. After adding the monitor to the cluster,
       it gives it some time to start. It then looks for any monitor errors
       and checks the monitor status. Monitor errors arise if the monitor is
       not added to mon initial members, if it doesn't exist in the monmap,
       or if neither public_addr nor public_network were defined for
       monitors. Under such conditions, monitors may not be able to form
       quorum. Monitor status tells whether the monitor is up and running
       normally. The status is checked by running ceph daemon mon.hostname
       mon_status on the remote end, which provides the output and returns a
       boolean status. False means a monitor that is not fine even if it is
       up and running, while True means the monitor is up and running
       correctly.

       Usage:

          ceph-deploy mon add [HOST]

          ceph-deploy mon add [HOST] --address [IP]

       Here, [HOST] is the hostname and [IP] is the IP address of the desired
       monitor node. Please note, unlike other mon subcommands, only one node
       can be specified at a time.

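       For example, to add a monitor on an example host named mon4 whose
       desired monitor address is 192.0.2.14, one might run:

          ceph-deploy mon add mon4 --address 192.0.2.14
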
       Subcommand destroy is used to completely remove monitors on remote
       hosts. It takes hostnames as arguments. It stops the monitor, verifies
       that the ceph-mon daemon has really stopped, creates an archive
       directory mon-remove under /var/lib/ceph/, archives the old monitor
       directory in {cluster}-{hostname}-{stamp} format in it and removes the
       monitor from the cluster by running the ceph remove... command.

       Usage:

          ceph-deploy mon destroy [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor that is to be removed.

   gatherkeys
       Gather authentication keys for provisioning new nodes. It takes
       hostnames as arguments. It checks for and fetches the client.admin
       keyring, monitor keyring and bootstrap-mds/bootstrap-osd keyring from
       the monitor host. These authentication keys are used when new
       monitors/OSDs/MDS are added to the cluster.

       Usage:

          ceph-deploy gatherkeys [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor from where keys are to be
       pulled.

   disk
       Manage disks on a remote host. It actually triggers the ceph-disk
       utility and its subcommands to manage disks.

       Subcommand list lists disk partitions and Ceph OSDs.

       Usage:

          ceph-deploy disk list [HOST:[DISK]]

       Here, [HOST] is the hostname of the node and [DISK] is the disk name
       or path.

       Subcommand prepare prepares a directory, disk or drive for a Ceph OSD.
       It creates a GPT partition, marks the partition with the Ceph type
       uuid, creates a file system, marks the file system as ready for Ceph
       consumption, uses the entire partition and adds a new partition to the
       journal disk.

       Usage:

          ceph-deploy disk prepare [HOST:[DISK]]

       Here, [HOST] is the hostname of the node and [DISK] is the disk name
       or path.

       Subcommand activate activates the Ceph OSD. It mounts the volume in a
       temporary location, allocates an OSD id (if needed), remounts in the
       correct location /var/lib/ceph/osd/$cluster-$id and starts ceph-osd.
       It is triggered by udev when it sees the OSD GPT partition type or on
       ceph service start with ceph disk activate-all.

       Usage:

          ceph-deploy disk activate [HOST:[DISK]]

       Here, [HOST] is the hostname of the node and [DISK] is the disk name
       or path.

       Subcommand zap zaps/erases/destroys a device's partition table and
       contents. It actually uses sgdisk and its --zap-all option to destroy
       both GPT and MBR data structures so that the disk becomes suitable for
       repartitioning. sgdisk then uses --mbrtogpt to convert the MBR or BSD
       disklabel disk to a GPT disk. The prepare subcommand can now be
       executed, which will create a new GPT partition.

       Usage:

          ceph-deploy disk zap [HOST:[DISK]]

       Here, [HOST] is the hostname of the node and [DISK] is the disk name
       or path.

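       For example, to wipe a disk named sdb on an example host named node1
       and then prepare it for use as an OSD, one might run:

          ceph-deploy disk zap node1:sdb
          ceph-deploy disk prepare node1:sdb

       As described above, activation is then normally triggered by udev, or
       it can be requested explicitly with the disk activate subcommand.
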
   osd
       Manage OSDs by preparing a data disk on a remote host. osd makes use
       of certain subcommands for managing OSDs.

       Subcommand prepare prepares a directory, disk or drive for a Ceph OSD.
       It first checks against multiple OSDs getting created and warns about
       the possibility of more than the recommended number, which would cause
       issues with the maximum allowed PIDs in a system. It then reads the
       bootstrap-osd key for the cluster, or writes the bootstrap key if it
       is not found. It then uses the ceph-disk utility's prepare subcommand
       to prepare the disk and journal and deploy the OSD on the desired
       host. Once prepared, it gives the OSD some time to settle, checks for
       any possible errors and, if found, reports them to the user.

       Usage:

          ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

       Subcommand activate activates the OSD prepared using the prepare
       subcommand. It actually uses the ceph-disk utility's activate
       subcommand with the appropriate init type based on distro to activate
       the OSD. Once activated, it gives the OSD some time to start, checks
       for any possible errors and, if found, reports them to the user. It
       checks the status of the prepared OSD, checks the OSD tree and makes
       sure the OSDs are up and in.

       Usage:

          ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

       Subcommand create uses the prepare and activate subcommands to create
       an OSD.

       Usage:

          ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

       Subcommand list lists disk partitions and Ceph OSDs and prints OSD
       metadata. It gets the osd tree from a monitor host, uses the
       ceph-disk-list output to get the mount point by matching the line
       where the partition mentions the OSD name, reads metadata from files,
       checks if a journal path exists and if the OSD is in the OSD tree, and
       prints the OSD metadata.

       Usage:

          ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

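       For example, to create an OSD on disk sdb of an example host named
       node1, with its journal on a separate partition ssd1, one might run:

          ceph-deploy osd create node1:sdb:ssd1
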
   admin
       Push the configuration and client.admin key to a remote host. It takes
       the {cluster}.client.admin.keyring from the admin node and writes it
       under the /etc/ceph directory of the desired node.

       Usage:

          ceph-deploy admin [HOST] [HOST...]

       Here, [HOST] is the desired host to be configured for Ceph
       administration.

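       For example, to let example hosts named node1 and node2 run
       administrative Ceph commands with the client.admin key, one might run:

          ceph-deploy admin node1 node2
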
   config
       Push/pull a configuration file to/from a remote host. It uses the push
       subcommand to take the configuration file from the admin host and
       write it to the remote host under the /etc/ceph directory. It uses the
       pull subcommand to do the opposite, i.e., pull the configuration file
       under the /etc/ceph directory of the remote host to the admin node.

       Usage:

          ceph-deploy config push [HOST] [HOST...]

          ceph-deploy config pull [HOST] [HOST...]

       Here, [HOST] is the hostname of the node where the config file will be
       pushed to or pulled from.

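       For example, to distribute an updated ceph.conf to example hosts named
       node1, node2 and node3, overwriting any copy already present on them,
       one might run:

          ceph-deploy --overwrite-conf config push node1 node2 node3
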
   uninstall
       Remove Ceph packages from remote hosts. It detects the platform and
       distro of the selected host and uninstalls the Ceph packages from it.
       However, some dependencies like librbd1 and librados2 will not be
       removed because they can cause issues with qemu-kvm.

       Usage:

          ceph-deploy uninstall [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       uninstalled.

   purge
       Remove Ceph packages from remote hosts and purge all data. It detects
       the platform and distro of the selected host, uninstalls the Ceph
       packages and purges all data. However, some dependencies like librbd1
       and librados2 will not be removed because they can cause issues with
       qemu-kvm.

       Usage:

          ceph-deploy purge [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       purged.

   purgedata
       Purge (delete, destroy, discard, shred) any Ceph data from
       /var/lib/ceph. Once it detects the platform and distro of the desired
       host, it first checks if Ceph is still installed on the selected host;
       if it is installed, it won't purge data from it. If Ceph is already
       uninstalled from the host, it tries to remove the contents of
       /var/lib/ceph. If it fails, the OSDs are probably still mounted and
       need to be unmounted to continue. It unmounts the OSDs, tries to
       remove the contents of /var/lib/ceph again and checks for errors. It
       also removes the contents of /etc/ceph. Once all steps are
       successfully completed, all the Ceph data from the selected host has
       been removed.

       Usage:

          ceph-deploy purgedata [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph data will be
       purged.

   forgetkeys
       Remove authentication keys from the local directory. It removes all
       the authentication keys, i.e., the monitor keyring, client.admin
       keyring, bootstrap-osd and bootstrap-mds keyrings, from the node.

       Usage:

          ceph-deploy forgetkeys

   pkg
       Manage packages on remote hosts. It is used for installing or removing
       packages from remote hosts. The package names for installation or
       removal are to be specified after the command. Two options, --install
       and --remove, are used for this purpose.

       Usage:

          ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

          ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

       Here, [PKGs] is a comma-separated list of package names and [HOST] is
       the hostname of the remote node where the packages are to be installed
       or removed from.

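       For example, to install the packages htop and iotop (example package
       names) on an example host named node1, one might run:

          ceph-deploy pkg --install htop,iotop node1
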
   calamari
       Install and configure Calamari nodes. It first checks if the distro is
       supported for Calamari installation by ceph-deploy. An argument
       connect is used for installation and configuration. It checks for the
       ceph-deploy configuration file (cd_conf) and the Calamari release repo
       or calamari-minion repo. It relies on the default for repo
       installation as it doesn't install Ceph unless specified otherwise. An
       options dictionary is also defined because ceph-deploy pops items
       internally, which causes issues when those items need to be available
       for every host. If the distro is Debian/Ubuntu, it is ensured that the
       proxy is disabled for the calamari-minion repo. The calamari-minion
       package is then installed and custom repository files are added. The
       minion config is placed prior to installation so that it is present
       when the minion first starts. The config directory and calamari salt
       config are created and the salt-minion package is installed. If the
       distro is Redhat/CentOS, the salt-minion service needs to be started.

       Usage:

          ceph-deploy calamari {connect} [HOST] [HOST...]

       Here, [HOST] is the hostname where Calamari is to be installed.

       An option --release can be used to use a given release from
       repositories defined in ceph-deploy's configuration. Defaults to
       calamari-minion.

       Another option --master can also be used with this command.


OPTIONS

       --address
              IP address of the host node to be added to the cluster.

       --adjust-repos
              Install packages modifying source repos.

       --ceph-conf
              Use (or reuse) a given ceph.conf file.

       --cluster
              Name of the cluster.

       --dev  Install a bleeding-edge build from a Git branch or tag
              (default: master).

       --cluster-network
              Specify the (internal) cluster network.

       --dmcrypt
              Encrypt [data-path] and/or journal devices with dm-crypt.

       --dmcrypt-key-dir
              Directory where dm-crypt keys are stored.

       --install
              Comma-separated package(s) to install on remote hosts.

       --fs-type
              Filesystem to use to format disk (xfs, btrfs or ext4). Note
              that support for btrfs and ext4 is no longer tested or
              recommended; please use xfs.

       --fsid Provide an alternate FSID for ceph.conf generation.

       --gpg-url
              Specify a GPG key url to be used with custom repos (defaults to
              ceph.com).

       --keyrings
              Concatenate multiple keyrings to be seeded on new monitors.

       --local-mirror
              Fetch packages and push them to hosts for a local repo mirror.

       --master
              The domain for the Calamari master server.

       --mkfs Inject keys to MONs on remote nodes.

       --no-adjust-repos
              Install packages without modifying source repos.

       --no-ssh-copykey
              Do not attempt to copy ssh keys.

       --overwrite-conf
              Overwrite an existing conf file on the remote host (if
              present).

       --public-network
              Specify the public network for a cluster.

       --remove
              Comma-separated package(s) to remove from remote hosts.

       --repo Install repo files only (skips package installation).

       --repo-url
              Specify a repo url that mirrors/contains Ceph packages.

       --testing
              Install the latest development release.

       --username
              The username to connect to the remote host.

       --version
              The current installed version of ceph-deploy.

       --zap-disk
              Destroy the partition table and content of a disk.


AVAILABILITY

       ceph-deploy is part of Ceph, a massively scalable, open-source,
       distributed storage system. Please refer to the documentation at
       http://ceph.com/ceph-deploy/docs for more information.


SEE ALSO

       ceph-mon(8), ceph-osd(8), ceph-disk(8), ceph-mds(8)


COPYRIGHT

       2010-2014, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)


dev                              Apr 14, 2019                   CEPH-DEPLOY(8)