SSM(8)                      System Storage Manager                      SSM(8)

NAME
       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS
       ssm [-h] [--version] [-v] [-vv] [-vvv] [-f] [-b BACKEND] [-n]
       {check,resize,create,list,info,add,remove,snapshot,mount,migrate} ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
       [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
       [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device ...] [mount]

       ssm list [-h]
       [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm info [-h] [item]

       ssm remove [-h] [-a] [items ...]

       ssm resize [-h] [-s SIZE] volume [device ...]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

       ssm mount [-h] [-o OPTIONS] volume directory

       ssm migrate [-h] source target

DESCRIPTION
       System Storage Manager provides an easy to use command line interface
       to manage your storage using various technologies like lvm, btrfs,
       encrypted volumes and more.

       In more sophisticated enterprise storage environments, management with
       Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices
       (md) is becoming increasingly more difficult.  With file systems added
       to the mix, the number of tools needed to configure and manage storage
       has grown so large that it is simply not user friendly.  With so many
       options for a system administrator to consider, the opportunity for
       errors and problems is large.

       The btrfs administration tools have shown us that storage management
       can be simplified, and we are working to bring that ease of use to
       Linux filesystems in general.

OPTIONS
       -h, --help
              show this help message and exit

       --version
              show program's version number and exit

       -v, --verbose
              Show additional information while executing.

       -vv    Show yet more additional information while executing.

       -vvv   Show yet more additional information while executing.

       -f, --force
              Force execution in the case where ssm has some doubts or
              questions.

       -b BACKEND, --backend BACKEND
              Choose backend to use. Currently you can choose from
              (lvm,btrfs,crypt,multipath).

       -n, --dry-run
              Dry run. Do not do anything, just parse the command line options
              and gather system information if necessary. Note that with this
              option ssm will not perform all the checks, as some of them are
              done by the backends themselves. This option is mainly used for
              debugging purposes, but still requires root privileges.

SYSTEM STORAGE MANAGER COMMANDS
   Introduction
       System Storage Manager has several commands that you can specify on
       the command line as a first argument to ssm. They all have a specific
       use and their own arguments, but global ssm arguments are propagated
       to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
       [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
       [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device ...] [mount]

       This command creates a new volume with defined parameters. If a device
       is provided it will be used to create the volume, hence it will be
       added into the pool prior to volume creation (see the Add command
       section).  More than one device can be used to create a volume.

       If the device is already being used in a different pool, then ssm will
       ask you whether you want to remove it from the original pool. If you
       decline, or the removal fails, then the volume creation fails if the
       SIZE was not provided. On the other hand, if the SIZE is provided and
       some devices cannot be added to the pool, the volume creation might
       still succeed if there is enough space in the pool.

       In addition to specifying the size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       the volume size to be 70% of the total pool size. Additionally, a
       percentage of the used, or free pool space can be specified as well
       using the keywords FREE, or USED respectively.

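       For example, the following would create a volume using 50% of the free
       space in an existing pool (the pool name lvm_pool is only
       illustrative):

          # ssm create --size 50%FREE -p lvm_pool
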
       The POOL name can be specified as well. If the pool exists, a new
       volume will be created from that pool (optionally adding the device
       into the pool).  However if the POOL does not exist, then ssm will
       attempt to create a new pool with the provided device, and then create
       a new volume from this pool. If the --backend argument is omitted, the
       default ssm backend will be used. The default backend is lvm.

       ssm also supports creating a RAID configuration, however some back-ends
       might not support all RAID levels, or may not even support RAID at all.
       In this case, volume creation will fail.

       If a mount point is provided, ssm will attempt to mount the volume
       after it is created. However, it will fail if a mountable file system
       is not present on the volume.

       If the backend allows it (currently only supported with the lvm
       backend), ssm can be used to create thinly provisioned volumes by
       specifying the --virtual-size option. This will automatically create a
       thin pool of the size provided with the --size option, and a thin
       volume of the size provided with the --virtual-size option and the
       name provided with the --name option. The virtual size can be much
       bigger than the available space in the pool.

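       For example, a 100G thin volume backed by a 10G thin pool could be
       created as follows (the device and volume names are only
       illustrative):

          # ssm create --size 10G --virtual-size 100G --name thin_vol /dev/sda
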
       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new logical volume.  A size
              suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define 'power
              of two' units. If no unit is provided, it defaults to kilobytes.
              This is optional and if not given, the maximum possible size
              will be used.  Additionally the new size can be specified as a
              percentage of the total pool size (50%), as a percentage of free
              pool space (50%FREE), or as a percentage of used pool space
              (50%USED).

       -n NAME, --name NAME
              The name for the new logical volume. This is optional and if
              omitted, the name will be generated by the corresponding
              backend.

       --fstype FSTYPE
              Gives the file system type to create on the new logical volume.
              Supported file systems are (ext3, ext4, xfs, btrfs). This is
              optional and if not given, a file system will not be created.

       -r LEVEL, --raid LEVEL
              Specify a RAID level you want to use when creating a new volume.
              Note that some backends might not implement all supported RAID
              levels. This is optional and if not specified, a linear volume
              will be created.  You can choose from the following list of
              supported levels (0,1,10).

       -I STRIPESIZE, --stripesize STRIPESIZE
              Gives the number of kilobytes for the granularity of stripes.
              This is optional and if not given, the backend default will be
              used.  Note that you have to specify the RAID level as well.

       -i STRIPES, --stripes STRIPES
              Gives the number of stripes. This is equal to the number of
              physical volumes to scatter the logical volume across. This is
              optional and if stripesize is set and multiple devices are
              provided, stripes is determined automatically from the number
              of devices.  Note that you have to specify the RAID level as
              well.

       -p POOL, --pool POOL
              Pool to use to create the new volume.

       -e [{luks,plain}], --encrypt [{luks,plain}]
              Create an encrypted volume. The extension to use can be
              specified.

       -o MNT_OPTIONS, --mnt-options MNT_OPTIONS
              Mount options are specified with a -o flag followed by a comma
              separated string of options. This option is equivalent to the -o
              mount(8) option.

       -v VIRTUAL_SIZE, --virtual-size VIRTUAL_SIZE
              Gives the virtual size for the new thinly provisioned volume.  A
              size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define
              'power of two' units. If no unit is provided, it defaults to
              kilobytes.

   Info command
       ssm info [-h] [item]

       EXPERIMENTAL  This feature is currently experimental. The output format
       can change and fields can be added or removed.

       Show detailed information about all detected devices, pools, volumes
       and snapshots found on the system. The info command can be used either
       alone to show all available items, or you can specify a device, pool,
       or any other identifier to see information about the specific item.

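       For example, to show details about a single device (the device name is
       only illustrative):

          # ssm info /dev/sda
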
       -h, --help
              show this help message and exit

   List command
       ssm list [-h]
       [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       Lists information about all detected devices, pools, volumes and
       snapshots found on the system. The list command can be used either
       alone to list all of the information, or you can request specific
       sections only.

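       For example, to list only the volumes that contain file systems:

          # ssm list fs
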
       The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

       {devices | dev}
              List information about all devices found on the system. Some
              devices are intentionally hidden, like for example cdrom or
              DM/MD devices since those are actually listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing filesystems found
              in the system.

       {snapshots | snap}
              List information about all snapshots found in the system. Note
              that some back-ends do not support snapshotting and some cannot
              distinguish a snapshot from regular volumes. In this case, ssm
              will try to recognize the volume name in order to identify a
              snapshot, but if the ssm regular expression does not match the
              snapshot pattern, the problematic snapshot will not be
              recognized.

       -h, --help
              show this help message and exit

   Remove command
       ssm remove [-h] [-a] [items ...]

       This command removes an item from the system. Multiple items can be
       specified.  If the item cannot be removed for some reason, it will be
       skipped.

       An item can be any of the following:

       device Remove a device from the pool. Note that this cannot be done in
              some cases where the device is being used by the pool.  You can
              use the -f argument to force removal. If the device does not
              belong to any pool, it will be skipped.

       pool   Remove a pool from the system. This will also remove all volumes
              created from that pool.

       volume Remove a volume from the system. Note that this will fail if the
              volume is mounted and cannot be forced with -f.

       -h, --help
              show this help message and exit

       -a, --all
              Remove all pools in the system.

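       For example, to remove a whole pool together with its volumes, or to
       remove all pools at once (the pool name lvm_pool is only
       illustrative):

          # ssm remove lvm_pool
          # ssm remove --all
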
   Resize command
       ssm resize [-h] [-s SIZE] volume [device ...]

       Change the size of the volume and file system. If there is no file
       system, only the volume itself will be resized. You can specify a
       device to add into the volume pool prior to the resize. Note that the
       device will only be added into the pool if the volume size is going to
       grow.

       If the device is already used in a different pool, then ssm will ask
       you whether or not you want to remove it from the original pool.

       In some cases, the file system has to be mounted in order to resize.
       This will be handled by ssm automatically by mounting the volume
       temporarily.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to resize the
       volume to 70% of its original size. Additionally, a percentage of the
       used, or free pool space can be specified as well using the keywords
       FREE, or USED respectively.

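       For example, the following would resize the volume to 70% of its
       original size (the volume name is only illustrative):

          # ssm resize -s 70% /dev/lvm_pool/lvol001
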
       Note that resizing a btrfs subvolume is not supported; only the whole
       file system can be resized.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              New size of the volume. With the + or - sign the value is added
              to or subtracted from the actual size of the volume and without
              it, the value will be set as the new volume size. A size suffix
              of [k|K] for kilobytes, [m|M] for megabytes, [g|G] for
              gigabytes, [t|T] for terabytes or [p|P] for petabytes is
              optional.  If no unit is provided the default is kilobytes.
              Additionally the new size can be specified as a percentage of
              the original volume size ([+][-]50%), as a percentage of free
              pool space ([+][-]50%FREE), or as a percentage of used pool
              space ([+][-]50%USED).

   Check command
       ssm check [-h] device [device ...]

       Check the file system consistency on the volume. You can specify
       multiple volumes to check. If there is no file system on the volume,
       this volume will be skipped.

       In some cases the file system has to be mounted in order to check the
       file system.  This will be handled by ssm automatically by mounting
       the volume temporarily.

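       For example, to check the file systems on two volumes at once (the
       volume names are only illustrative):

          # ssm check /dev/lvm_pool/lvol001 /dev/lvm_pool/lvol002
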
       -h, --help
              show this help message and exit

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       Take a snapshot of an existing volume. This operation will fail if the
       back-end to which the volume belongs does not support snapshotting.
       Note that you cannot specify both NAME and DEST since those options are
       mutually exclusive.

       In addition to specifying the size of the new snapshot directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       the new snapshot size to be 70% of the origin volume size.
       Additionally, a percentage of the used, or free pool space can be
       specified as well using the keywords FREE, or USED respectively.

       In some cases the file system has to be mounted in order to take a
       snapshot of the volume. This will be handled by ssm automatically by
       mounting the volume temporarily.

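       For example, to take a snapshot of an lvm volume under a chosen name
       (the volume and snapshot names are only illustrative):

          # ssm snapshot --name snap001 /dev/lvm_pool/lvol001
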
       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new snapshot volume. A size
              suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define 'power
              of two' units. If no unit is provided, it defaults to kilobytes.
              This is optional and if not given, the size will be determined
              automatically.  Additionally the new size can be specified as a
              percentage of the original volume size (50%), as a percentage of
              free pool space (50%FREE), or as a percentage of used pool space
              (50%USED).

       -d DEST, --dest DEST
              Destination of the snapshot specified with an absolute path to
              be used for the new snapshot. This is optional and if not
              specified, the default backend policy will be applied.

       -n NAME, --name NAME
              Name of the new snapshot. This is optional and if not
              specified, the default backend policy will be applied.

   Add command
       ssm add [-h] [-p POOL] device [device ...]

       This command adds a device into the pool. By default, the device will
       not be added if it is already a part of a different pool, but the user
       will be asked whether or not to remove the device from its pool. When
       multiple devices are provided, all of them are added into the pool. If
       one of the devices cannot be added into the pool for any reason, the
       add command will fail. If no pool is specified, the default pool will
       be chosen. In the case of a non-existent pool, it will be created
       using the provided devices.

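       For example, to add a single device into the default lvm pool (the
       device name is only illustrative):

          # ssm add /dev/sdc
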
       -h, --help
              show this help message and exit

       -p POOL, --pool POOL
              Pool to add the device into. If not specified, the default pool
              is used.

   Mount command
       ssm mount [-h] [-o OPTIONS] volume directory

       This command will mount the volume at the specified directory. The
       volume can be specified in the same way as with mount(8), however in
       addition, one can also specify a volume in the format as it appears in
       the ssm list table.

       For example, instead of finding out what the device and subvolume id
       of the btrfs subvolume "btrfs_pool:vol001" is in order to mount it,
       one can simply call ssm mount btrfs_pool:vol001 /mnt/test.

       One can also specify OPTIONS in the same way as with mount(8).

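       For example, to mount the same subvolume read-only (the volume and
       mount point are only illustrative):

          # ssm mount -o ro btrfs_pool:vol001 /mnt/test
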
       -h, --help
              show this help message and exit

       -o OPTIONS, --options OPTIONS
              Options are specified with a -o flag followed by a comma
              separated string of options. This option is equivalent to the
              same mount(8) option.

   Migrate command
       ssm migrate [-h] source target

       Move data from one device to another. For btrfs and lvm their
       specialized utilities are used, so the data are moved in an
       all-or-nothing fashion and no other operation is needed to add/remove
       the devices or rebalance the pool.  Devices that do not belong to a
       backend that supports specialized device migration tools will be
       migrated using dd.

       This operation is not intended to be used for duplication, because the
       process can change metadata and access to the data may be difficult.

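       For example, to move the data from one disk to its replacement (the
       device names are only illustrative):

          # ssm migrate /dev/sda /dev/sdb
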
       -h, --help
              show this help message and exit

BACK-ENDS
   Introduction
       Ssm aims to create a unified user interface for various technologies
       like Device Mapper (dm), Btrfs file system, Multiple Devices (md) and
       possibly more. In order to do so we have a core abstraction layer in
       ssmlib/main.py. This abstraction layer should ideally know nothing
       about the underlying technology, but rather comply with device, pool
       and volume abstractions.

       Various backends can be registered in ssmlib/main.py in order to handle
       a specific storage technology, implementing methods to create,
       snapshot, or remove volumes and pools. The core will then call these
       methods to manage the storage without needing to know what lies
       underneath it. There are already several backends registered in ssm.

   Btrfs backend
       Btrfs is a file system with many advanced features including volume
       management. This is the reason why btrfs is handled differently than
       other conventional file systems in ssm. It is used as a volume
       management back-end.

       Pools, volumes and snapshots can be created with the btrfs backend and
       here is what that means from the btrfs point of view:

       pool   A pool is actually a btrfs file system itself, because it can be
              extended by adding more devices, or shrunk by removing devices
              from it. Subvolumes and snapshots can also be created. When a
              new btrfs pool is to be created, ssm simply creates a btrfs
              file system, which means that every new btrfs pool has one
              volume of the same name as the pool itself, which cannot be
              removed without removing the entire pool. The default btrfs
              pool name is btrfs_pool.

              When creating a new btrfs pool, the name of the pool is used as
              the file system label. If there is an already existing btrfs
              file system in the system without a label, a btrfs pool name
              will be generated for internal use in the following format
              "btrfs_{device base name}".

              A btrfs pool is created when the create or add command is used
              with specified devices and a non-existent pool name.

       volume A volume in the btrfs back-end is actually just a btrfs
              subvolume, with the exception of the first volume created upon
              btrfs pool creation, which is the file system itself. Subvolumes
              can only be created on the btrfs file system when it is mounted,
              but the user does not have to worry about that since ssm will
              automatically mount the file system temporarily in order to
              create a new subvolume.

              The volume name is used as the subvolume path in the btrfs file
              system and every object in this path must exist in order to
              create a volume. The volume name used for internal tracking,
              and which is visible to the user, is generated in the format
              "{pool_name}:{volume name}", but a volume can also be
              referenced by its mount point.

              Btrfs volumes are only shown in the list output when the file
              system is mounted, with the exception of the main btrfs volume
              - the file system itself.

              Also note that btrfs volumes and subvolumes cannot be resized.
              This is mainly a limitation of the btrfs tools, which currently
              do not work reliably.

              A new btrfs volume can be created with the create command.

       snapshot
              The btrfs file system supports subvolume snapshotting, so you
              can take a snapshot of any btrfs volume in the system with ssm.
              However btrfs does not distinguish between subvolumes and
              snapshots, because a snapshot is actually just a subvolume with
              some blocks shared with a different subvolume.  This means that
              ssm is not able to directly recognize a btrfs snapshot.
              Instead, ssm will try to recognize a special name format of the
              btrfs volume that denotes it is a snapshot. However, if a NAME
              that does not match the special pattern is specified when
              creating the snapshot, the snapshot will not be recognized by
              ssm and it will be listed as a regular btrfs volume.

              A new btrfs snapshot can be created with the snapshot command.

       device Btrfs does not require a special device to be created on.

   Lvm backend
       Pools, volumes and snapshots can be created with lvm, which pretty
       much matches the lvm abstractions.

       pool   An lvm pool is just a volume group in lvm language. It means
              that it groups devices, and new logical volumes can be created
              out of the lvm pool.  The default lvm pool name is lvm_pool.

              An lvm pool is created when the create or add commands are used
              with specified devices and a non-existent pool name.

              Alternatively a thin pool can be created as a result of using
              the --virtual-size option to create a thin volume.

       volume An lvm volume is just a logical volume in lvm language.  An lvm
              volume can be created with the create command.

       snapshot
              Lvm volumes can be snapshotted as well. When a snapshot is
              created from the lvm volume, a new snapshot volume is created,
              which can be handled as any other lvm volume. Unlike btrfs, lvm
              is able to distinguish a snapshot from a regular volume, so
              there is no need for a snapshot name to match a special
              pattern.

       device Lvm requires a physical volume to be created on the device, but
              with ssm this is transparent for the user.

   Crypt backend
       The crypt backend in ssm uses cryptsetup and the dm-crypt target to
       manage encrypted volumes. The crypt backend can be used as a regular
       backend for creating encrypted volumes on top of regular block
       devices, or even other volumes (lvm or md volumes for example), or it
       can be used to create encrypted lvm volumes right away in a single
       step.

       Only volumes can be created with the crypt backend. This backend does
       not support pooling and does not require special devices.

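       For example, an encrypted volume could be created in a single step as
       follows (the size and device name are only illustrative):

          # ssm create --size 10G --encrypt luks /dev/sda
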
       pool   The crypt backend does not support pooling, and it is not
              possible to create a crypt pool or add a device into a pool.

       volume A volume in the crypt backend is the volume created by dm-crypt
              which represents the data on the original encrypted device in
              unencrypted form.  The crypt backend does not support pooling,
              so only one device can be used to create a crypt volume. It
              also does not support raid or any device concatenation.

              Currently two modes, or extensions, are supported: luks and
              plain.  Luks is used by default. For more information about the
              extensions, please see the cryptsetup manual page.

       snapshot
              The crypt backend does not support snapshotting, however if the
              encrypted volume is created on top of an lvm volume, the lvm
              volume itself can be snapshotted. The snapshot can then be
              opened by using cryptsetup.  It is possible that this might
              change in the future so that ssm will be able to activate the
              volume directly without the extra step.

       device The crypt backend does not require a special device to be
              created on.

   MD backend
       The MD backend in ssm is currently limited to only gathering
       information about MD volumes in the system. You cannot create or
       manage MD volumes or pools, but this functionality will be extended in
       the future.

   Multipath backend
       The multipath backend in ssm is currently limited to only gathering
       information about multipath volumes in the system. You cannot create
       or manage multipath volumes or pools, but this functionality will be
       extended in the future.

EXAMPLES
       List system storage information:

          # ssm list

       List all pools in the system:

          # ssm list pools

       Create a new 100GB volume with the default lvm backend using /dev/sda
       and /dev/sdb with the xfs file system:

          # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

       Create a new volume with a btrfs backend using /dev/sda and /dev/sdb
       and let the volume be RAID 1:

          # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

       Using the lvm backend create a RAID 0 volume with devices /dev/sda and
       /dev/sdb with 128kB stripe size, ext4 file system and mount it on
       /home:

          # ssm create --raid 0 --stripesize 128k /dev/sda /dev/sdb /home

       Create a new thinly provisioned volume with an lvm backend using
       devices /dev/sda and /dev/sdb using the --virtual-size option:

          # ssm create --virtual-size 1T /dev/sda /dev/sdb

       Create a new thinly provisioned volume with a defined thin pool size
       and devices /dev/sda and /dev/sdb:

          # ssm create --size 50G --virtual-size 1T /dev/sda /dev/sdb

       Extend btrfs volume btrfs_pool by 500GB and use /dev/sdc and /dev/sde
       to cover the resize:

          # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

          # ssm resize -s-1t /dev/lvm_pool/lvol001

       Remove the /dev/sda device from the pool, remove the btrfs_pool pool
       and also remove the volume /dev/lvm_pool/lvol001:

          # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

          # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

          # ssm add -p btrfs_pool /dev/sda /dev/sdb

       Mount btrfs subvolume btrfs_pool:vol001 on /mnt/test:

          # ssm mount btrfs_pool:vol001 /mnt/test

ENVIRONMENT VARIABLES
       SSM_DEFAULT_BACKEND
              Specify which backend will be used by default. This can be
              overridden by specifying the -b or --backend argument.
              Currently only lvm and btrfs are supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if the -p or --pool
              argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if the -p or --pool
              argument is omitted.

       SSM_PREFIX_FILTER
              When this is set, ssm will filter out all devices, volumes and
              pools whose name does not start with this prefix. It is used
              mainly in the ssm test suite to make sure that we do not
              scramble the local system configuration.
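
       For example, to make btrfs the default backend for a single ssm
       invocation, the variable can be set on the command line (the device
       name is only illustrative):

          # SSM_DEFAULT_BACKEND=btrfs ssm create /dev/sda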

LICENCE
       (C)2017 Red Hat, Inc., Jan Tulak <jtulak@redhat.com>
       (C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation, either version 2 of the License, or (at
       your option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program.  If not, see <http://www.gnu.org/licenses/>.

REQUIREMENTS
       Python 2.6 or higher is required to run this tool. System Storage
       Manager can only be run as root since most of the commands require
       root privileges.

       There are other requirements listed below, but note that you do not
       necessarily need all dependencies for all backends. However if some of
       the tools required by a backend are missing, that backend will not
       work.

   Python modules
       • argparse

       • atexit

       • base64

       • datetime

       • fcntl

       • getpass

       • os

       • pwquality

       • re

       • socket

       • stat

       • struct

       • subprocess

       • sys

       • tempfile

       • termios

       • threading

       • tty

   System tools
       • tune2fs

       • fsck.SUPPORTED_FS

       • resize2fs

       • xfs_db

       • xfs_check

       • xfs_growfs

       • mkfs.SUPPORTED_FS

       • which

       • mount

       • blkid

       • wipefs

       • dd

   Lvm backend
       • lvm2 binaries

       Some distributions (e.g. Debian) have thin provisioning tools for LVM
       as an optional dependency, while others install them automatically.
       Thin provisioning without these tools installed is not supported by
       SSM.

   Btrfs backend
       • btrfs progs

   Crypt backend
       • dmsetup

       • cryptsetup

   Multipath backend
       • multipath

AVAILABILITY
       System storage manager is available from
       http://system-storage-manager.github.io.  You can subscribe to
       storagemanager-devel@lists.sourceforge.net to follow the current
       development.

AUTHOR
       Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT
       2021, Red Hat, Inc., Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner
       <lczerner@redhat.com>

1.3                              Jan 27, 2021                           SSM(8)