SSM(8)                      System Storage Manager                      SSM(8)

NAME
       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS
       ssm [-h] [--version] [-v] [-vv] [-vvv] [-f] [-b BACKEND] [-n]
       {check,resize,create,list,info,add,remove,snapshot,mount,migrate} ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
       [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
       [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       ssm list [-h]
       [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm info [-h] [item]

       ssm remove [-h] [-a] [items [items ...]]

       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

       ssm mount [-h] [-o OPTIONS] volume directory

       ssm migrate [-h] source target

DESCRIPTION
       System Storage Manager provides an easy-to-use command line interface
       to manage your storage using various technologies like lvm, btrfs,
       encrypted volumes and more.

       In more sophisticated enterprise storage environments, management
       with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple
       Devices (md) is becoming increasingly difficult.  With file systems
       added to the mix, the number of tools needed to configure and manage
       storage has grown so large that it is simply not user friendly.  With
       so many options for a system administrator to consider, the
       opportunity for errors and problems is large.

       The btrfs administration tools have shown us that storage management
       can be simplified, and we are working to bring that ease of use to
       Linux file systems in general.

OPTIONS
       -h, --help
              show this help message and exit

       --version
              show program's version number and exit

       -v, --verbose
              Show additional information while executing.

       -vv    Show yet more additional information while executing.

       -vvv   Show the most additional information while executing.

       -f, --force
              Force execution in the case where ssm has some doubts or
              questions.

       -b BACKEND, --backend BACKEND
              Choose backend to use.  Currently you can choose from
              (lvm,btrfs,crypt,multipath).

       -n, --dry-run
              Dry run.  Do not do anything, just parse the command line
              options and gather system information if necessary.  Note
              that with this option ssm will not perform all the checks, as
              some of them are done by the backends themselves.  This option
              is mainly used for debugging purposes, but still requires root
              privileges.
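As an illustration, the global options combine with any command; a dry run with extra verbosity shows what ssm would do without changing anything (the device names below are hypothetical):

```shell
# Parse the command line and gather system information only;
# nothing is created. Root privileges are still required.
ssm -n -vv create --size 10G /dev/sdx /dev/sdy
```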

SYSTEM STORAGE MANAGER COMMANDS
   Introduction
       System Storage Manager has several commands that you can specify on
       the command line as a first argument to ssm.  They all have a
       specific use and their own arguments, but global ssm arguments are
       propagated to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
       [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
       [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       This command creates a new volume with defined parameters.  If a
       device is provided, it will be used to create the volume, hence it
       will be added into the pool prior to volume creation (see the Add
       command section).  More than one device can be used to create a
       volume.

       If the device is already being used in a different pool, then ssm
       will ask you whether you want to remove it from the original pool.
       If you decline, or the removal fails, then the volume creation fails
       if the SIZE was not provided.  On the other hand, if the SIZE is
       provided and some devices cannot be added to the pool, the volume
       creation might still succeed if there is enough space in the pool.

       In addition to specifying the size of the volume directly, a
       percentage can be specified as well.  Specify --size 70% to indicate
       that the volume size should be 70% of the total pool size.
       Additionally, a percentage of the used or free pool space can be
       specified as well using the keywords USED or FREE respectively.

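For example (the pool name below is hypothetical), the relative size forms look like this:

```shell
# Volume sized at 70% of the total size of lvm_pool
ssm create --size 70% -p lvm_pool
# Volume using half of the free space remaining in the pool
ssm create --size 50%FREE -p lvm_pool
```
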
       The POOL name can be specified as well.  If the pool exists, a new
       volume will be created from that pool (optionally adding the device
       into the pool).  However, if the POOL does not exist, then ssm will
       attempt to create a new pool with the provided device, and then
       create a new volume from this pool.  If the --backend argument is
       omitted, the default ssm backend will be used.  The default backend
       is lvm.

       ssm also supports creating a RAID configuration, however some
       backends might not support all RAID levels, or may not even support
       RAID at all.  In this case, volume creation will fail.

       If a mount point is provided, ssm will attempt to mount the volume
       after it is created.  However, this will fail if a mountable file
       system is not present on the volume.

       If the backend allows it (currently only supported with the lvm
       backend), ssm can be used to create thinly provisioned volumes by
       specifying the --virtual-size option.  This will automatically
       create a thin pool of the size provided with the --size option, and
       a thin volume of the size provided with the --virtual-size option
       and the name provided with the --name option.  The virtual size can
       be much bigger than the available space in the pool.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new logical volume.  A
              size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define
              'power of two' units.  If no unit is provided, it defaults to
              kilobytes.  This is optional and if not given, the maximum
              possible size will be used.  Additionally, the new size can be
              specified as a percentage of the total pool size (50%), as a
              percentage of free pool space (50%FREE), or as a percentage
              of used pool space (50%USED).

       -n NAME, --name NAME
              The name for the new logical volume.  This is optional and if
              omitted, a name will be generated by the corresponding
              backend.

       --fstype FSTYPE
              Gives the file system type to create on the new logical
              volume.  Supported file systems are (ext3, ext4, xfs, btrfs).
              This is optional and if not given, a file system will not be
              created.

       -r LEVEL, --raid LEVEL
              Specify a RAID level you want to use when creating a new
              volume.  Note that some backends might not implement all
              supported RAID levels.  This is optional and if not
              specified, a linear volume will be created.  You can choose
              from the following list of supported levels (0,1,10).

       -I STRIPESIZE, --stripesize STRIPESIZE
              Gives the number of kilobytes for the granularity of stripes.
              This is optional and if not given, the backend default will
              be used.  Note that you have to specify a RAID level as well.

       -i STRIPES, --stripes STRIPES
              Gives the number of stripes.  This is equal to the number of
              physical volumes to scatter the logical volume across.  This
              is optional; if the stripe size is set and multiple devices
              are provided, the number of stripes is determined
              automatically from the number of devices.  Note that you have
              to specify a RAID level as well.

       -p POOL, --pool POOL
              Pool to use to create the new volume.

       -e [{luks,plain}], --encrypt [{luks,plain}]
              Create an encrypted volume.  The extension to use can be
              specified.

       -o MNT_OPTIONS, --mnt-options MNT_OPTIONS
              Mount options are specified with a -o flag followed by a
              comma-separated string of options.  This option is equivalent
              to the -o mount(8) option.

       -v VIRTUAL_SIZE, --virtual-size VIRTUAL_SIZE
              Gives the virtual size for the new thinly provisioned volume.
              A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to
              define 'power of two' units.  If no unit is provided, it
              defaults to kilobytes.

   Info command
       ssm info [-h] [item]

       EXPERIMENTAL This feature is currently experimental.  The output
       format can change and fields can be added or removed.

       Show detailed information about all detected devices, pools, volumes
       and snapshots found on the system.  The info command can be used
       either alone to show all available items, or you can specify a
       device, pool, or any other identifier to see information about the
       specific item.

       -h, --help
              show this help message and exit

   List command
       ssm list [-h]
       [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       Lists information about all detected devices, pools, volumes and
       snapshots found on the system.  The list command can be used either
       alone to list all of the information, or you can request specific
       sections only.

       The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

       {devices | dev}
              List information about all devices found on the system.  Some
              devices are intentionally hidden, like for example cdrom or
              DM/MD devices, since those are actually listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing file systems
              found in the system.

       {snapshots | snap}
              List information about all snapshots found in the system.
              Note that some backends do not support snapshotting and some
              cannot distinguish snapshots from regular volumes.  In this
              case, ssm will try to recognize the volume name in order to
              identify a snapshot, but if the ssm regular expression does
              not match the snapshot pattern, the problematic snapshot will
              not be recognized.

       -h, --help
              show this help message and exit

   Remove command
       ssm remove [-h] [-a] [items [items ...]]

       This command removes an item from the system.  Multiple items can be
       specified.  If the item cannot be removed for some reason, it will
       be skipped.

       An item can be any of the following:

       device Remove a device from the pool.  Note that this cannot be done
              in some cases where the device is being used by the pool.
              You can use the -f argument to force removal.  If the device
              does not belong to any pool, it will be skipped.

       pool   Remove a pool from the system.  This will also remove all
              volumes created from that pool.

       volume Remove a volume from the system.  Note that this will fail if
              the volume is mounted and cannot be forced with -f.

       -h, --help
              show this help message and exit

       -a, --all
              Remove all pools in the system.

   Resize command
       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       Change the size of the volume and file system.  If there is no file
       system, only the volume itself will be resized.  You can specify a
       device to add into the volume pool prior to the resize.  Note that
       the device will only be added into the pool if the volume size is
       going to grow.

       If the device is already used in a different pool, then ssm will ask
       you whether or not you want to remove it from the original pool.

       In some cases, the file system has to be mounted in order to resize.
       This will be handled by ssm automatically by mounting the volume
       temporarily.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well.  Specify --size 70% to resize
       the volume to 70% of its original size.  Additionally, a percentage
       of the used or free pool space can be specified as well using the
       keywords USED or FREE respectively.

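For instance (the volume path below is hypothetical), absolute, relative and percentage resizes look like this:

```shell
# Set the volume size to exactly 20 gigabytes
ssm resize -s 20G /dev/lvm_pool/lvol001
# Grow the volume by half of the free space remaining in its pool
ssm resize -s +50%FREE /dev/lvm_pool/lvol001
```
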
       Note that resizing a btrfs subvolume is not supported; only the
       whole file system can be resized.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              New size of the volume.  With the + or - sign the value is
              added to or subtracted from the actual size of the volume,
              and without it the value will be set as the new volume size.
              A size suffix of [k|K] for kilobytes, [m|M] for megabytes,
              [g|G] for gigabytes, [t|T] for terabytes or [p|P] for
              petabytes is optional.  If no unit is provided the default is
              kilobytes.  Additionally, the new size can be specified as a
              percentage of the original volume size ([+][-]50%), as a
              percentage of free pool space ([+][-]50%FREE), or as a
              percentage of used pool space ([+][-]50%USED).

   Check command
       ssm check [-h] device [device ...]

       Check the file system consistency on the volume.  You can specify
       multiple volumes to check.  If there is no file system on the
       volume, the volume will be skipped.

       In some cases the file system has to be mounted in order to be
       checked.  This will be handled by ssm automatically by mounting the
       volume temporarily.

       -h, --help
              show this help message and exit

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       Take a snapshot of an existing volume.  This operation will fail if
       the backend to which the volume belongs does not support
       snapshotting.  Note that you cannot specify both NAME and DEST since
       those options are mutually exclusive.

       In addition to specifying the size of the new snapshot directly, a
       percentage can be specified as well.  Specify --size 70% to indicate
       that the new snapshot size should be 70% of the origin volume size.
       Additionally, a percentage of the used or free pool space can be
       specified as well using the keywords USED or FREE respectively.

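For example (the volume and snapshot names below are hypothetical), a snapshot can be given either a name or a destination, but not both:

```shell
# Snapshot sized at 20% of the origin volume, with an explicit name
ssm snapshot -s 20% -n lvol001_snap /dev/lvm_pool/lvol001
# The same snapshot addressed by an absolute destination path instead
ssm snapshot -s 20% -d /dev/lvm_pool/lvol001_snap /dev/lvm_pool/lvol001
```
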
       In some cases the file system has to be mounted in order to take a
       snapshot of the volume.  This will be handled by ssm automatically
       by mounting the volume temporarily.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new snapshot volume.  A
              size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to
              define 'power of two' units.  If no unit is provided, it
              defaults to kilobytes.  This is optional and if not given,
              the size will be determined automatically.  Additionally, the
              new size can be specified as a percentage of the original
              volume size (50%), as a percentage of free pool space
              (50%FREE), or as a percentage of used pool space (50%USED).

       -d DEST, --dest DEST
              Destination of the snapshot, specified as an absolute path to
              be used for the new snapshot.  This is optional and if not
              specified, the default backend policy will be applied.

       -n NAME, --name NAME
              Name of the new snapshot.  This is optional and if not
              specified, the default backend policy will be applied.

   Add command
       ssm add [-h] [-p POOL] device [device ...]

       This command adds a device into the pool.  By default, the device
       will not be added if it is already a part of a different pool, but
       the user will be asked whether or not to remove the device from its
       pool.  When multiple devices are provided, all of them are added
       into the pool.  If one of the devices cannot be added into the pool
       for any reason, the add command will fail.  If no pool is specified,
       the default pool will be chosen.  In the case of a non-existing
       pool, it will be created using the provided devices.

       -h, --help
              show this help message and exit

       -p POOL, --pool POOL
              Pool to add the device into.  If not specified, the default
              pool is used.

   Mount command
       ssm mount [-h] [-o OPTIONS] volume directory

       This command will mount the volume at the specified directory.  The
       volume can be specified in the same way as with mount(8), however in
       addition, one can also specify a volume in the format as it appears
       in the ssm list table.

       For example, instead of finding out what the device and subvolume id
       of the btrfs subvolume "btrfs_pool:vol001" is in order to mount it,
       one can simply call ssm mount btrfs_pool:vol001 /mnt/test.

       One can also specify OPTIONS in the same way as with mount(8).

       -h, --help
              show this help message and exit

       -o OPTIONS, --options OPTIONS
              Options are specified with a -o flag followed by a
              comma-separated string of options.  This option is equivalent
              to the same mount(8) option.

   Migrate command
       ssm migrate [-h] source target

       Move data from one device to another.  For btrfs and lvm, their
       specialized utilities are used, so the data are moved in an
       all-or-nothing fashion and no other operation is needed to add or
       remove the devices or rebalance the pool.  Devices that do not
       belong to a backend that supports specialized device migration tools
       will be migrated using dd.

       This operation is not intended to be used for duplication, because
       the process can change metadata, and access to the data may become
       difficult.

       -h, --help
              show this help message and exit
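For instance (device names are hypothetical), data can be moved off a disk that is about to be retired:

```shell
# Move all data from /dev/sda to /dev/sdb; when the source belongs to
# an lvm or btrfs pool, the backend's own migration tooling is used,
# so the pool stays consistent during the move
ssm migrate /dev/sda /dev/sdb
```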

BACK-ENDS
   Introduction
       Ssm aims to create a unified user interface for various technologies
       like Device Mapper (dm), the Btrfs file system, Multiple Devices
       (md) and possibly more.  In order to do so we have a core
       abstraction layer in ssmlib/main.py.  This abstraction layer should
       ideally know nothing about the underlying technology, but rather
       comply with the device, pool and volume abstractions.

       Various backends can be registered in ssmlib/main.py in order to
       handle a specific storage technology, implementing methods to
       create, snapshot, or remove volumes and pools.  The core will then
       call these methods to manage the storage without needing to know
       what lies underneath it.  There are already several backends
       registered in ssm.

   Btrfs backend
       Btrfs is a file system with many advanced features, including volume
       management.  This is the reason why btrfs is handled differently
       than other conventional file systems in ssm.  It is used as a volume
       management backend.

       Pools, volumes and snapshots can be created with the btrfs backend,
       and here is what that means from the btrfs point of view:

       pool   A pool is actually a btrfs file system itself, because it can
              be extended by adding more devices, or shrunk by removing
              devices from it.  Subvolumes and snapshots can also be
              created.  When a new btrfs pool should be created, ssm simply
              creates a btrfs file system, which means that every new btrfs
              pool has one volume of the same name as the pool itself which
              cannot be removed without removing the entire pool.  The
              default btrfs pool name is btrfs_pool.

              When creating a new btrfs pool, the name of the pool is used
              as the file system label.  If there is an already existing
              btrfs file system in the system without a label, a btrfs
              pool name will be generated for internal use in the following
              format "btrfs_{device base name}".

              A btrfs pool is created when the create or add command is
              used with specified devices and a non-existing pool name.

       volume A volume in the btrfs backend is actually just a btrfs
              subvolume, with the exception of the first volume created on
              btrfs pool creation, which is the file system itself.
              Subvolumes can only be created on the btrfs file system when
              it is mounted, but the user does not have to worry about that
              since ssm will automatically mount the file system
              temporarily in order to create a new subvolume.

              The volume name is used as the subvolume path in the btrfs
              file system, and every object in this path must exist in
              order to create a volume.  The volume name used for internal
              tracking, and visible to the user, is generated in the format
              "{pool_name}:{volume name}", but volumes can also be
              referenced by their mount point.

              Btrfs volumes are only shown in the list output when the file
              system is mounted, with the exception of the main btrfs
              volume - the file system itself.

              Also note that btrfs volumes and subvolumes cannot be
              resized.  This is mainly a limitation of the btrfs tools,
              which currently do not work reliably.

              A new btrfs volume can be created with the create command.

       snapshot
              The btrfs file system supports subvolume snapshotting, so you
              can take a snapshot of any btrfs volume in the system with
              ssm.  However, btrfs does not distinguish between subvolumes
              and snapshots, because a snapshot is actually just a
              subvolume with some blocks shared with a different subvolume.
              This means that ssm is not able to directly recognize a btrfs
              snapshot.  Instead, ssm will try to recognize a special name
              format of the btrfs volume that denotes it is a snapshot.
              However, if a NAME that does not match the special pattern is
              specified when creating the snapshot, the snapshot will not
              be recognized by ssm and will be listed as a regular btrfs
              volume.

              A new btrfs snapshot can be created with the snapshot
              command.

       device Btrfs does not require a special device to be created on.

   Lvm backend
       Pools, volumes and snapshots can be created with lvm, and they
       closely match the lvm abstractions.

       pool   An lvm pool is just a volume group in lvm language.  It means
              that it is grouping devices, and new logical volumes can be
              created out of the lvm pool.  The default lvm pool name is
              lvm_pool.

              An lvm pool is created when the create or add commands are
              used with specified devices and a non-existing pool name.

              Alternatively, a thin pool can be created as a result of
              using the --virtual-size option to create a thin volume.

       volume An lvm volume is just a logical volume in lvm language.  An
              lvm volume can be created with the create command.

       snapshot
              Lvm volumes can be snapshotted as well.  When a snapshot is
              created from the lvm volume, a new snapshot volume is
              created, which can be handled as any other lvm volume.
              Unlike btrfs, lvm is able to distinguish a snapshot from a
              regular volume, so there is no need for the snapshot name to
              match a special pattern.

       device Lvm requires a physical volume to be created on the device,
              but with ssm this is transparent for the user.

   Crypt backend
       The crypt backend in ssm uses cryptsetup and the dm-crypt target to
       manage encrypted volumes.  The crypt backend can be used as a
       regular backend for creating encrypted volumes on top of regular
       block devices, or even other volumes (lvm or md volumes for
       example).  Or it can be used to create encrypted lvm volumes right
       away in a single step.

       Only volumes can be created with the crypt backend.  This backend
       does not support pooling and does not require special devices.

       pool   The crypt backend does not support pooling, and it is not
              possible to create a crypt pool or add a device into a pool.

       volume A volume in the crypt backend is the volume created by
              dm-crypt which represents the data on the original encrypted
              device in unencrypted form.  The crypt backend does not
              support pooling, so only one device can be used to create a
              crypt volume.  It also does not support raid or any device
              concatenation.

              Currently two modes, or extensions, are supported: luks and
              plain.  Luks is used by default.  For more information about
              the extensions, please see the cryptsetup manual page.

       snapshot
              The crypt backend does not support snapshotting, however if
              the encrypted volume is created on top of an lvm volume, the
              lvm volume itself can be snapshotted.  The snapshot can then
              be opened by using cryptsetup.  It is possible that this
              might change in the future so that ssm will be able to
              activate the volume directly without the extra step.

       device The crypt backend does not require a special device to be
              created on.

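As an illustration (the device name below is hypothetical), an encrypted volume can be created directly on a block device, or on top of lvm in a single step:

```shell
# LUKS-encrypted volume directly on a plain block device
ssm -b crypt create -e luks /dev/sdx
# Encrypted lvm volume created in one step with the default backend
ssm create -e luks --size 10G /dev/sdx
```
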
   MD backend
       The MD backend in ssm is currently limited to only gathering
       information about MD volumes in the system.  You cannot create or
       manage MD volumes or pools, but this functionality will be extended
       in the future.

   Multipath backend
       The multipath backend in ssm is currently limited to only gathering
       information about multipath volumes in the system.  You cannot
       create or manage multipath volumes or pools, but this functionality
       will be extended in the future.

EXAMPLES
       List system storage information:

          # ssm list

       List all pools in the system:

          # ssm list pools

       Create a new 100GB volume with the default lvm backend using
       /dev/sda and /dev/sdb with an xfs file system:

          # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

       Create a new volume with the btrfs backend using /dev/sda and
       /dev/sdb and make the volume RAID 1:

          # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

       Using the lvm backend, create a RAID 0 volume with devices /dev/sda
       and /dev/sdb with 128kB stripe size, ext4 file system and mount it
       on /home:

          # ssm create --raid 0 --stripesize 128k /dev/sda /dev/sdb /home

       Create a new thinly provisioned volume with the lvm backend using
       devices /dev/sda and /dev/sdb using the --virtual-size option:

          # ssm create --virtual-size 1T /dev/sda /dev/sdb

       Create a new thinly provisioned volume with a defined thin pool size
       and devices /dev/sda and /dev/sdb:

          # ssm create --size 50G --virtual-size 1T /dev/sda /dev/sdb

       Extend btrfs volume btrfs_pool by 500GB and use /dev/sdc and
       /dev/sde to cover the resize:

          # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

          # ssm resize -s-1t /dev/lvm_pool/lvol001

       Remove the /dev/sda device from the pool, remove the btrfs_pool
       pool, and also remove the volume /dev/lvm_pool/lvol001:

          # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

          # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

          # ssm add -p btrfs_pool /dev/sda /dev/sdb

       Mount btrfs subvolume btrfs_pool:vol001 on /mnt/test:

          # ssm mount btrfs_pool:vol001 /mnt/test

ENVIRONMENT VARIABLES
       SSM_DEFAULT_BACKEND
              Specify which backend will be used by default.  This can be
              overridden by specifying the -b or --backend argument.
              Currently only lvm and btrfs are supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if the -p or --pool
              argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if the -p or --pool
              argument is omitted.

       SSM_PREFIX_FILTER
              When this is set, ssm will filter out all devices, volumes
              and pools whose name does not start with this prefix.  It is
              used mainly in the ssm test suite to make sure that we do not
              scramble the local system configuration.
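For example, the defaults can be adjusted per shell session before invoking ssm (the pool name below is hypothetical):

```shell
# Make btrfs the default backend and pick a custom default lvm pool name
export SSM_DEFAULT_BACKEND=btrfs
export SSM_LVM_DEFAULT_POOL=my_pool
```

Passing -b or --backend on the command line still overrides SSM_DEFAULT_BACKEND for that single invocation.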

LICENCE
       (C)2017 Red Hat, Inc., Jan Tulak <jtulak@redhat.com>
       (C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation, either version 2 of the License, or
       (at your option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program.  If not, see <http://www.gnu.org/licenses/>.

REQUIREMENTS
       Python 2.6 or higher is required to run this tool.  System Storage
       Manager can only be run as root, since most of the commands require
       root privileges.

       There are other requirements listed below, but note that you do not
       necessarily need all dependencies for all backends.  However, if
       some of the tools required by a backend are missing, that backend
       will not work.

   Python modules
       · argparse

       · atexit

       · base64

       · datetime

       · fcntl

       · getpass

       · os

       · pwquality

       · re

       · socket

       · stat

       · struct

       · subprocess

       · sys

       · tempfile

       · termios

       · threading

       · tty

   System tools
       · tune2fs

       · fsck.SUPPORTED_FS

       · resize2fs

       · xfs_db

       · xfs_check

       · xfs_growfs

       · mkfs.SUPPORTED_FS

       · which

       · mount

       · blkid

       · wipefs

       · dd

   Lvm backend
       · lvm2 binaries

       Some distributions (e.g. Debian) have the thin provisioning tools
       for LVM as an optional dependency, while others install them
       automatically.  Thin provisioning without these tools installed is
       not supported by SSM.

   Btrfs backend
       · btrfs progs

   Crypt backend
       · dmsetup

       · cryptsetup

   Multipath backend
       · multipath

AVAILABILITY
       System storage manager is available from
       http://system-storage-manager.github.io.  You can subscribe to
       storagemanager-devel@lists.sourceforge.net to follow the current
       development.

AUTHOR
       Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT
       2020, Red Hat, Inc., Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner
       <lczerner@redhat.com>

1.3                              Jan 31, 2020                           SSM(8)