SSM(8)                      System Storage Manager                      SSM(8)

NAME
       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS
       ssm [-h] [--version] [-v] [-f] [-b BACKEND] [-n]
           {check,resize,create,list,add,remove,snapshot,mount} ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
           [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
           [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       ssm list [-h]
           [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm remove [-h] [-a] [items [items ...]]

       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

       ssm mount [-h] [-o OPTIONS] volume directory

DESCRIPTION
       System Storage Manager provides an easy-to-use command line interface
       to manage your storage using various technologies like lvm, btrfs,
       encrypted volumes and more.

       In more sophisticated enterprise storage environments, management with
       Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices
       (md) is becoming increasingly difficult. With file systems added to
       the mix, the number of tools needed to configure and manage storage
       has grown so large that it is simply not user friendly. With so many
       options for a system administrator to consider, the opportunity for
       errors and problems is large.

       The btrfs administration tools have shown us that storage management
       can be simplified, and we are working to bring that ease of use to
       Linux file systems in general.

OPTIONS
       -h, --help
              show this help message and exit

       --version
              show program's version number and exit

       -v, --verbose
              Show additional information while executing.

       -f, --force
              Force execution in the case where ssm has some doubts or
              questions.

       -b BACKEND, --backend BACKEND
              Choose the backend to use. Currently you can choose from
              (lvm, btrfs, crypt).

       -n, --dry-run
              Dry run. Do not do anything, just parse the command line
              options and gather system information if necessary. Note that
              with this option ssm will not perform all the checks, as some
              of them are done by the backends themselves. This option is
              mainly used for debugging purposes, but still requires root
              privileges.

SYSTEM STORAGE MANAGER COMMANDS
   Introduction
       System Storage Manager has several commands that you can specify on
       the command line as a first argument to ssm. They all have a specific
       use and their own arguments, but global ssm arguments are propagated
       to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
       [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
       [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       This command creates a new volume with defined parameters. If a
       device is provided, it will be used to create the volume, hence it
       will be added into the pool prior to volume creation (see the Add
       command section). More than one device can be used to create a
       volume.

       If the device is already being used in a different pool, then ssm
       will ask you whether you want to remove it from the original pool. If
       you decline, or the removal fails, then the volume creation fails if
       the SIZE was not provided. On the other hand, if the SIZE is provided
       and some devices can not be added to the pool, the volume creation
       might still succeed if there is enough space in the pool.

       In addition to specifying the size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       the volume size to be 70% of the total pool size. Additionally, a
       percentage of the used, or free pool space can be specified as well
       using the keywords FREE, or USED respectively.

       The POOL name can be specified as well. If the pool exists, a new
       volume will be created from that pool (optionally adding the device
       into the pool). However, if the POOL does not exist, then ssm will
       attempt to create a new pool with the provided device, and then
       create a new volume from this pool. If the --backend argument is
       omitted, the default ssm backend will be used. The default backend is
       lvm.

       ssm also supports creating a RAID configuration, however some
       back-ends might not support all RAID levels, or may not even support
       RAID at all. In this case, volume creation will fail.

       If a mount point is provided, ssm will attempt to mount the volume
       after it is created. However, it will fail if a mountable file system
       is not present on the volume.

       If the backend allows it (currently only supported with the lvm
       backend), ssm can be used to create thinly provisioned volumes by
       specifying the --virtual-size option. This will automatically create
       a thin pool of the size given with the --size option, and a thin
       volume of the size given with the --virtual-size option and the name
       given with the --name option. The virtual size can be much bigger
       than the available space in the pool.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new logical volume. A size
              suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define
              'power of two' units. If no unit is provided, it defaults to
              kilobytes. This is optional and if not given, the maximum
              possible size will be used. Additionally the new size can be
              specified as a percentage of the total pool size (50%), as a
              percentage of free pool space (50%FREE), or as a percentage of
              used pool space (50%USED).

       -n NAME, --name NAME
              The name for the new logical volume. This is optional and if
              omitted, a name will be generated by the corresponding
              backend.

       --fstype FSTYPE
              Gives the file system type to create on the new logical
              volume. Supported file systems are (ext3, ext4, xfs, btrfs).
              This is optional and if not given, a file system will not be
              created.

       -r LEVEL, --raid LEVEL
              Specify a RAID level you want to use when creating a new
              volume. Note that some backends might not implement all
              supported RAID levels. This is optional and if not specified,
              a linear volume will be created. You can choose from the
              following list of supported levels (0,1,10).

       -I STRIPESIZE, --stripesize STRIPESIZE
              Gives the number of kilobytes for the granularity of stripes.
              This is optional and if not given, the backend default will be
              used. Note that you have to specify a RAID level as well.

       -i STRIPES, --stripes STRIPES
              Gives the number of stripes. This is equal to the number of
              physical volumes to scatter the logical volume over. This is
              optional and if stripesize is set and multiple devices are
              provided, stripes is determined automatically from the number
              of devices. Note that you have to specify a RAID level as
              well.

       -p POOL, --pool POOL
              Pool to use to create the new volume.

       -e [{luks,plain}], --encrypt [{luks,plain}]
              Create an encrypted volume. The extension to use can be
              specified.

       -o MNT_OPTIONS, --mnt-options MNT_OPTIONS
              Mount options are specified with a -o flag followed by a comma
              separated string of options. This option is equivalent to the
              -o mount(8) option.

       -v VIRTUAL_SIZE, --virtual-size VIRTUAL_SIZE
              Gives the virtual size for the new thinly provisioned volume.
              A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to
              define 'power of two' units. If no unit is provided, it
              defaults to kilobytes.

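       The 'power of two' suffixes above are IEC multiples (K = 1024,
       M = 1024^2, and so on). As an illustration only, their byte values
       can be checked with GNU numfmt from coreutils (not part of ssm); note
       one difference: numfmt treats a bare number as bytes, whereas ssm
       defaults to kilobytes.

```shell
# IEC 'power of two' multiples, as used by ssm size suffixes.
numfmt --from=iec 1K     # 1024 bytes
numfmt --from=iec 100G   # 107374182400 bytes
```
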
   List command
       ssm list [-h]
       [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       Lists information about all detected devices, pools, volumes and
       snapshots found on the system. The list command can be used either
       alone to list all of the information, or you can request specific
       sections only.

       The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

       {devices | dev}
              List information about all devices found on the system. Some
              devices are intentionally hidden, like for example cdrom or
              DM/MD devices, since those are actually listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing file systems
              found in the system.

       {snapshots | snap}
              List information about all snapshots found in the system. Note
              that some back-ends do not support snapshotting and some
              cannot distinguish a snapshot from regular volumes. In this
              case, ssm will try to recognize the volume name in order to
              identify a snapshot, but if the ssm regular expression does
              not match the snapshot pattern, the problematic snapshot will
              not be recognized.

       -h, --help
              show this help message and exit

   Remove command
       ssm remove [-h] [-a] [items [items ...]]

       This command removes an item from the system. Multiple items can be
       specified. If the item cannot be removed for some reason, it will be
       skipped.

       An item can be any of the following:

       device Remove a device from the pool. Note that this cannot be done
              in some cases where the device is being used by the pool. You
              can use the -f argument to force removal. If the device does
              not belong to any pool, it will be skipped.

       pool   Remove a pool from the system. This will also remove all
              volumes created from that pool.

       volume Remove a volume from the system. Note that this will fail if
              the volume is mounted and cannot be forced with -f.

       -h, --help
              show this help message and exit

       -a, --all
              Remove all pools in the system.

   Resize command
       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       Change the size of the volume and file system. If there is no file
       system, only the volume itself will be resized. You can specify a
       device to add into the volume pool prior to the resize. Note that the
       device will only be added into the pool if the volume size is going
       to grow.

       If the device is already used in a different pool, then ssm will ask
       you whether or not you want to remove it from the original pool.

       In some cases, the file system has to be mounted in order to resize.
       This will be handled by ssm automatically by mounting the volume
       temporarily.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to resize the
       volume to 70% of its original size. Additionally, a percentage of the
       used, or free pool space can be specified as well using the keywords
       FREE, or USED respectively.

       Note that resizing a btrfs subvolume is not supported; only the whole
       file system can be resized.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              New size of the volume. With the + or - sign the value is
              added to or subtracted from the actual size of the volume, and
              without it the value will be set as the new volume size. A
              size suffix of [k|K] for kilobytes, [m|M] for megabytes, [g|G]
              for gigabytes, [t|T] for terabytes or [p|P] for petabytes is
              optional. If no unit is provided the default is kilobytes.
              Additionally the new size can be specified as a percentage of
              the original volume size ([+][-]50%), as a percentage of free
              pool space ([+][-]50%FREE), or as a percentage of used pool
              space ([+][-]50%USED).

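       To make the +/- semantics concrete, here is a minimal shell sketch.
       The new_size helper is hypothetical, not ssm code; it treats sizes as
       IEC byte values and omits the percentage forms and the kilobyte
       default for brevity.

```shell
# new_size CURRENT_BYTES SIZEARG: a leading + grows, a leading - shrinks,
# otherwise SIZEARG becomes the new absolute size.
new_size() {
  cur=$1; arg=$2
  case $arg in
    +*) echo $(( cur + $(numfmt --from=iec "${arg#+}") )) ;;
    -*) echo $(( cur - $(numfmt --from=iec "${arg#-}") )) ;;
    *)  numfmt --from=iec "$arg" ;;
  esac
}
new_size 1073741824 +500M   # a 1G volume grown by 500M: 1598029824
new_size 1073741824 -1M     # shrunk by 1M: 1072693248
```
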
   Check command
       ssm check [-h] device [device ...]

       Check the file system consistency on the volume. You can specify
       multiple volumes to check. If there is no file system on the volume,
       the volume will be skipped.

       In some cases the file system has to be mounted in order to check the
       file system. This will be handled by ssm automatically by mounting
       the volume temporarily.

       -h, --help
              show this help message and exit

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       Take a snapshot of an existing volume. This operation will fail if
       the back-end to which the volume belongs does not support
       snapshotting. Note that you cannot specify both NAME and DEST since
       those options are mutually exclusive.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       the new snapshot size to be 70% of the origin volume size.
       Additionally, a percentage of the used, or free pool space can be
       specified as well using the keywords FREE, or USED respectively.

       In some cases the file system has to be mounted in order to take a
       snapshot of the volume. This will be handled by ssm automatically by
       mounting the volume temporarily.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new snapshot volume. A size
              suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define
              'power of two' units. If no unit is provided, it defaults to
              kilobytes. This is optional and if not given, the size will be
              determined automatically. Additionally the new size can be
              specified as a percentage of the original volume size (50%),
              as a percentage of free pool space (50%FREE), or as a
              percentage of used pool space (50%USED).

       -d DEST, --dest DEST
              Destination of the snapshot, specified as an absolute path to
              be used for the new snapshot. This is optional and if not
              specified, the default backend policy will be applied.

       -n NAME, --name NAME
              Name of the new snapshot. This is optional and if not
              specified, the default backend policy will be applied.

   Add command
       ssm add [-h] [-p POOL] device [device ...]

       This command adds a device into the pool. By default, the device will
       not be added if it is already a part of a different pool, but the
       user will be asked whether or not to remove the device from its pool.
       When multiple devices are provided, all of them are added into the
       pool. If one of the devices cannot be added into the pool for any
       reason, the add command will fail. If no pool is specified, the
       default pool will be chosen. In the case of a non-existing pool, it
       will be created using the provided devices.

       -h, --help
              show this help message and exit

       -p POOL, --pool POOL
              Pool to add the device into. If not specified, the default
              pool is used.

   Mount command
       ssm mount [-h] [-o OPTIONS] volume directory

       This command will mount the volume at the specified directory. The
       volume can be specified in the same way as with mount(8); in
       addition, one can also specify a volume in the format as it appears
       in the ssm list table.

       For example, instead of finding out what the device and subvolume id
       of the btrfs subvolume "btrfs_pool:vol001" are in order to mount it,
       one can simply call ssm mount btrfs_pool:vol001 /mnt/test.

       One can also specify OPTIONS in the same way as with mount(8).

       -h, --help
              show this help message and exit

       -o OPTIONS, --options OPTIONS
              Options are specified with a -o flag followed by a comma
              separated string of options. This option is equivalent to the
              same mount(8) option.

BACK-ENDS
   Introduction
       Ssm aims to create a unified user interface for various technologies
       like Device Mapper (dm), the Btrfs file system, Multiple Devices (md)
       and possibly more. In order to do so we have a core abstraction layer
       in ssmlib/main.py. This abstraction layer should ideally know nothing
       about the underlying technology, but rather comply with device, pool
       and volume abstractions.

       Various backends can be registered in ssmlib/main.py in order to
       handle a specific storage technology, implementing methods like
       create, snapshot, or remove for volumes and pools. The core will then
       call these methods to manage the storage without needing to know what
       lies underneath it. There are already several backends registered in
       ssm.

   Btrfs backend
       Btrfs is a file system with many advanced features, including volume
       management. This is the reason why btrfs is handled differently from
       other conventional file systems in ssm. It is used as a volume
       management back-end.

       Pools, volumes and snapshots can be created with the btrfs backend,
       and here is what that means from the btrfs point of view:

       pool   A pool is actually a btrfs file system itself, because it can
              be extended by adding more devices, or shrunk by removing
              devices from it. Subvolumes and snapshots can also be created.
              When a new btrfs pool should be created, ssm simply creates a
              btrfs file system, which means that every new btrfs pool has
              one volume of the same name as the pool itself which can not
              be removed without removing the entire pool. The default btrfs
              pool name is btrfs_pool.

              When creating a new btrfs pool, the name of the pool is used
              as the file system label. If there is an already existing
              btrfs file system in the system without a label, a btrfs pool
              name will be generated for internal use in the following
              format "btrfs_{device base name}".

              A btrfs pool is created when the create or add command is used
              with specified devices and a non-existing pool name.

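       That generated name can be sketched as follows (illustration only;
       /dev/sdb1 stands in for a hypothetical unlabeled btrfs device):

```shell
# "btrfs_{device base name}" for an unlabeled btrfs file system:
dev=/dev/sdb1
echo "btrfs_$(basename "$dev")"   # btrfs_sdb1
```
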
       volume A volume in the btrfs back-end is actually just a btrfs
              subvolume, with the exception of the first volume created on
              btrfs pool creation, which is the file system itself.
              Subvolumes can only be created on the btrfs file system when
              it is mounted, but the user does not have to worry about that
              since ssm will automatically mount the file system temporarily
              in order to create a new subvolume.

              The volume name is used as the subvolume path in the btrfs
              file system, and every object in this path must exist in order
              to create a volume. The volume name used for internal tracking
              and visible to the user is generated in the format
              "{pool_name}:{volume name}", but volumes can also be
              referenced by their mount point.

              The btrfs volumes are only shown in the list output when the
              file system is mounted, with the exception of the main btrfs
              volume - the file system itself.

              Also note that btrfs volumes and subvolumes cannot be resized.
              This is mainly a limitation of the btrfs tools, which
              currently do not work reliably.

              A new btrfs volume can be created with the create command.

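       The "{pool_name}:{volume name}" form splits on the first colon; a
       minimal sketch (not ssm code) using shell parameter expansion:

```shell
# Split an ssm volume reference into pool name and subvolume path.
ref="btrfs_pool:vol001"
pool=${ref%%:*}
vol=${ref#*:}
echo "$pool $vol"   # btrfs_pool vol001
```
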
       snapshot
              The btrfs file system supports subvolume snapshotting, so you
              can take a snapshot of any btrfs volume in the system with
              ssm. However, btrfs does not distinguish between subvolumes
              and snapshots, because a snapshot is actually just a subvolume
              with some blocks shared with a different subvolume. This means
              that ssm is not able to directly recognize a btrfs snapshot.
              Instead, ssm will try to recognize a special name format of
              the btrfs volume that denotes it is a snapshot. However, if a
              NAME which does not match the special pattern is specified
              when creating the snapshot, the snapshot will not be
              recognized by ssm and it will be listed as a regular btrfs
              volume.

              A new btrfs snapshot can be created with the snapshot command.

       device Btrfs does not require a special device to be created on.

   Lvm backend
       Pools, volumes and snapshots can be created with lvm, and they
       closely match the lvm abstractions.

       pool   An lvm pool is just a volume group in lvm language. It means
              that it is grouping devices, and new logical volumes can be
              created out of the lvm pool. The default lvm pool name is
              lvm_pool.

              An lvm pool is created when the create or add commands are
              used with specified devices and a non-existing pool name.

              Alternatively, a thin pool can be created as a result of using
              the --virtual-size option to create a thin volume.

       volume An lvm volume is just a logical volume in lvm language. An lvm
              volume can be created with the create command.

       snapshot
              Lvm volumes can be snapshotted as well. When a snapshot is
              created from the lvm volume, a new snapshot volume is created,
              which can be handled as any other lvm volume. Unlike btrfs,
              lvm is able to distinguish a snapshot from a regular volume,
              so there is no need for a snapshot name to match a special
              pattern.

       device Lvm requires a physical volume to be created on the device,
              but with ssm this is transparent to the user.

   Crypt backend
       The crypt backend in ssm uses cryptsetup and the dm-crypt target to
       manage encrypted volumes. The crypt backend can be used as a regular
       backend for creating encrypted volumes on top of regular block
       devices, or even other volumes (lvm or md volumes for example). Or it
       can be used to create encrypted lvm volumes right away in a single
       step.

       Only volumes can be created with the crypt backend. This backend does
       not support pooling and does not require special devices.

       pool   The crypt backend does not support pooling, and it is not
              possible to create a crypt pool or add a device into a pool.

       volume A volume in the crypt backend is the volume created by
              dm-crypt, which represents the data on the original encrypted
              device in unencrypted form. The crypt backend does not support
              pooling, so only one device can be used to create a crypt
              volume. It also does not support raid or any device
              concatenation.

              Currently two modes, or extensions, are supported: luks and
              plain. Luks is used by default. For more information about the
              extensions, please see the cryptsetup manual page.

       snapshot
              The crypt backend does not support snapshotting, however if
              the encrypted volume is created on top of an lvm volume, the
              lvm volume itself can be snapshotted. The snapshot can then be
              opened using cryptsetup. It is possible that this might change
              in the future so that ssm will be able to activate the volume
              directly without the extra step.

       device The crypt backend does not require a special device to be
              created on.

   MD backend
       The MD backend in ssm is currently limited to only gathering
       information about MD volumes in the system. You cannot create or
       manage MD volumes or pools, but this functionality will be extended
       in the future.

EXAMPLES
       List system storage information:

          # ssm list

       List all pools in the system:

          # ssm list pools

       Create a new 100GB volume with the default lvm backend using /dev/sda
       and /dev/sdb with an xfs file system:

          # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

       Create a new volume with the btrfs backend using /dev/sda and
       /dev/sdb, and let the volume be RAID 1:

          # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

       Using the lvm backend, create a RAID 0 volume with devices /dev/sda
       and /dev/sdb with 128kB stripe size and an ext4 file system, and
       mount it on /home:

          # ssm create --raid 0 --stripesize 128k --fs ext4 /dev/sda /dev/sdb /home

       Create a new thinly provisioned volume with the lvm backend using
       devices /dev/sda and /dev/sdb, using the --virtual-size option:

          # ssm create --virtual-size 1T /dev/sda /dev/sdb

       Create a new thinly provisioned volume with a defined thin pool size
       and devices /dev/sda and /dev/sdb:

          # ssm create --size 50G --virtual-size 1T /dev/sda /dev/sdb

       Extend btrfs volume btrfs_pool by 500GB and use /dev/sdc and /dev/sde
       to cover the resize:

          # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

          # ssm resize -s-1t /dev/lvm_pool/lvol001

       Remove the /dev/sda device from its pool, remove the btrfs_pool pool,
       and also remove the volume /dev/lvm_pool/lvol001:

          # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

          # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

          # ssm add -p btrfs_pool /dev/sda /dev/sdb

       Mount btrfs subvolume btrfs_pool:vol001 on /mnt/test:

          # ssm mount btrfs_pool:vol001 /mnt/test

ENVIRONMENT VARIABLES
       SSM_DEFAULT_BACKEND
              Specify which backend will be used by default. This can be
              overridden by specifying the -b or --backend argument.
              Currently only lvm and btrfs are supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if the -p or --pool
              argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if the -p or --pool
              argument is omitted.

       SSM_PREFIX_FILTER
              When this is set, ssm will filter out all devices, volumes and
              pools whose name does not start with this prefix. It is used
              mainly in the ssm test suite to make sure that we do not
              scramble the local system configuration.

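       For example, one might preset the backend and pool defaults before
       invoking ssm; the values below are illustrative only:

```shell
# Use btrfs by default, name the default lvm pool, and only look at
# objects whose names start with "test_" (e.g. for the test suite).
export SSM_DEFAULT_BACKEND=btrfs
export SSM_LVM_DEFAULT_POOL=my_pool
export SSM_PREFIX_FILTER=test_
```
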

LICENCE
       (C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation, either version 2 of the License, or (at
       your option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program. If not, see <http://www.gnu.org/licenses/>.

REQUIREMENTS
       Python 2.6 or higher is required to run this tool. System Storage
       Manager can only be run as root since most of the commands require
       root privileges.

       There are other requirements listed below, but note that you do not
       necessarily need all dependencies for all backends. However, if some
       of the tools required by a backend are missing, that backend will not
       work.

   Python modules
       · os

       · re

       · sys

       · stat

       · argparse

       · datetime

       · threading

       · subprocess

   System tools
       · tune2fs

       · fsck.SUPPORTED_FS

       · resize2fs

       · xfs_db

       · xfs_check

       · xfs_growfs

       · mkfs.SUPPORTED_FS

       · which

       · mount

       · blkid

       · wipefs

   Lvm backend
       · lvm2 binaries

   Btrfs backend
       · btrfs progs

   Crypt backend
       · dmsetup

       · cryptsetup

AVAILABILITY
       System storage manager is available from
       http://storagemanager.sourceforge.net. You can subscribe to
       storagemanager-devel@lists.sourceforge.net to follow the current
       development.

AUTHOR
       Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT
       2015, Red Hat, Inc., Lukáš Czerner <lczerner@redhat.com>

0.4                              July 01, 2016                          SSM(8)