SSM(8)                     System Storage Manager                     SSM(8)

NAME
       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS
       ssm [-h] [--version] [-v] [-f] [-b BACKEND] [-n]
           {check,resize,create,list,add,remove,snapshot,mount,migrate} ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
           [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
           [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       ssm list [-h]
           [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm remove [-h] [-a] [items [items ...]]

       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

       ssm mount [-h] [-o OPTIONS] volume directory

       ssm migrate [-h] source target

DESCRIPTION
       System Storage Manager provides an easy-to-use command line
       interface to manage your storage using various technologies like
       lvm, btrfs, encrypted volumes and more.

       In more sophisticated enterprise storage environments, management
       with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple
       Devices (md) is becoming increasingly difficult. With file systems
       added to the mix, the number of tools needed to configure and
       manage storage has grown so large that it is simply not user
       friendly. With so many options for a system administrator to
       consider, the opportunity for errors and problems is large.

       The btrfs administration tools have shown us that storage
       management can be simplified, and we are working to bring that
       ease of use to Linux file systems in general.

OPTIONS
       -h, --help
              show this help message and exit

       --version
              show program's version number and exit

       -v, --verbose
              Show additional information while executing.

       -f, --force
              Force execution in the case where ssm has some doubts or
              questions.

       -b BACKEND, --backend BACKEND
              Choose which backend to use. Currently you can choose from
              (lvm,btrfs,crypt,multipath).

       -n, --dry-run
              Dry run. Do not do anything, just parse the command line
              options and gather system information if necessary. Note
              that with this option ssm will not perform all the checks,
              as some of them are done by the backends themselves. This
              option is mainly used for debugging purposes, but still
              requires root privileges.

SYSTEM STORAGE MANAGER COMMANDS
   Introduction
       System Storage Manager has several commands that you can specify
       on the command line as a first argument to ssm. They all have a
       specific use and their own arguments, but global ssm arguments are
       propagated to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
           [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
           [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device ...]] [mount]

       This command creates a new volume with defined parameters. If a
       device is provided, it will be used to create the volume, hence it
       will be added into the pool prior to volume creation (see the Add
       command section). More than one device can be used to create a
       volume.

       If the device is already being used in a different pool, then ssm
       will ask you whether you want to remove it from the original pool.
       If you decline, or the removal fails, then the volume creation
       fails if the SIZE was not provided. On the other hand, if the SIZE
       is provided and some devices cannot be added to the pool, the
       volume creation might still succeed if there is enough space in
       the pool.

       In addition to specifying the size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to
       indicate that the volume size should be 70% of the total pool
       size. Additionally, a percentage of the used or free pool space
       can be specified using the keywords USED or FREE respectively.

       The POOL name can be specified as well. If the pool exists, a new
       volume will be created from that pool (optionally adding the
       device into the pool). However, if the POOL does not exist, then
       ssm will attempt to create a new pool with the provided device,
       and then create a new volume from this pool. If the --backend
       argument is omitted, the default ssm backend will be used. The
       default backend is lvm.

       ssm also supports creating a RAID configuration; however, some
       backends might not support all RAID levels, or may not even
       support RAID at all. In this case, volume creation will fail.

       If a mount point is provided, ssm will attempt to mount the volume
       after it is created. However, it will fail if a mountable file
       system is not present on the volume.

       If the backend allows it (currently only supported with the lvm
       backend), ssm can be used to create thinly provisioned volumes by
       specifying the --virtual-size option. This will automatically
       create a thin pool of the size given with the --size option, and a
       thin volume of the size given with the --virtual-size option and
       the name given with the --name option. The virtual size can be
       much bigger than the available space in the pool.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new logical volume. A
              size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to
              define 'power of two' units. If no unit is provided, it
              defaults to kilobytes. This is optional and if not given,
              the maximum possible size will be used. Additionally, the
              new size can be specified as a percentage of the total pool
              size (50%), as a percentage of free pool space (50%FREE),
              or as a percentage of used pool space (50%USED).

       -n NAME, --name NAME
              The name for the new logical volume. This is optional and
              if omitted, a name will be generated by the corresponding
              backend.

       --fstype FSTYPE
              Gives the file system type to create on the new logical
              volume. Supported file systems are (ext3, ext4, xfs,
              btrfs). This is optional and if not given, a file system
              will not be created.

       -r LEVEL, --raid LEVEL
              Specify a RAID level you want to use when creating a new
              volume. Note that some backends might not implement all
              supported RAID levels. This is optional and if not
              specified, a linear volume will be created. You can choose
              from the following list of supported levels (0,1,10).

       -I STRIPESIZE, --stripesize STRIPESIZE
              Gives the number of kilobytes for the granularity of
              stripes. This is optional and if not given, the backend
              default will be used. Note that you have to specify the
              RAID level as well.

       -i STRIPES, --stripes STRIPES
              Gives the number of stripes. This is equal to the number of
              physical volumes to scatter the logical volume over. This
              is optional; if the stripe size is set and multiple devices
              are provided, the number of stripes is determined
              automatically from the number of devices. Note that you
              have to specify the RAID level as well.

       -p POOL, --pool POOL
              Pool to use to create the new volume.

       -e [{luks,plain}], --encrypt [{luks,plain}]
              Create an encrypted volume. The extension to use can be
              specified.

       -o MNT_OPTIONS, --mnt-options MNT_OPTIONS
              Mount options are specified with the -o flag followed by a
              comma separated string of options. This option is
              equivalent to the -o mount(8) option.

       -v VIRTUAL_SIZE, --virtual-size VIRTUAL_SIZE
              Gives the virtual size for the new thinly provisioned
              volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be
              used to define 'power of two' units. If no unit is
              provided, it defaults to kilobytes.

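       The size grammar described above (the 'power of two' suffixes and
       the %, %FREE and %USED percentage forms) can be sketched as a
       small parser. This is an illustration of the documented rules
       only; the function name and the kilobyte return unit are
       assumptions, not ssm's internal API:

```python
import re

# Binary ('power of two') multipliers for the documented suffixes.
# A bare number defaults to kilobytes, as described above.
UNITS = {'k': 1, 'm': 2, 'g': 3, 't': 4, 'p': 5, 'e': 6}

def parse_size(spec, pool_size=None, pool_free=None, pool_used=None):
    """Resolve a --size argument to kilobytes (illustrative sketch).

    Percentage forms (50%, 50%FREE, 50%USED) are resolved against the
    pool figures passed in, which are assumed to be in kilobytes.
    """
    m = re.fullmatch(r'(\d+(?:\.\d+)?)%(FREE|USED)?', spec)
    if m:
        base = {None: pool_size,
                'FREE': pool_free,
                'USED': pool_used}[m.group(2)]
        return float(m.group(1)) / 100.0 * base
    m = re.fullmatch(r'(\d+(?:\.\d+)?)([KkMmGgTtPpEe])?', spec)
    if not m:
        raise ValueError('invalid size: %r' % spec)
    exp = UNITS[m.group(2).lower()] if m.group(2) else 1
    return float(m.group(1)) * 1024 ** (exp - 1)
```

       For example, under these assumptions '100G' resolves to
       100*1024*1024 kilobytes, and '50%FREE' to half of the free pool
       space.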
   List command
       ssm list [-h]
           [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       Lists information about all detected devices, pools, volumes and
       snapshots found on the system. The list command can be used either
       alone to list all of the information, or you can request specific
       sections only.

       The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

       {devices | dev}
              List information about all devices found on the system.
              Some devices are intentionally hidden, for example cdrom or
              DM/MD devices, since those are actually listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing filesystems
              found in the system.

       {snapshots | snap}
              List information about all snapshots found in the system.
              Note that some backends do not support snapshotting and
              some cannot distinguish a snapshot from a regular volume.
              In this case, ssm will try to recognize the volume name in
              order to identify a snapshot, but if the ssm regular
              expression does not match the snapshot pattern, the
              problematic snapshot will not be recognized.

       -h, --help
              show this help message and exit

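       The name-based snapshot recognition mentioned above can be
       illustrated with a tiny sketch. The pattern below is purely
       hypothetical; ssm's actual regular expression is internal to the
       tool and may differ:

```python
import re

# Hypothetical snapshot-name pattern, for illustration only.
# ssm's real internal regular expression may look different.
SNAPSHOT_RE = re.compile(r'^snap\d{8}T\d{6}$')

def looks_like_snapshot(volume_name):
    """Classify a volume as a snapshot purely by its name."""
    return bool(SNAPSHOT_RE.match(volume_name))
```

       A volume whose name does not match the pattern is simply listed
       as a regular volume, exactly as described above.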
   Remove command
       ssm remove [-h] [-a] [items [items ...]]

       This command removes an item from the system. Multiple items can
       be specified. If the item cannot be removed for some reason, it
       will be skipped.

       An item can be any of the following:

       device Remove a device from the pool. Note that this cannot be
              done in some cases where the device is being used by the
              pool. You can use the -f argument to force removal. If the
              device does not belong to any pool, it will be skipped.

       pool   Remove a pool from the system. This will also remove all
              volumes created from that pool.

       volume Remove a volume from the system. Note that this will fail
              if the volume is mounted and cannot be forced with -f.

       -h, --help
              show this help message and exit

       -a, --all
              Remove all pools in the system.

   Resize command
       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       Change the size of the volume and file system. If there is no file
       system, only the volume itself will be resized. You can specify a
       device to add into the volume pool prior to the resize. Note that
       the device will only be added into the pool if the volume size is
       going to grow.

       If the device is already used in a different pool, then ssm will
       ask you whether or not you want to remove it from the original
       pool.

       In some cases, the file system has to be mounted in order to
       resize. This will be handled by ssm automatically by mounting the
       volume temporarily.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to resize
       the volume to 70% of its original size. Additionally, a percentage
       of the used or free pool space can be specified as well using the
       keywords USED or FREE respectively.

       Note that resizing a btrfs subvolume is not supported; only the
       whole file system can be resized.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              New size of the volume. With the + or - sign the value is
              added to or subtracted from the actual size of the volume,
              and without it the value will be set as the new volume
              size. A size suffix of [k|K] for kilobytes, [m|M] for
              megabytes, [g|G] for gigabytes, [t|T] for terabytes or
              [p|P] for petabytes is optional. If no unit is provided the
              default is kilobytes. Additionally, the new size can be
              specified as a percentage of the original volume size
              ([+][-]50%), as a percentage of free pool space
              ([+][-]50%FREE), or as a percentage of used pool space
              ([+][-]50%USED).

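       How the documented +/- signs and percentage forms combine with the
       current volume size can be sketched as follows. This is
       illustrative only, not ssm's implementation: unit suffixes are
       left out (all values share one unit) and the function name is made
       up:

```python
def resolve_resize(spec, current, pool_free=0, pool_used=0):
    """Resolve a resize --size spec to an absolute size.

    A leading + or - adjusts the current volume size; without it the
    value becomes the new size outright. Percentage forms scale the
    original volume size, free pool space, or used pool space,
    mirroring the [+][-]50%, [+][-]50%FREE and [+][-]50%USED forms.
    """
    sign = 0
    if spec[0] in '+-':
        sign = 1 if spec[0] == '+' else -1
        spec = spec[1:]
    if spec.endswith('%FREE'):
        value = float(spec[:-5]) / 100.0 * pool_free
    elif spec.endswith('%USED'):
        value = float(spec[:-5]) / 100.0 * pool_used
    elif spec.endswith('%'):
        value = float(spec[:-1]) / 100.0 * current
    else:
        value = float(spec)
    return current + sign * value if sign else value
```

       For example, '+50%' grows a volume by half of its original size,
       while a plain '200' sets the size to 200 units regardless of the
       current size.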
   Check command
       ssm check [-h] device [device ...]

       Check the file system consistency on the volume. You can specify
       multiple volumes to check. If there is no file system on the
       volume, the volume will be skipped.

       In some cases the file system has to be mounted in order to check
       the file system. This will be handled by ssm automatically by
       mounting the volume temporarily.

       -h, --help
              show this help message and exit

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       Take a snapshot of an existing volume. This operation will fail if
       the back-end to which the volume belongs does not support
       snapshotting. Note that you cannot specify both NAME and DEST
       since those options are mutually exclusive.

       In addition to specifying the size of the new snapshot directly, a
       percentage can be specified as well. Specify --size 70% to
       indicate that the new snapshot size should be 70% of the origin
       volume size. Additionally, a percentage of the used or free pool
       space can be specified as well using the keywords USED or FREE
       respectively.

       In some cases the file system has to be mounted in order to take a
       snapshot of the volume. This will be handled by ssm automatically
       by mounting the volume temporarily.

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              Gives the size to allocate for the new snapshot volume. A
              size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to
              define 'power of two' units. If no unit is provided, it
              defaults to kilobytes. This is optional and if not given,
              the size will be determined automatically. Additionally,
              the new size can be specified as a percentage of the
              original volume size (50%), as a percentage of free pool
              space (50%FREE), or as a percentage of used pool space
              (50%USED).

       -d DEST, --dest DEST
              Destination of the snapshot, specified as an absolute path
              to be used for the new snapshot. This is optional and if
              not specified, the default backend policy will be applied.

       -n NAME, --name NAME
              Name of the new snapshot. This is optional and if not
              specified, the default backend policy will be applied.

   Add command
       ssm add [-h] [-p POOL] device [device ...]

       This command adds a device into the pool. By default, the device
       will not be added if it is already part of a different pool, but
       the user will be asked whether or not to remove the device from
       its pool. When multiple devices are provided, all of them are
       added into the pool. If one of the devices cannot be added into
       the pool for any reason, the add command will fail. If no pool is
       specified, the default pool will be chosen. In the case of a
       non-existing pool, it will be created using the provided devices.

       -h, --help
              show this help message and exit

       -p POOL, --pool POOL
              Pool to add the device into. If not specified, the default
              pool is used.

   Mount command
       ssm mount [-h] [-o OPTIONS] volume directory

       This command will mount the volume at the specified directory. The
       volume can be specified in the same way as with mount(8); in
       addition, one can also specify a volume in the format in which it
       appears in the ssm list table.

       For example, instead of finding out what the device and subvolume
       id of the btrfs subvolume "btrfs_pool:vol001" are in order to
       mount it, one can simply call ssm mount btrfs_pool:vol001
       /mnt/test.

       One can also specify OPTIONS in the same way as with mount(8).

       -h, --help
              show this help message and exit

       -o OPTIONS, --options OPTIONS
              Options are specified with the -o flag followed by a comma
              separated string of options. This option is equivalent to
              the same mount(8) option.

   Migrate command
       ssm migrate [-h] source target

       Move data from one device to another. For btrfs and lvm, their
       specialized utilities are used, so the data are moved in an
       all-or-nothing fashion and no other operation is needed to
       add/remove the devices or rebalance the pool. Devices that do not
       belong to a backend that supports specialized device migration
       tools will be migrated using dd.

       This operation is not intended to be used for duplication, because
       the process can change metadata and access to the data may be
       difficult.

       -h, --help
              show this help message and exit

BACK-ENDS
   Introduction
       Ssm aims to create a unified user interface for various
       technologies like Device Mapper (dm), the Btrfs file system,
       Multiple Devices (md) and possibly more. In order to do so we have
       a core abstraction layer in ssmlib/main.py. This abstraction layer
       should ideally know nothing about the underlying technology, but
       rather comply with device, pool and volume abstractions.

       Various backends can be registered in ssmlib/main.py in order to
       handle a specific storage technology, implementing methods like
       create, snapshot, or remove for volumes and pools. The core will
       then call these methods to manage the storage without needing to
       know what lies underneath it. There are already several backends
       registered in ssm.

   Btrfs backend
       Btrfs is a file system with many advanced features, including
       volume management. This is the reason why btrfs is handled
       differently than other conventional file systems in ssm. It is
       used as a volume management back-end.

       Pools, volumes and snapshots can be created with the btrfs
       backend, and here is what that means from the btrfs point of view:

       pool   A pool is actually a btrfs file system itself, because it
              can be extended by adding more devices, or shrunk by
              removing devices from it. Subvolumes and snapshots can also
              be created. When a new btrfs pool should be created, ssm
              simply creates a btrfs file system, which means that every
              new btrfs pool has one volume of the same name as the pool
              itself which cannot be removed without removing the entire
              pool. The default btrfs pool name is btrfs_pool.

              When creating a new btrfs pool, the name of the pool is
              used as the file system label. If there is an already
              existing btrfs file system in the system without a label, a
              btrfs pool name will be generated for internal use in the
              following format: "btrfs_{device base name}".

              A btrfs pool is created when the create or add command is
              used with specified devices and a non-existing pool name.

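       The internal naming rule described above amounts to something like
       the following sketch. The function name is made up for
       illustration; ssm's own code differs:

```python
import os

def btrfs_pool_name(label, device):
    """Pool name ssm reports for a btrfs file system: the label when
    one is set, otherwise "btrfs_{device base name}" as described
    above (illustrative sketch of the documented rule).
    """
    return label if label else 'btrfs_' + os.path.basename(device)
```

       So an unlabeled file system on /dev/sda would be reported as
       btrfs_sda.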
       volume A volume in the btrfs back-end is actually just a btrfs
              subvolume, with the exception of the first volume created
              on btrfs pool creation, which is the file system itself.
              Subvolumes can only be created on a btrfs file system when
              it is mounted, but the user does not have to worry about
              that, since ssm will automatically mount the file system
              temporarily in order to create a new subvolume.

              The volume name is used as the subvolume path in the btrfs
              file system, and every object in this path must exist in
              order to create a volume. The volume name used for internal
              tracking, and visible to the user, is generated in the
              format "{pool_name}:{volume name}", but volumes can also be
              referenced by their mount point.

              Btrfs volumes are only shown in the list output when the
              file system is mounted, with the exception of the main
              btrfs volume - the file system itself.

              Also note that btrfs volumes and subvolumes cannot be
              resized. This is mainly a limitation of the btrfs tools,
              which currently do not work reliably.

              A new btrfs volume can be created with the create command.

       snapshot
              The btrfs file system supports subvolume snapshotting, so
              you can take a snapshot of any btrfs volume in the system
              with ssm. However, btrfs does not distinguish between
              subvolumes and snapshots, because a snapshot is actually
              just a subvolume with some blocks shared with a different
              subvolume. This means that ssm is not able to directly
              recognize a btrfs snapshot. Instead, ssm will try to
              recognize a special name format of the btrfs volume that
              denotes it is a snapshot. However, if a NAME that does not
              match the special pattern is specified when creating the
              snapshot, the snapshot will not be recognized by ssm and it
              will be listed as a regular btrfs volume.

              A new btrfs snapshot can be created with the snapshot
              command.

       device Btrfs does not require a special device to be created on.

   Lvm backend
       Pools, volumes and snapshots can be created with lvm, and they
       pretty much match the lvm abstractions.

       pool   An lvm pool is just a volume group in lvm language. It
              means that it groups devices, and new logical volumes can
              be created out of the lvm pool. The default lvm pool name
              is lvm_pool.

              An lvm pool is created when the create or add command is
              used with specified devices and a non-existing pool name.

              Alternatively, a thin pool can be created as a result of
              using the --virtual-size option to create a thin volume.

       volume An lvm volume is just a logical volume in lvm language. An
              lvm volume can be created with the create command.

       snapshot
              Lvm volumes can be snapshotted as well. When a snapshot is
              created from an lvm volume, a new snapshot volume is
              created, which can be handled as any other lvm volume.
              Unlike btrfs, lvm is able to distinguish a snapshot from a
              regular volume, so there is no need for a snapshot name to
              match a special pattern.

       device Lvm requires a physical volume to be created on the device,
              but with ssm this is transparent to the user.

   Crypt backend
       The crypt backend in ssm uses cryptsetup and the dm-crypt target
       to manage encrypted volumes. The crypt backend can be used as a
       regular backend for creating encrypted volumes on top of regular
       block devices, or even other volumes (lvm or md volumes, for
       example). Or it can be used to create encrypted lvm volumes right
       away in a single step.

       Only volumes can be created with the crypt backend. This backend
       does not support pooling and does not require special devices.

       pool   The crypt backend does not support pooling, and it is not
              possible to create a crypt pool or add a device into a
              pool.

       volume A volume in the crypt backend is the volume created by
              dm-crypt, which represents the data on the original
              encrypted device in unencrypted form. The crypt backend
              does not support pooling, so only one device can be used to
              create a crypt volume. It also does not support raid or any
              device concatenation.

              Currently two modes, or extensions, are supported: luks and
              plain. Luks is used by default. For more information about
              the extensions, please see the cryptsetup manual page.

       snapshot
              The crypt backend does not support snapshotting; however,
              if the encrypted volume is created on top of an lvm volume,
              the lvm volume itself can be snapshotted. The snapshot can
              then be opened by using cryptsetup. It is possible that
              this might change in the future so that ssm will be able to
              activate the volume directly without the extra step.

       device The crypt backend does not require a special device to be
              created on.

   MD backend
       The MD backend in ssm is currently limited to gathering
       information about MD volumes in the system. You cannot create or
       manage MD volumes or pools, but this functionality will be
       extended in the future.

   Multipath backend
       The Multipath backend in ssm is currently limited to gathering
       information about multipath volumes in the system. You cannot
       create or manage multipath volumes or pools, but this
       functionality will be extended in the future.

EXAMPLES
       List system storage information:

              # ssm list

       List all pools in the system:

              # ssm list pools

       Create a new 100GB volume with the default lvm backend using
       /dev/sda and /dev/sdb with an xfs file system:

              # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

       Create a new volume with the btrfs backend using /dev/sda and
       /dev/sdb, and let the volume be RAID 1:

              # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

       Using the lvm backend, create a RAID 0 volume with devices
       /dev/sda and /dev/sdb with 128kB stripe size and an ext4 file
       system, and mount it on /home:

              # ssm create --raid 0 --stripesize 128k /dev/sda /dev/sdb /home

       Create a new thinly provisioned volume with the lvm backend using
       devices /dev/sda and /dev/sdb, using the --virtual-size option:

              # ssm create --virtual-size 1T /dev/sda /dev/sdb

       Create a new thinly provisioned volume with a defined thin pool
       size and devices /dev/sda and /dev/sdb:

              # ssm create --size 50G --virtual-size 1T /dev/sda /dev/sdb

       Extend btrfs volume btrfs_pool by 500GB and use /dev/sdc and
       /dev/sde to cover the resize:

              # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

              # ssm resize -s-1t /dev/lvm_pool/lvol001

       Remove the /dev/sda device from the pool, remove the btrfs_pool
       pool, and also remove the volume /dev/lvm_pool/lvol001:

              # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

              # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

              # ssm add -p btrfs_pool /dev/sda /dev/sdb

       Mount the btrfs subvolume btrfs_pool:vol001 on /mnt/test:

              # ssm mount btrfs_pool:vol001 /mnt/test

ENVIRONMENT VARIABLES
       SSM_DEFAULT_BACKEND
              Specify which backend will be used by default. This can be
              overridden by specifying the -b or --backend argument.
              Currently only lvm and btrfs are supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if the -p or --pool
              argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if the -p or
              --pool argument is omitted.

       SSM_PREFIX_FILTER
              When this is set, ssm will filter out all devices, volumes
              and pools whose name does not start with this prefix. It is
              used mainly in the ssm test suite to make sure that we do
              not scramble the local system configuration.

LICENCE
       (C)2017 Red Hat, Inc., Jan Tulak <jtulak@redhat.com>
       (C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or
       modify it under the terms of the GNU General Public License as
       published by the Free Software Foundation, either version 2 of the
       License, or (at your option) any later version.

       This program is distributed in the hope that it will be useful,
       but WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program. If not, see
       <http://www.gnu.org/licenses/>.

REQUIREMENTS
       Python 2.6 or higher is required to run this tool. System Storage
       Manager can only be run as root, since most of the commands
       require root privileges.

       There are other requirements listed below, but note that you do
       not necessarily need all dependencies for all backends. However,
       if some of the tools required by a backend are missing, that
       backend will not work.

   Python modules
       · argparse
       · atexit
       · base64
       · datetime
       · fcntl
       · getpass
       · os
       · pwquality
       · re
       · socket
       · stat
       · struct
       · subprocess
       · sys
       · tempfile
       · termios
       · threading
       · tty

   System tools
       · tune2fs
       · fsck.SUPPORTED_FS
       · resize2fs
       · xfs_db
       · xfs_check
       · xfs_growfs
       · mkfs.SUPPORTED_FS
       · which
       · mount
       · blkid
       · wipefs
       · dd

   Lvm backend
       · lvm2 binaries

       Some distributions (e.g. Debian) have the thin provisioning tools
       for LVM as an optional dependency, while others install them
       automatically. Thin provisioning without these tools installed is
       not supported by SSM.

   Btrfs backend
       · btrfs progs

   Crypt backend
       · dmsetup
       · cryptsetup

   Multipath backend
       · multipath

AVAILABILITY
       System storage manager is available from
       http://system-storage-manager.github.io. You can subscribe to
       storagemanager-devel@lists.sourceforge.net to follow the current
       development.

AUTHORS
       Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT
       2015, Red Hat, Inc., Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner
       <lczerner@redhat.com>

1.2                              May 11, 2019                         SSM(8)