SSM(8)                      System Storage Manager                      SSM(8)

NAME
       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS
       ssm [-h] [--version] [-v] [-vv] [-vvv] [-f] [-b BACKEND] [-n]
           {check,resize,create,list,info,add,remove,snapshot,mount,migrate} ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
           [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
           [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device ...] [mount]

       ssm list [-h]
           [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm info [-h] [item]

       ssm remove [-h] [-a] [items ...]

       ssm resize [-h] [-s SIZE] volume [device ...]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

       ssm mount [-h] [-o OPTIONS] volume directory

       ssm migrate [-h] source target

DESCRIPTION
       System Storage Manager provides an easy-to-use command line interface
       to manage your storage using various technologies like lvm, btrfs,
       encrypted volumes and more.

       In more sophisticated enterprise storage environments, management
       with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple
       Devices (md) is becoming increasingly difficult. With file systems
       added to the mix, the number of tools needed to configure and manage
       storage has grown so large that it is simply not user friendly. With
       so many options for a system administrator to consider, the
       opportunity for errors and problems is large.

       The btrfs administration tools have shown us that storage management
       can be simplified, and we are working to bring that ease of use to
       Linux file systems in general.

SSM COMMANDS
   Introduction
       System Storage Manager has several commands that you can specify on
       the command line as a first argument to ssm. They all have a specific
       use and their own arguments, but global ssm arguments are propagated
       to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL]
           [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]]
           [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device ...] [mount]

       This command creates a new volume with defined parameters. If a
       device is provided, it will be used to create the volume, and hence
       it will be added into the pool prior to volume creation (see the Add
       command section). More than one device can be used to create a
       volume.

       If the device is already being used in a different pool, then ssm
       will ask you whether you want to remove it from the original pool.
       If you decline, or the removal fails, then the volume creation fails
       if the SIZE was not provided. On the other hand, if the SIZE is
       provided and some devices cannot be added to the pool, the volume
       creation might still succeed if there is enough space in the pool.

       In addition to specifying the size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       that the volume size should be 70% of the total pool size.
       Additionally, a percentage of the used or free pool space can be
       specified using the keywords USED or FREE respectively.

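       For illustration, a minimal sketch of the percentage keywords,
       assuming the default lvm pool lvm_pool already exists:

```shell
# 70% of the total size of lvm_pool
ssm create --size 70% -p lvm_pool

# 80% of the free space remaining in the pool
ssm create --size 80%FREE -p lvm_pool

# 50% of the space already used in the pool
ssm create --size 50%USED -p lvm_pool
```
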
       The POOL name can be specified as well. If the pool exists, a new
       volume will be created from that pool (optionally adding the device
       into the pool). However, if the POOL does not exist, then ssm will
       attempt to create a new pool with the provided device, and then
       create a new volume from this pool. If the --backend argument is
       omitted, the default ssm backend will be used. The default backend
       is lvm.

       ssm also supports creating a RAID configuration; however, some
       back-ends might not support all RAID levels, or may not support RAID
       at all. In this case, volume creation will fail.

       If a mount point is provided, ssm will attempt to mount the volume
       after it is created. However, it will fail if a mountable file
       system is not present on the volume.

       If the backend allows it (currently only supported with the lvm
       backend), ssm can be used to create thinly provisioned volumes by
       specifying the --virtual-size option. This will automatically create
       a thin pool of the size provided with the --size option, and a thin
       volume of the size provided with the --virtual-size option and the
       name provided with the --name option. The virtual size can be much
       bigger than the available space in the pool.

   Info command
       ssm info [-h] [item]

       EXPERIMENTAL This feature is currently experimental. The output
       format can change and fields can be added or removed.

       Show detailed information about all detected devices, pools, volumes
       and snapshots found on the system. The info command can be used
       either alone to show all available items, or you can specify a
       device, pool, or any other identifier to see information about the
       specific item.

   List command
       ssm list [-h]
           [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       Lists information about all detected devices, pools, volumes and
       snapshots found on the system. The list command can be used either
       alone to list all of the information, or you can request specific
       sections only.

       The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

       {devices | dev}
              List information about all devices found on the system. Some
              devices are intentionally hidden, for example cdrom or DM/MD
              devices, since those are actually listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing filesystems
              found in the system.

       {snapshots | snap}
              List information about all snapshots found in the system.
              Note that some back-ends do not support snapshotting and some
              cannot distinguish a snapshot from regular volumes. In this
              case, ssm will try to recognize the volume name in order to
              identify a snapshot, but if the ssm regular expression does
              not match the snapshot pattern, the problematic snapshot will
              not be recognized.

   Remove command
       ssm remove [-h] [-a] [items ...]

       This command removes an item from the system. Multiple items can be
       specified. If an item cannot be removed for some reason, it will be
       skipped.

       An item can be any of the following:

       device Remove a device from the pool. Note that this cannot be done
              in some cases where the device is being used by the pool. You
              can use the -f argument to force removal. If the device does
              not belong to any pool, it will be skipped.

       pool   Remove a pool from the system. This will also remove all
              volumes created from that pool.

       volume Remove a volume from the system. Note that this will fail if
              the volume is mounted, and it cannot be forced with -f.

   Resize command
       ssm resize [-h] [-s SIZE] volume [device ...]

       Change the size of the volume and file system. If there is no file
       system, only the volume itself will be resized. You can specify a
       device to add into the volume pool prior to the resize. Note that
       the device will only be added into the pool if the volume size is
       going to grow.

       If the device is already used in a different pool, then ssm will ask
       you whether or not you want to remove it from the original pool.

       In some cases, the file system has to be mounted in order to resize.
       This will be handled by ssm automatically by mounting the volume
       temporarily.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to resize
       the volume to 70% of its original size. Additionally, a percentage
       of the used or free pool space can be specified using the keywords
       USED or FREE respectively.

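       For example (volume name illustrative, assuming the default lvm
       pool, and with the percentage semantics described above):

```shell
# Shrink the volume to 70% of its original size
ssm resize -s 70% /dev/lvm_pool/lvol001
```
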
       Note that resizing a btrfs subvolume is not supported; only the
       whole file system can be resized.

   Check command
       ssm check [-h] device [device ...]

       Check the file system consistency on the volume. You can specify
       multiple volumes to check. If there is no file system on the volume,
       the volume will be skipped.

       In some cases the file system has to be mounted in order to check
       it. This will be handled by ssm automatically by mounting the volume
       temporarily.

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       Take a snapshot of an existing volume. This operation will fail if
       the back-end to which the volume belongs does not support
       snapshotting. Note that you cannot specify both NAME and DEST, since
       those options are mutually exclusive.

       In addition to specifying the new size of the volume directly, a
       percentage can be specified as well. Specify --size 70% to indicate
       that the new snapshot size should be 70% of the origin volume size.
       Additionally, a percentage of the used or free pool space can be
       specified using the keywords USED or FREE respectively.

       In some cases the file system has to be mounted in order to take a
       snapshot of the volume. This will be handled by ssm automatically by
       mounting the volume temporarily.

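       A short sketch of the size and name options, assuming an lvm volume
       (names are illustrative):

```shell
# Reserve 20% of the origin's size for the snapshot and name it
# explicitly; -d DEST could be used instead of -n NAME
ssm snapshot -s 20% -n lvol001_snap /dev/lvm_pool/lvol001
```
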
   Add command
       ssm add [-h] [-p POOL] device [device ...]

       This command adds a device into the pool. By default, the device
       will not be added if it is already a part of a different pool, but
       the user will be asked whether or not to remove the device from its
       pool. When multiple devices are provided, all of them are added into
       the pool. If one of the devices cannot be added into the pool for
       any reason, the add command will fail. If no pool is specified, the
       default pool will be chosen. In the case of a non-existing pool, it
       will be created using the provided devices.

   Mount command
       ssm mount [-h] [-o OPTIONS] volume directory

       This command will mount the volume at the specified directory. The
       volume can be specified in the same way as with mount(8); in
       addition, one can also specify a volume in the format in which it
       appears in the ssm list table.

       For example, instead of finding out what the device and subvolume id
       of the btrfs subvolume "btrfs_pool:vol001" are in order to mount it,
       one can simply call ssm mount btrfs_pool:vol001 /mnt/test.

       One can also specify OPTIONS in the same way as with mount(8).

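       For instance, mount options can be combined with the pool:volume
       notation shown above:

```shell
# Mount the subvolume read-only with noatime
ssm mount -o ro,noatime btrfs_pool:vol001 /mnt/test
```
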
   Migrate command
       ssm migrate [-h] source target

       Move data from one device to another. For btrfs and lvm, their
       specialized utilities are used, so the data are moved in an
       all-or-nothing fashion and no other operation is needed to
       add/remove the devices or rebalance the pool. Devices that do not
       belong to a backend that supports specialized device migration tools
       will be migrated using dd.

       This operation is not intended to be used for duplication, because
       the process can change metadata and access to the data may be
       difficult.

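       A minimal sketch (device names illustrative):

```shell
# Move all data from /dev/sda to /dev/sdb; the lvm or btrfs device
# tools are used where available, dd otherwise
ssm migrate /dev/sda /dev/sdb
```
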
BACK-ENDS
   Introduction
       Ssm aims to create a unified user interface for various technologies
       like Device Mapper (dm), the Btrfs file system, Multiple Devices
       (md) and possibly more. In order to do so we have a core abstraction
       layer in ssmlib/main.py. This abstraction layer should ideally know
       nothing about the underlying technology, but rather comply with the
       device, pool and volume abstractions.

       Various backends can be registered in ssmlib/main.py in order to
       handle a specific storage technology, implementing methods like
       create, snapshot, or remove for volumes and pools. The core will
       then call these methods to manage the storage without needing to
       know what lies underneath it. There are already several backends
       registered in ssm.

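       The registration idea can be sketched in Python as follows; the
       class and method names here are illustrative only, not ssm's actual
       internal API:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One backend per storage technology (lvm, btrfs, ...)."""

    @abstractmethod
    def create_volume(self, pool: str, size: str) -> str:
        """Create a volume in the given pool and return its name."""

    @abstractmethod
    def remove_volume(self, volume: str) -> None:
        """Remove the given volume."""

class DummyBackend(StorageBackend):
    """A stand-in backend that keeps volumes in memory."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, pool, size):
        # Generate a name in the "{pool}:vol{NNN}" style
        name = "{0}:vol{1:03d}".format(pool, len(self.volumes))
        self.volumes[name] = size
        return name

    def remove_volume(self, volume):
        del self.volumes[volume]

# The core dispatches to whichever backend is registered, without
# knowing what lies underneath:
backends = {"dummy": DummyBackend()}
vol = backends["dummy"].create_volume("pool", "100G")
```
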
   Btrfs backend
       Btrfs is a file system with many advanced features, including volume
       management. This is the reason why btrfs is handled differently from
       other conventional file systems in ssm: it is used as a volume
       management back-end.

       Pools, volumes and snapshots can be created with the btrfs backend,
       and here is what that means from the btrfs point of view:

       pool   A pool is actually a btrfs file system itself, because it can
              be extended by adding more devices, or shrunk by removing
              devices from it. Subvolumes and snapshots can also be
              created. When a new btrfs pool should be created, ssm simply
              creates a btrfs file system, which means that every new btrfs
              pool has one volume of the same name as the pool itself,
              which cannot be removed without removing the entire pool. The
              default btrfs pool name is btrfs_pool.

              When creating a new btrfs pool, the name of the pool is used
              as the file system label. If there is an already existing
              btrfs file system in the system without a label, a btrfs pool
              name will be generated for internal use in the following
              format: "btrfs_{device base name}".

              A btrfs pool is created when the create or add command is
              used with specified devices and a non-existing pool name.

       volume A volume in the btrfs back-end is actually just a btrfs
              subvolume, with the exception of the first volume created on
              btrfs pool creation, which is the file system itself.
              Subvolumes can only be created on a btrfs file system when it
              is mounted, but the user does not have to worry about that
              since ssm will automatically mount the file system
              temporarily in order to create a new subvolume.

              The volume name is used as the subvolume path in the btrfs
              file system, and every object in this path must exist in
              order to create a volume. The volume name for internal
              tracking, and as visible to the user, is generated in the
              format "{pool_name}:{volume name}", but volumes can also be
              referenced by their mount point.

              Btrfs volumes are only shown in the list output when the file
              system is mounted, with the exception of the main btrfs
              volume, the file system itself.

              Also note that btrfs volumes and subvolumes cannot be
              resized. This is mainly a limitation of the btrfs tools,
              which currently do not work reliably.

              A new btrfs volume can be created with the create command.

       snapshot
              The btrfs file system supports subvolume snapshotting, so you
              can take a snapshot of any btrfs volume in the system with
              ssm. However, btrfs does not distinguish between subvolumes
              and snapshots, because a snapshot is actually just a
              subvolume with some blocks shared with a different subvolume.
              This means that ssm is not able to directly recognize a btrfs
              snapshot. Instead, ssm will try to recognize a special name
              format of the btrfs volume that denotes it is a snapshot.
              However, if a NAME that does not match the special pattern is
              specified when creating the snapshot, the snapshot will not
              be recognized by ssm and it will be listed as a regular btrfs
              volume.

              A new btrfs snapshot can be created with the snapshot
              command.

       device Btrfs does not require a special device to be created on.

   Lvm backend
       Pools, volumes and snapshots can be created with lvm, and they
       pretty much match the lvm abstractions.

       pool   An lvm pool is just a volume group in lvm language. It means
              that it groups devices, and new logical volumes can be
              created out of the lvm pool. The default lvm pool name is
              lvm_pool.

              An lvm pool is created when the create or add command is used
              with specified devices and a non-existing pool name.

              Alternatively, a thin pool can be created as a result of
              using the --virtual-size option to create a thin volume.

       volume An lvm volume is just a logical volume in lvm language. An
              lvm volume can be created with the create command.

       snapshot
              Lvm volumes can be snapshotted as well. When a snapshot is
              created from an lvm volume, a new snapshot volume is created,
              which can be handled as any other lvm volume. Unlike btrfs,
              lvm is able to distinguish a snapshot from a regular volume,
              so there is no need for a snapshot name to match a special
              pattern.

       device Lvm requires a physical volume to be created on the device,
              but with ssm this is transparent to the user.

   Crypt backend
       The crypt backend in ssm uses cryptsetup and the dm-crypt target to
       manage encrypted volumes. The crypt backend can be used as a regular
       backend for creating encrypted volumes on top of regular block
       devices, or even other volumes (lvm or md volumes for example), or
       it can be used to create encrypted lvm volumes right away in a
       single step.

       Only volumes can be created with the crypt backend. This backend
       does not support pooling and does not require special devices.

       pool   The crypt backend does not support pooling, and it is not
              possible to create a crypt pool or add a device into a pool.

       volume A volume in the crypt backend is the volume created by
              dm-crypt, which represents the data on the original encrypted
              device in unencrypted form. The crypt backend does not
              support pooling, so only one device can be used to create a
              crypt volume. It also does not support raid or any device
              concatenation.

              Currently two modes, or extensions, are supported: luks and
              plain. Luks is used by default. For more information about
              the extensions, please see the cryptsetup manual page.

       snapshot
              The crypt backend does not support snapshotting; however, if
              the encrypted volume is created on top of an lvm volume, the
              lvm volume itself can be snapshotted. The snapshot can then
              be opened using cryptsetup. It is possible that this might
              change in the future so that ssm will be able to activate the
              volume directly without the extra step.

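       The workaround described above might look like this (volume and
       mapping names are illustrative):

```shell
# Snapshot the lvm volume underneath the encrypted volume...
ssm snapshot -n lvol001_snap /dev/lvm_pool/lvol001

# ...then open the still-encrypted snapshot manually
cryptsetup open /dev/lvm_pool/lvol001_snap snap_unlocked
```
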
       device The crypt backend does not require a special device to be
              created on.

   MD backend
       The MD backend in ssm is currently limited to only gathering
       information about MD volumes in the system. You cannot create or
       manage MD volumes or pools, but this functionality will be extended
       in the future.

   Multipath backend
       The multipath backend in ssm is currently limited to only gathering
       information about multipath volumes in the system. You cannot create
       or manage multipath volumes or pools, but this functionality will be
       extended in the future.

EXAMPLES
       List system storage information:

          # ssm list

       List all pools in the system:

          # ssm list pools

       Create a new 100GB volume with the default lvm backend using
       /dev/sda and /dev/sdb with an xfs file system:

          # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

       Create a new volume with the btrfs backend using /dev/sda and
       /dev/sdb, and make the volume RAID 1:

          # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

       Using the lvm backend, create a RAID 0 volume with devices /dev/sda
       and /dev/sdb with 128kB stripe size and an ext4 file system, and
       mount it on /home:

          # ssm create --raid 0 --stripesize 128k /dev/sda /dev/sdb /home

       Create a new thinly provisioned volume with the lvm backend using
       devices /dev/sda and /dev/sdb and the --virtual-size option:

          # ssm create --virtual-size 1T /dev/sda /dev/sdb

       Create a new thinly provisioned volume with a defined thin pool size
       and devices /dev/sda and /dev/sdb:

          # ssm create --size 50G --virtual-size 1T /dev/sda /dev/sdb

       Extend the btrfs volume btrfs_pool by 500GB, using /dev/sdc and
       /dev/sde to cover the resize:

          # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

          # ssm resize -s-1t /dev/lvm_pool/lvol001

       Remove the /dev/sda device from its pool, remove the btrfs_pool
       pool, and also remove the volume /dev/lvm_pool/lvol001:

          # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

          # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

          # ssm add -p btrfs_pool /dev/sda /dev/sdb

       Mount the btrfs subvolume btrfs_pool:vol001 on /mnt/test:

          # ssm mount btrfs_pool:vol001 /mnt/test

ENVIRONMENT VARIABLES
       SSM_DEFAULT_BACKEND
              Specify which backend will be used by default. This can be
              overridden by specifying the -b or --backend argument.
              Currently only lvm and btrfs are supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if the -p or --pool
              argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if the -p or --pool
              argument is omitted.

       SSM_PREFIX_FILTER
              When this is set, ssm will filter out all devices, volumes
              and pools whose name does not start with this prefix. It is
              used mainly in the ssm test suite to make sure that we do not
              scramble the local system configuration.

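       As a short example, the defaults can be switched for a single shell
       session (the pool name is illustrative):

```shell
# Make btrfs the default backend and choose a default pool name;
# subsequent ssm invocations in this shell pick these up
export SSM_DEFAULT_BACKEND=btrfs
export SSM_BTRFS_DEFAULT_POOL=my_btrfs_pool
```
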
LICENCE
       (C)2017 Red Hat, Inc., Jan Tulak <jtulak@redhat.com> (C)2011 Red
       Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation, either version 2 of the License, or
       (at your option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program. If not, see <http://www.gnu.org/licenses/>.

REQUIREMENTS
       Python 2.6 or higher is required to run this tool. System Storage
       Manager can only be run as root, since most of the commands require
       root privileges.

       There are other requirements listed below, but note that you do not
       necessarily need all dependencies for all backends. However, if some
       of the tools required by a backend are missing, that backend will
       not work.

   Python modules
       • argparse

       • atexit

       • base64

       • datetime

       • fcntl

       • getpass

       • os

       • pwquality

       • re

       • socket

       • stat

       • struct

       • subprocess

       • sys

       • tempfile

       • termios

       • threading

       • tty

   System tools
       • tune2fs

       • fsck.SUPPORTED_FS

       • resize2fs

       • xfs_db

       • xfs_check

       • xfs_growfs

       • mkfs.SUPPORTED_FS

       • which

       • mount

       • blkid

       • wipefs

       • dd

   Lvm backend
       • lvm2 binaries

       Some distributions (e.g. Debian) have the thin provisioning tools
       for LVM as an optional dependency, while others install them
       automatically. Thin provisioning without these tools installed is
       not supported by SSM.

   Btrfs backend
       • btrfs progs

   Crypt backend
       • dmsetup

       • cryptsetup

   Multipath backend
       • multipath

AVAILABILITY
       System storage manager is available from
       http://system-storage-manager.github.io. You can subscribe to
       storagemanager-devel@lists.sourceforge.net to follow the current
       development.

AUTHORS
       Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT
       2023, Red Hat, Inc., Jan Ťulák <jtulak@redhat.com>, Lukáš Czerner
       <lczerner@redhat.com>

1.3                              Jan 21, 2023                           SSM(8)