1 zpool(1M) System Administration Commands zpool(1M)
2
3
4
5 NAME
6 zpool - configures ZFS storage pools
7
8 SYNOPSIS
9 zpool [-?]
10
11
12 zpool add [-fn] pool vdev ...
13
14
15 zpool attach [-f] pool device new_device
16
17
18 zpool clear pool [device]
19
20
21 zpool create [-fn] [-o property=value] ... [-O file-system-property=value]
22 ... [-m mountpoint] [-R root] pool vdev ...
23
24
25 zpool destroy [-f] pool
26
27
28 zpool detach pool device
29
30
31 zpool export [-f] pool ...
32
33
34 zpool get "all" | property[,...] pool ...
35
36
37 zpool history [-il] [pool] ...
38
39
40 zpool import [-d dir | -c cachefile] [-D]
41
42
43 zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
44 [-D] [-f] [-R root] -a
45
46
47 zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
48 [-D] [-f] [-R root] pool |id [newpool]
49
50
51 zpool iostat [-T u | d ] [-v] [pool] ... [interval[count]]
52
53
54 zpool list [-H] [-o property[,...]] [pool] ...
55
56
57 zpool offline [-t] pool device ...
58
59
60 zpool online [-e] pool device ...
61
62
63 zpool remove pool device ...
64
65
66 zpool replace [-f] pool device [new_device]
67
68
69 zpool scrub [-s] pool ...
70
71
72 zpool set property=value pool
73
74
75 zpool status [-xv] [pool] ...
76
77
78 zpool upgrade
79
80
81 zpool upgrade -v
82
83
84 zpool upgrade [-V version] -a | pool ...
85
86
87 DESCRIPTION
88 The zpool command configures ZFS storage pools. A storage pool is a
89 collection of devices that provides physical storage and data replica‐
90 tion for ZFS datasets.
91
92
93 All datasets within a storage pool share the same space. See zfs(1M)
94 for information on managing datasets.
95
96 Virtual Devices (vdevs)
97 A "virtual device" describes a single device or a collection of devices
98 organized according to certain performance and fault characteristics.
99 The following virtual devices are supported:
100
101 disk A block device, typically located under /dev/dsk. ZFS can use
102 individual slices or partitions, though the recommended mode
103 of operation is to use whole disks. A disk can be specified
104 by a full path, or it can be a shorthand name (the relative
105 portion of the path under "/dev/dsk"). A whole disk can be
106 specified by omitting the slice or partition designation. For
107 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When
108 given a whole disk, ZFS automatically labels the disk, if
109 necessary.
110
111
112 file A regular file. The use of files as a backing store is
113 strongly discouraged. It is designed primarily for experimen‐
114 tal purposes, as the fault tolerance of a file is only as
115 good as the file system of which it is a part. A file must be
116 specified by a full path.
117
118
119 mirror A mirror of two or more devices. Data is replicated in an
120 identical fashion across all components of a mirror. A mirror
121 with N disks of size X can hold X bytes and can withstand
122 (N-1) devices failing before data integrity is compromised.
123
124
125 raidz A variation on RAID-5 that allows for better distribution of
126 raidz1 parity and eliminates the "RAID-5 write hole" (in which data
127 raidz2 and parity become inconsistent after a power loss). Data and
128 raidz3 parity are striped across all disks within a raidz group.
129
130 A raidz group can have single-, double-, or triple-parity,
131 meaning that the raidz group can sustain one, two, or three
132 failures, respectively, without losing any data. The raidz1
133 vdev type specifies a single-parity raidz group; the raidz2
134 vdev type specifies a double-parity raidz group; and the
135 raidz3 vdev type specifies a triple-parity raidz group. The
136 raidz vdev type is an alias for raidz1.
137
138 A raidz group with N disks of size X with P parity disks can
139 hold approximately (N-P)*X bytes and can withstand P
140 device(s) failing before data integrity is compromised. The
141 minimum number of devices in a raidz group is one more than
142 the number of parity disks. The recommended number is between
143 3 and 9 to help increase performance.
144
145
146 spare A special pseudo-vdev which keeps track of available hot
147 spares for a pool. For more information, see the "Hot Spares"
148 section.
149
150
151 log A separate-intent log device. If more than one log device is
152 specified, then writes are load-balanced between devices. Log
153 devices can be mirrored. However, raidz vdev types are not
154 supported for the intent log. For more information, see the
155 "Intent Log" section.
156
157
158 cache A device used to cache storage pool data. A cache device can‐
159 not be configured as a mirror or raidz group. For
160 more information, see the "Cache Devices" section.
161
162
163
164 Virtual devices cannot be nested, so a mirror or raidz virtual device
165 can only contain files or disks. Mirrors of mirrors (or other combina‐
166 tions) are not allowed.
167
168
169 A pool can have any number of virtual devices at the top of the config‐
170 uration (known as "root vdevs"). Data is dynamically distributed across
171 all top-level devices to balance data among devices. As new virtual
172 devices are added, ZFS automatically places data on the newly available
173 devices.
174
175
176 Virtual devices are specified one at a time on the command line, sepa‐
177 rated by whitespace. The keywords "mirror" and "raidz" are used to dis‐
178 tinguish where a group ends and another begins. For example, the fol‐
179 lowing creates two root vdevs, each a mirror of two disks:
180
181 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
182
183
184
185 Device Failure and Recovery
186 ZFS supports a rich set of mechanisms for handling device failure and
187 data corruption. All metadata and data is checksummed, and ZFS automat‐
188 ically repairs bad data from a good copy when corruption is detected.
189
190
191 In order to take advantage of these features, a pool must make use of
192 some form of redundancy, using either mirrored or raidz groups. While
193 ZFS supports running in a non-redundant configuration, where each root
194 vdev is simply a disk or file, this is strongly discouraged. A single
195 case of bit corruption can render some or all of your data unavailable.
196
197
198 A pool's health status is described by one of three states: online,
199 degraded, or faulted. An online pool has all devices operating nor‐
200 mally. A degraded pool is one in which one or more devices have failed,
201 but the data is still available due to a redundant configuration. A
202 faulted pool has corrupted metadata, or one or more faulted devices,
203 and insufficient replicas to continue functioning.
204
205
206 The health of the top-level vdev, such as a mirror or raidz device, is
207 potentially impacted by the state of its associated vdevs, or component
208 devices. A top-level vdev or component device is in one of the follow‐
209 ing states:
210
211 DEGRADED One or more top-level vdevs is in the degraded state
212 because one or more component devices are offline. Suffi‐
213 cient replicas exist to continue functioning.
214
215 One or more component devices is in the degraded or faulted
216 state, but sufficient replicas exist to continue function‐
217 ing. The underlying conditions are as follows:
218
219 o The number of checksum errors exceeds acceptable
220 levels and the device is degraded as an indica‐
221 tion that something may be wrong. ZFS continues
222 to use the device as necessary.
223
224 o The number of I/O errors exceeds acceptable lev‐
225 els. The device could not be marked as faulted
226 because there are insufficient replicas to con‐
227 tinue functioning.
228
229
230 FAULTED One or more top-level vdevs is in the faulted state because
231 one or more component devices are offline. Insufficient
232 replicas exist to continue functioning.
233
234 One or more component devices is in the faulted state, and
235 insufficient replicas exist to continue functioning. The
236 underlying conditions are as follows:
237
238 o The device could be opened, but the contents did
239 not match expected values.
240
241 o The number of I/O errors exceeds acceptable lev‐
242 els and the device is faulted to prevent further
243 use of the device.
244
245
246 OFFLINE The device was explicitly taken offline by the "zpool off‐
247 line" command.
248
249
250 ONLINE The device is online and functioning.
251
252
253 REMOVED The device was physically removed while the system was run‐
254 ning. Device removal detection is hardware-dependent and
255 may not be supported on all platforms.
256
257
258 UNAVAIL The device could not be opened. If a pool is imported when
259 a device was unavailable, then the device will be identi‐
260 fied by a unique identifier instead of its path since the
261 path was never correct in the first place.
262
263
264
265 If a device is removed and later re-attached to the system, ZFS
266 attempts to put the device online automatically. Device attach detec‐
267 tion is hardware-dependent and might not be supported on all platforms.
268
269 Hot Spares
270 ZFS allows devices to be associated with pools as "hot spares". These
271 devices are not actively used in the pool, but when an active device
272 fails, it is automatically replaced by a hot spare. To create a pool
273 with hot spares, specify a "spare" vdev with any number of devices. For
274 example,
275
276 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
277
278
279
280
281 Spares can be shared across multiple pools, and can be added with the
282 "zpool add" command and removed with the "zpool remove" command. Once a
283 spare replacement is initiated, a new "spare" vdev is created within
284 the configuration that will remain there until the original device is
285 replaced. At this point, the hot spare becomes available again if
286 another device fails.
287
288
289 If a pool has a shared spare that is currently being used, the pool
290 cannot be exported, since other pools may use this shared spare, which
291 could lead to data corruption.
292
293
294 An in-progress spare replacement can be cancelled by detaching the hot
295 spare. If the original faulted device is detached, then the hot spare
296 assumes its place in the configuration, and is removed from the spare
297 list of all active pools.
298
299
300 Spares cannot replace log devices.
301
302 Intent Log
303 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
304 transactions. For instance, databases often require their transactions
305 to be on stable storage devices when returning from a system call. NFS
306 and other applications can also use fsync() to ensure data stability.
307 By default, the intent log is allocated from blocks within the main
308 pool. However, it might be possible to get better performance using
309 separate intent log devices such as NVRAM or a dedicated disk. For
310 example:
311
312 # zpool create pool c0d0 c1d0 log c2d0
313
314
315
316
317 Multiple log devices can also be specified, and they can be mirrored.
318 See the EXAMPLES section for an example of mirroring multiple log
319 devices.
320
321
322 Log devices can be added, replaced, attached, detached, and imported
323 and exported as part of the larger pool. Mirrored log devices can be
324 removed by specifying the top-level mirror for the log.
325
326 Cache Devices
327 Devices can be added to a storage pool as "cache devices." These
328 devices provide an additional layer of caching between main memory and
329 disk. For read-heavy workloads, where the working set size is much
330 larger than what can be cached in main memory, using cache devices
331 allows much more of this working set to be served from low latency
332 media. Using cache devices provides the greatest performance improve‐
333 ment for random read workloads of mostly static content.
334
335
336 To create a pool with cache devices, specify a "cache" vdev with any
337 number of devices. For example:
338
339 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
340
341
342
343
344 Cache devices cannot be mirrored or part of a raidz configuration. If a
345 read error is encountered on a cache device, that read I/O is reissued
346 to the original storage pool device, which might be part of a mirrored
347 or raidz configuration.
348
349
350 The content of the cache devices is considered volatile, as is the case
351 with other system caches.
352
353 Properties
354 Each pool has several properties associated with it. Some properties
355 are read-only statistics while others are configurable and change the
356 behavior of the pool. The following are read-only properties:
357
358 available Amount of storage available within the pool. This
359 property can also be referred to by its shortened
360 column name, "avail".
361
362
363 capacity Percentage of pool space used. This property can
364 also be referred to by its shortened column name,
365 "cap".
366
367
368 health The current health of the pool. Health can be
369 "ONLINE", "DEGRADED", "FAULTED", " OFFLINE",
370 "REMOVED", or "UNAVAIL".
371
372
373 guid A unique identifier for the pool.
374
375
376 size Total size of the storage pool.
377
378
379 used Amount of storage space used within the pool.
380
381
382
383 These space usage properties report actual physical space available to
384 the storage pool. The physical space can be different from the total
385 amount of space that any contained datasets can actually use. The
386 amount of space used in a raidz configuration depends on the character‐
387 istics of the data being written. In addition, ZFS reserves some space
388 for internal accounting that the zfs(1M) command takes into account,
389 but the zpool command does not. For non-full pools of a reasonable
390 size, these effects should be invisible. For small pools, or pools that
391 are close to being completely full, these discrepancies may become more
392 noticeable.
393
394
395 The following property can be set at creation time and import time:
396
397 altroot
398
399 Alternate root directory. If set, this directory is prepended to
400 any mount points within the pool. This can be used when examining
401 an unknown pool where the mount points cannot be trusted, or in an
402 alternate boot environment, where the typical paths are not valid.
403 altroot is not a persistent property. It is valid only while the
404 system is up. Setting altroot defaults to using cachefile=none,
405 though this may be overridden using an explicit setting.
406
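For example, an unknown pool could be imported under an alternate root so that its file systems mount beneath /mnt rather than at their recorded mount points (the pool name tank is a placeholder):

       # zpool import -R /mnt tank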
407
408
409 The following properties can be set at creation time and import time,
410 and later changed with the zpool set command:
411
412 autoexpand=on | off
413
414 Controls automatic pool expansion when the underlying LUN is grown.
415 If set to on, the pool will be resized according to the size of the
416 expanded device. If the device is part of a mirror or raidz then
417 all devices within that mirror/raidz group must be expanded before
418 the new space is made available to the pool. The default behavior
419 is off. This property can also be referred to by its shortened col‐
420 umn name, expand.
421
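For example, automatic expansion could be enabled on an existing pool (the pool name tank is illustrative):

       # zpool set autoexpand=on tank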
422
423 autoreplace=on | off
424
425 Controls automatic device replacement. If set to "off", device
426 replacement must be initiated by the administrator by using the
427 "zpool replace" command. If set to "on", any new device, found in
428 the same physical location as a device that previously belonged to
429 the pool, is automatically formatted and replaced. The default
430 behavior is "off". This property can also be referred to by its
431 shortened column name, "replace".
432
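For example, to let ZFS automatically replace failed disks with new devices found in the same physical location (the pool name is illustrative):

       # zpool set autoreplace=on tank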
433
434 bootfs=pool/dataset
435
436 Identifies the default bootable dataset for the root pool. This
437 property is expected to be set mainly by the installation and
438 upgrade programs.
439
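For illustration only, the default bootable dataset of a root pool might be set as follows (the pool and dataset names are placeholders):

       # zpool set bootfs=rpool/ROOT/mybe rpool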
440
441 cachefile=path | none
442
443 Controls the location of where the pool configuration is cached.
444 Discovering all pools on system startup requires a cached copy of
445 the configuration data that is stored on the root file system. All
446 pools in this cache are automatically imported when the system
447 boots. Some environments, such as install and clustering, need to
448 cache this information in a different location so that pools are
449 not automatically imported. Setting this property caches the pool
450 configuration in a different location that can later be imported
451 with "zpool import -c". Setting it to the special value "none" cre‐
452 ates a temporary pool that is never cached, and the special value
453 '' (empty string) uses the default location.
454
455 Multiple pools can share the same cache file. Because the kernel
456 destroys and recreates this file when pools are added and removed,
457 care should be taken when attempting to access this file. When the
458 last pool using a cachefile is exported or destroyed, the file is
459 removed.
460
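For example, a pool's configuration could be cached in a non-default file, and that file could later be passed to "zpool import" (the pool name and path are placeholders):

       # zpool set cachefile=/etc/zfs/alt.cache tank
       # zpool import -c /etc/zfs/alt.cache tank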
461
462 delegation=on | off
463
464 Controls whether a non-privileged user is granted access based on
465 the dataset permissions defined on the dataset. See zfs(1M) for
466 more information on ZFS delegated administration.
467
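For example, delegated administration could be disabled for a pool (the pool name is illustrative):

       # zpool set delegation=off tank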
468
469 failmode=wait | continue | panic
470
471 Controls the system behavior in the event of catastrophic pool
472 failure. This condition is typically a result of a loss of connec‐
473 tivity to the underlying storage device(s) or a failure of all
474 devices within the pool. The behavior of such an event is deter‐
475 mined as follows:
476
477 wait Blocks all I/O access until the device connectivity is
478 recovered and the errors are cleared. This is the
479 default behavior.
480
481
482 continue Returns EIO to any new write I/O requests but allows
483 reads to any of the remaining healthy devices. Any
484 write requests that have yet to be committed to disk
485 would be blocked.
486
487
488 panic Prints out a message to the console and generates a
489 system crash dump.
490
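For example, a pool could be configured to keep servicing reads after a catastrophic failure instead of blocking all I/O (the pool name is illustrative):

       # zpool set failmode=continue tank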
491
492
493 listsnaps=on | off
494
495 Controls whether information about snapshots associated with this
496 pool is output when "zfs list" is run without the -t option. The
497 default value is "off".
498
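For example, to have "zfs list" include snapshots for datasets in this pool by default (the pool name is illustrative):

       # zpool set listsnaps=on tank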
499
500 version=version
501
502 The current on-disk version of the pool. This can be increased, but
503 never decreased. The preferred method of updating pools is with the
504 "zpool upgrade" command, though this property can be used when a
505 specific version is needed for backwards compatibility. This prop‐
506 erty can be any number between 1 and the current version reported
507 by "zpool upgrade -v".
508
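For illustration, a pool could be upgraded to a specific on-disk version rather than the latest (the version number is only an example and must not exceed the version reported by "zpool upgrade -v"):

       # zpool set version=10 tank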
509
510 Subcommands
511 All subcommands that modify state are logged persistently to the pool
512 in their original form.
513
514
515 The zpool command provides subcommands to create and destroy storage
516 pools, add capacity to storage pools, and provide information about the
517 storage pools. The following subcommands are supported:
518
519 zpool -?
520
521 Displays a help message.
522
523
524 zpool add [-fn] pool vdev ...
525
526 Adds the specified virtual devices to the given pool. The vdev
527 specification is described in the "Virtual Devices" section. The
528 behavior of the -f option, and the device checks performed are
529 described in the "zpool create" subcommand.
530
531 -f Forces use of vdevs, even if they appear in use or specify a
532 conflicting replication level. Not all devices can be over‐
533 ridden in this manner.
534
535
536 -n Displays the configuration that would be used without actu‐
537 ally adding the vdevs. The actual pool creation can still
538 fail due to insufficient privileges or device sharing.
539
540 Do not add a disk that is currently configured as a quorum device
541 to a zpool. After a disk is in the pool, that disk can then be con‐
542 figured as a quorum device.
543
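For example, the configuration that would result from adding another mirror could be previewed before committing to it (the pool and device names are placeholders):

       # zpool add -n tank mirror c2t0d0 c2t1d0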
544
545 zpool attach [-f] pool device new_device
546
547 Attaches new_device to an existing zpool device. The existing
548 device cannot be part of a raidz configuration. If device is not
549 currently part of a mirrored configuration, device automatically
550 transforms into a two-way mirror of device and new_device. If
551 device is part of a two-way mirror, attaching new_device creates a
552 three-way mirror, and so on. In either case, new_device begins to
553 resilver immediately.
554
555 -f Forces use of new_device, even if it appears to be in use.
556 Not all devices can be overridden in this manner.
557
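For example, a single-disk pool could be converted to a two-way mirror by attaching a second disk (the pool and device names are placeholders):

       # zpool attach tank c0t0d0 c0t1d0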
558
559
560 zpool clear pool [device] ...
561
562 Clears device errors in a pool. If no arguments are specified, all
563 device errors within the pool are cleared. If one or more devices
564 is specified, only those errors associated with the specified
565 device or devices are cleared.
566
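For example, error counters for a single device, and then for the whole pool, could be cleared as follows (the names are placeholders):

       # zpool clear tank c0t0d0
       # zpool clear tank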
567
568 zpool create [-fn] [-o property=value] ... [-O file-system-prop‐
569 erty=value] ... [-m mountpoint] [-R root] pool vdev ...
570
571 Creates a new storage pool containing the virtual devices specified
572 on the command line. The pool name must begin with a letter, and
573 can only contain alphanumeric characters as well as underscore
574 ("_"), dash ("-"), and period ("."). The pool names "mirror",
575 "raidz", "spare" and "log" are reserved, as are names beginning
576 with the pattern "c[0-9]". The vdev specification is described in
577 the "Virtual Devices" section.
578
579 The command verifies that each device specified is accessible and
580 not currently in use by another subsystem. There are some uses,
581 such as being currently mounted, or specified as the dedicated dump
582 device, that prevent a device from ever being used by ZFS. Other
583 uses, such as having a preexisting UFS file system, can be overrid‐
584 den with the -f option.
585
586 The command also checks that the replication strategy for the pool
587 is consistent. An attempt to combine redundant and non-redundant
588 storage in a single pool, or to mix disks and files, results in an
589 error unless -f is specified. The use of differently sized devices
590 within a single raidz or mirror group is also flagged as an error
591 unless -f is specified.
592
593 Unless the -R option is specified, the default mount point is
594 "/pool". The mount point must not exist or must be empty, or else
595 the root dataset cannot be mounted. This can be overridden with the
596 -m option.
597
598 -f
599
600 Forces use of vdevs, even if they appear in use or specify a
601 conflicting replication level. Not all devices can be overrid‐
602 den in this manner.
603
604
605 -n
606
607 Displays the configuration that would be used without actually
608 creating the pool. The actual pool creation can still fail due
609 to insufficient privileges or device sharing.
610
611
612 -o property=value [-o property=value] ...
613
614 Sets the given pool properties. See the "Properties" section
615 for a list of valid properties that can be set.
616
617
618 -O file-system-property=value
619 [-O file-system-property=value] ...
620
621 Sets the given file system properties in the root file system
622 of the pool. See the "Properties" section of zfs(1M) for a list
623 of valid properties that can be set.
624
625
626 -R root
627
628 Equivalent to "-o cachefile=none,altroot=root"
629
630
631 -m mountpoint
632
633 Sets the mount point for the root dataset. The default mount
634 point is "/pool" or "altroot/pool" if altroot is specified. The
635 mount point must be an absolute path, "legacy", or "none". For
636 more information on dataset mount points, see zfs(1M).
637
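For example, a mirrored pool might be created with a pool property, a root file system property, and an explicit mount point in a single command (all names, properties, and paths are illustrative):

       # zpool create -o autoreplace=on -O compression=on \
            -m /export/tank tank mirror c0t0d0 c0t1d0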
638
639
640 zpool destroy [-f] pool
641
642 Destroys the given pool, freeing up any devices for other use. This
643 command tries to unmount any active datasets before destroying the
644 pool.
645
646 -f Forces any active datasets contained within the pool to be
647 unmounted.
648
649
650
651 zpool detach pool device
652
653 Detaches device from a mirror. The operation is refused if there
654 are no other valid replicas of the data.
655
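For example, one side of a two-way mirror could be detached, leaving the remaining disk in place (the names are placeholders):

       # zpool detach tank c0t1d0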
656
657 zpool export [-f] pool ...
658
659 Exports the given pools from the system. All devices are marked as
660 exported, but are still considered in use by other subsystems. The
661 devices can be moved between systems (even those of different endi‐
662 anness) and imported as long as a sufficient number of devices are
663 present.
664
665 Before exporting the pool, all datasets within the pool are
666 unmounted. A pool cannot be exported if it has a shared spare that
667 is currently being used.
668
669 For pools to be portable, you must give the zpool command whole
670 disks, not just slices, so that ZFS can label the disks with porta‐
671 ble EFI labels. Otherwise, disk drivers on platforms of different
672 endianness will not recognize the disks.
673
674 -f Forcefully unmount all datasets, using the "unmount -f" com‐
675 mand.
676
677 This command will forcefully export the pool even if it has a
678 shared spare that is currently being used. This may lead to
679 potential data corruption.
680
681
682
683 zpool get "all" | property[,...] pool ...
684
685 Retrieves the given list of properties (or all properties if "all"
686 is used) for the specified storage pool(s). These properties are
687 displayed with the following fields:
688
689 name Name of storage pool
690 property Property name
691 value Property value
692 source Property source, either 'default' or 'local'.
693
694
695 See the "Properties" section for more information on the available
696 pool properties.
697
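For example, all properties, or a selected subset, could be retrieved for a pool (the pool name is illustrative):

       # zpool get all tank
       # zpool get size,capacity,health tank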
698
699 zpool history [-il] [pool] ...
700
701 Displays the command history of the specified pools or all pools if
702 no pool is specified.
703
704 -i Displays internally logged ZFS events in addition to user
705 initiated events.
706
707
708 -l Displays log records in long format, which in addition to
709 standard format includes the user name, the hostname, and
710 the zone in which the operation was performed.
711
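For example, the long-format history, including internally logged events, could be displayed for a pool (the pool name is illustrative):

       # zpool history -il tank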
712
713
714 zpool import [-d dir | -c cachefile] [-D]
715
716 Lists pools available to import. If the -d option is not specified,
717 this command searches for devices in "/dev/dsk". The -d option can
718 be specified multiple times, and all directories are searched. If
719 the device appears to be part of an exported pool, this command
720 displays a summary of the pool with the name of the pool, a numeric
721 identifier, as well as the vdev layout and current health of the
722 device for each device or file. Destroyed pools, pools that were
723 previously destroyed with the "zpool destroy" command, are not
724 listed unless the -D option is specified.
725
726 The numeric identifier is unique, and can be used instead of the
727 pool name when multiple exported pools of the same name are avail‐
728 able.
729
730 -c cachefile Reads configuration from the given cachefile that
731 was created with the "cachefile" pool property.
732 This cachefile is used instead of searching for
733 devices.
734
735
736 -d dir Searches for devices or files in dir. The -d option
737 can be specified multiple times.
738
739
740 -D Lists destroyed pools only.
741
742
743
744 zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c
745 cachefile] [-D] [-f] [-R root] -a
746
747 Imports all pools found in the search directories. Identical to the
748 previous command, except that all pools with a sufficient number of
749 devices available are imported. Destroyed pools, pools that were
750 previously destroyed with the "zpool destroy" command, will not be
751 imported unless the -D option is specified.
752
753 -o mntopts Comma-separated list of mount options to use
754 when mounting datasets within the pool. See
755 zfs(1M) for a description of dataset proper‐
756 ties and mount options.
757
758
759 -o property=value Sets the specified property on the imported
760 pool. See the "Properties" section for more
761 information on the available pool properties.
762
763
764 -c cachefile Reads configuration from the given cachefile
765 that was created with the "cachefile" pool
766 property. This cachefile is used instead of
767 searching for devices.
768
769
770 -d dir Searches for devices or files in dir. The -d
771 option can be specified multiple times. This
772 option is incompatible with the -c option.
773
774
775 -D Imports destroyed pools only. The -f option is
776 also required.
777
778
779 -f Forces import, even if the pool appears to be
780 potentially active.
781
782
783 -a Searches for and imports all pools found.
784
785
786 -R root Sets the "cachefile" property to "none" and
787 the "altroot" property to "root".
788
789
790
791 zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c
792 cachefile] [-D] [-f] [-R root] pool | id [newpool]
793
794 Imports a specific pool. A pool can be identified by its name or
795 the numeric identifier. If newpool is specified, the pool is
796 imported using the name newpool. Otherwise, it is imported with the
797 same name as its exported name.
798
799 If a device is removed from a system without running "zpool export"
800 first, the device appears as potentially active. It cannot be
801 determined if this was a failed export, or whether the device is
802 really in use from another host. To import a pool in this state,
803 the -f option is required.
804
805 -o mntopts
806
807 Comma-separated list of mount options to use when mounting
808 datasets within the pool. See zfs(1M) for a description of
809 dataset properties and mount options.
810
811
812 -o property=value
813
814 Sets the specified property on the imported pool. See the
815 "Properties" section for more information on the available pool
816 properties.
817
818
819 -c cachefile
820
821 Reads configuration from the given cachefile that was created
822 with the "cachefile" pool property. This cachefile is used
823 instead of searching for devices.
824
825
826 -d dir
827
828 Searches for devices or files in dir. The -d option can be
829 specified multiple times. This option is incompatible with the
830 -c option.
831
832
833 -D
834
835 Imports destroyed pool. The -f option is also required.
836
837
838 -f
839
840 Forces import, even if the pool appears to be potentially
841 active.
842
843
844 -R root
845
846 Sets the "cachefile" property to "none" and the "altroot" prop‐
847 erty to "root".
848
849
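For example, an exported pool could be imported under a new name (both names are placeholders):

       # zpool import tank newtank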
850
851 zpool iostat [-T u | d] [-v] [pool] ... [interval[count]]
852
853 Displays I/O statistics for the given pools. When given an inter‐
854 val, the statistics are printed every interval seconds until Ctrl-C
855 is pressed. If no pools are specified, statistics for every pool in
856 the system are shown. If count is specified, the command exits after
857 count reports are printed.
858
859 -T u | d Display a time stamp.
860
861 Specify u for a printed representation of the internal
862 representation of time. See time(2). Specify d for
863 standard date format. See date(1).
864
865
866 -v Verbose statistics. Reports usage statistics for indi‐
867 vidual vdevs within the pool, in addition to the pool-
868 wide statistics.
869
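For example, three date-stamped reports at five-second intervals could be requested for a single pool (the pool name and numbers are illustrative):

       # zpool iostat -T d tank 5 3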
870
871
872 zpool list [-H] [-o props[,...]] [pool] ...
873
874 Lists the given pools along with a health status and space usage.
875 When given no arguments, all pools in the system are listed.
876
877 -H Scripted mode. Do not display headers, and separate
878 fields by a single tab instead of arbitrary space.
879
880
881 -o props Comma-separated list of properties to display. See the
882 "Properties" section for a list of valid properties.
883 The default list is "name, size, used, available,
884 capacity, health, altroot"
885
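For example, a script-friendly listing of selected properties could be produced as follows (the pool name is illustrative):

       # zpool list -H -o name,size,health tank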
886
887
888 zpool offline [-t] pool device ...
889
890 Takes the specified physical device offline. While the device is
891 offline, no attempt is made to read or write to the device.
892
893 This command is not applicable to spares or cache devices.
894
895 -t Temporary. Upon reboot, the specified physical device reverts
896 to its previous state.
897
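For example, a device could be taken offline only until the next reboot (the names are placeholders):

       # zpool offline -t tank c0t2d0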
898
899
900 zpool online [-e] pool device...
901
902 Brings the specified physical device online.
903
904 This command is not applicable to spares or cache devices.
905
906 -e Expand the device to use all available space. If the device
907 is part of a mirror or raidz then all devices must be
908 expanded before the new space will become available to the
909 pool.
910
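For example, a previously offlined device could be brought back online and expanded to use any newly available space (the names are placeholders):

       # zpool online -e tank c0t2d0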
911
912
913 zpool remove pool device ...
914
915 Removes the specified device from the pool. This command currently
916 only supports removing hot spares, cache, and log devices. A mir‐
917 rored log device can be removed by specifying the top-level mirror
918 for the log. Non-log devices that are part of a mirrored configura‐
919 tion can be removed using the zpool detach command. Non-redundant
920 and raidz devices cannot be removed from a pool.
921
922
923 zpool replace [-f] pool old_device [new_device]
924
925 Replaces old_device with new_device. This is equivalent to attach‐
926 ing new_device, waiting for it to resilver, and then detaching
927 old_device.
928
929 The size of new_device must be greater than or equal to the minimum
930 size of all the devices in a mirror or raidz configuration.
931
932 new_device is required if the pool is not redundant. If new_device
933 is not specified, it defaults to old_device. This form of replace‐
934 ment is useful after an existing disk has failed and has been phys‐
935 ically replaced. In this case, the new disk may have the same
936 /dev/dsk path as the old device, even though it is actually a dif‐
937 ferent disk. ZFS recognizes this.
938
939 -f Forces use of new_device, even if it appears to be in use.
940 Not all devices can be overridden in this manner.
941
942
943
944 zpool scrub [-s] pool ...
945
946 Begins a scrub. The scrub examines all data in the specified pools
947 to verify that it checksums correctly. For replicated (mirror or
948 raidz) devices, ZFS automatically repairs any damage discovered
949 during the scrub. The "zpool status" command reports the progress
950 of the scrub and summarizes the results of the scrub upon comple‐
951 tion.
952
953 Scrubbing and resilvering are very similar operations. The differ‐
954 ence is that resilvering only examines data that ZFS knows to be
955 out of date (for example, when attaching a new device to a mirror
956 or replacing an existing device), whereas scrubbing examines all
957 data to discover silent errors due to hardware faults or disk fail‐
958 ure.
959
960 Because scrubbing and resilvering are I/O-intensive operations, ZFS
961 only allows one at a time. If a scrub is already in progress, the
962 "zpool scrub" command terminates it and starts a new scrub. If a
963 resilver is in progress, ZFS does not allow a scrub to be started
964 until the resilver completes.
965
966 -s Stop scrubbing.
967
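For example, a scrub could be started on a pool and later stopped before it completes (the pool name is illustrative):

       # zpool scrub tank
       # zpool scrub -s tank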
968
969
970 zpool set property=value pool
971
972 Sets the given property on the specified pool. See the "Properties"
973 section for more information on what properties can be set and
974 acceptable values.
975
976
977 zpool status [-xv] [pool] ...
978
979 Displays the detailed health status for the given pools. If no pool
980 is specified, then the status of each pool in the system is dis‐
981 played. For more information on pool and device health, see the
982 "Device Failure and Recovery" section.
983
984 If a scrub or resilver is in progress, this command reports the
985 percentage done and the estimated time to completion. Both of these
986 are only approximate, because the amount of data in the pool and
987 the other workloads on the system can change.
988
989 -x Only display status for pools that are exhibiting errors or
990 are otherwise unavailable.
991
992
993 -v Displays verbose data error information, printing out a com‐
994 plete list of all data errors since the last complete pool
995 scrub.
996
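For example, verbose error information could be requested only for pools that are exhibiting problems:

       # zpool status -xv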
997
998
999 zpool upgrade
1000
1001 Displays all pools formatted using a different ZFS on-disk version.
1002 Older versions can continue to be used, but some features may not
1003 be available. These pools can be upgraded using "zpool upgrade -a".
1004 Pools that are formatted with a more recent version are also dis‐
1005 played, although these pools will be inaccessible on the system.
1006
1007
1008 zpool upgrade -v
1009
1010 Displays ZFS versions supported by the current software. The cur‐
1011 rent ZFS versions and all previous supported versions are dis‐
1012 played, along with an explanation of the features provided with
1013 each version.
1014
1015
1016 zpool upgrade [-V version] -a | pool ...
1017
1018 Upgrades the given pool to the latest on-disk version. Once this is
1019 done, the pool will no longer be accessible on systems running
1020 older versions of the software.
1021
1022 -a Upgrades all pools.
1023
1024
1025 -V version Upgrade to the specified version. If the -V flag is
1026 not specified, the pool is upgraded to the most
1027 recent version. This option can only be used to
1028 increase the version number, and only up to the most
1029 recent version supported by this software.
1030
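For illustration, a single pool could be upgraded to a specific supported version rather than the latest (the version number is only an example):

       # zpool upgrade -V 10 tank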
1031
1032
1033 EXAMPLES
1034 Example 1 Creating a RAID-Z Storage Pool
1035
1036
1037 The following command creates a pool with a single raidz root vdev that
1038 consists of six disks.
1039
1040
1041 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1042
1043
1044
1045 Example 2 Creating a Mirrored Storage Pool
1046
1047
1048 The following command creates a pool with two mirrors, where each mir‐
1049 ror contains two disks.
1050
1051
1052 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1053
1054
1055
1056 Example 3 Creating a ZFS Storage Pool by Using Slices
1057
1058
1059 The following command creates an unmirrored pool using two disk slices.
1060
1061
1062 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1063
1064
1065
1066 Example 4 Creating a ZFS Storage Pool by Using Files
1067
1068
1069 The following command creates an unmirrored pool using files. While not
1070 recommended, a pool based on files can be useful for experimental pur‐
1071 poses.
1072
1073
1074 # zpool create tank /path/to/file/a /path/to/file/b
1075
1076
1077
1078 Example 5 Adding a Mirror to a ZFS Storage Pool
1079
1080
1081 The following command adds two mirrored disks to the pool "tank",
1082 assuming the pool is already made up of two-way mirrors. The additional
1083 space is immediately available to any datasets within the pool.
1084
1085
1086 # zpool add tank mirror c1t0d0 c1t1d0
1087
1088
1089
1090 Example 6 Listing Available ZFS Storage Pools
1091
1092
1093 The following command lists all available pools on the system. In this
1094 case, the pool zion is faulted due to a missing device.
1095
1096
1097
1098 The results from this command are similar to the following:
1099
1100
1101 # zpool list
1102 NAME SIZE USED AVAIL CAP HEALTH ALTROOT
1103 pool 67.5G 2.92M 67.5G 0% ONLINE -
1104 tank 67.5G 2.92M 67.5G 0% ONLINE -
1105 zion - - - 0% FAULTED -
1106
1107
1108
1109 Example 7 Destroying a ZFS Storage Pool
1110
1111
1112 The following command destroys the pool "tank" and any datasets con‐
1113 tained within.
1114
1115
1116 # zpool destroy -f tank
1117
1118
1119
1120 Example 8 Exporting a ZFS Storage Pool
1121
1122
1123 The following command exports the devices in pool tank so that they can
1124 be relocated or later imported.
1125
1126
1127 # zpool export tank
1128
1129
1130
1131 Example 9 Importing a ZFS Storage Pool
1132
1133
1134 The following command displays available pools, and then imports the
1135 pool "tank" for use on the system.
1136
1137
1138
1139 The results from this command are similar to the following:
1140
1141
1142 # zpool import
1143 pool: tank
1144 id: 15451357997522795478
1145 state: ONLINE
1146 action: The pool can be imported using its name or numeric identifier.
1147 config:
1148
1149 tank ONLINE
1150 mirror ONLINE
1151 c1t2d0 ONLINE
1152 c1t3d0 ONLINE
1153
1154 # zpool import tank
1155
1156
1157
1158 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1159
1160
1161 The following command upgrades all ZFS Storage pools to the current
1162 version of the software.
1163
1164
1165 # zpool upgrade -a
1166 This system is currently running ZFS version 2.
1167
1168
1169
1170 Example 11 Managing Hot Spares
1171
1172
1173 The following command creates a new pool with an available hot spare:
1174
1175
1176 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1177
1178
1179
1180
1181 If one of the disks were to fail, the pool would be reduced to the
1182 degraded state. The failed device can be replaced using the following
1183 command:
1184
1185
1186 # zpool replace tank c0t0d0 c0t3d0
1187
1188
1189
1190
1191 Once the data has been resilvered, the spare is automatically removed
1192 and is made available should another device fail. The hot spare can be
1193 permanently removed from the pool using the following command:
1194
1195
1196 # zpool remove tank c0t2d0
1197
1198
1199
1200 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1201
1202
1203 The following command creates a ZFS storage pool consisting of two
1204 two-way mirrors and mirrored log devices:
1205
1206
1207 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1208 c4d0 c5d0
1209
1210
1211
1212 Example 13 Adding Cache Devices to a ZFS Pool
1213
1214
1215 The following command adds two disks for use as cache devices to a ZFS
1216 storage pool:
1217
1218
1219 # zpool add pool cache c2d0 c3d0
1220
1221
1222
1223
1224 Once added, the cache devices gradually fill with content from main
1225 memory. Depending on the size of your cache devices, it could take over
1226 an hour for them to fill. Capacity and reads can be monitored using the
1227 iostat subcommand as follows:
1228
1229
1230 # zpool iostat -v pool 5
1231
1232
1233
1234 Example 14 Removing a Mirrored Log Device
1235
1236
1237 The following command removes the mirrored log device mirror-2.
1238
1239
1240
1241 Given this configuration:
1242
1243
1244 pool: tank
1245 state: ONLINE
1246 scrub: none requested
1247 config:
1248
1249 NAME STATE READ WRITE CKSUM
1250 tank ONLINE 0 0 0
1251 mirror-0 ONLINE 0 0 0
1252 c6t0d0 ONLINE 0 0 0
1253 c6t1d0 ONLINE 0 0 0
1254 mirror-1 ONLINE 0 0 0
1255 c6t2d0 ONLINE 0 0 0
1256 c6t3d0 ONLINE 0 0 0
1257 logs
1258 mirror-2 ONLINE 0 0 0
1259 c4t0d0 ONLINE 0 0 0
1260 c4t1d0 ONLINE 0 0 0
1261
1262
1263
1264
1265 The command to remove the mirrored log mirror-2 is:
1266
1267
1268 # zpool remove tank mirror-2
1269
1270
1271
1272 EXIT STATUS
1273 The following exit values are returned:
1274
1275 0 Successful completion.
1276
1277
1278 1 An error occurred.
1279
1280
1281 2 Invalid command line options were specified.
1282
1283
1284 ATTRIBUTES
1285 See attributes(5) for descriptions of the following attributes:
1286
1287
1288
1289
1290 ┌─────────────────────────────┬─────────────────────────────┐
1291 │ ATTRIBUTE TYPE │ ATTRIBUTE VALUE │
1292 ├─────────────────────────────┼─────────────────────────────┤
1293 │Availability │SUNWzfsu │
1294 ├─────────────────────────────┼─────────────────────────────┤
1295 │Interface Stability │Evolving │
1296 └─────────────────────────────┴─────────────────────────────┘
1297
1298 SEE ALSO
1299 zfs(1M), attributes(5)
1300
1301
1302
1303 SunOS 5.11 21 Sep 2009 zpool(1M)