zpool(1M)               System Administration Commands               zpool(1M)
2
3
4

NAME

6       zpool - configures ZFS storage pools
7

SYNOPSIS

9       zpool [-?]
10
11
12       zpool add [-fn] pool vdev ...
13
14
15       zpool attach [-f] pool device new_device
16
17
18       zpool clear [-F [-n]] pool [device]
19
20
21       zpool create [-fn] [-o property=value] ... [-O file-system-property=value]
22            ... [-m mountpoint] [-R root] pool vdev ...
23
24
25       zpool destroy [-f] pool
26
27
28       zpool detach pool device
29
30
31       zpool export [-f] pool ...
32
33
34       zpool get "all" | property[,...] pool ...
35
36
37       zpool history [-il] [pool] ...
38
39
       zpool import [-d dir | -c cachefile] [-D]
41
42
43       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
44            [-D] [-f] [-R root] [-F [-n]] -a
45
46
47       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]
49
50
       zpool iostat [-T u | d] [-v] [pool] ... [interval [count]]
52
53
54       zpool list [-H] [-o property[,...]] [pool] ...
55
56
57       zpool offline [-t] pool device ...
58
59
       zpool online [-e] pool device ...
61
62
63       zpool remove pool device ...
64
65
66       zpool replace [-f] pool device [new_device]
67
68
69       zpool scrub [-s] pool ...
70
71
72       zpool set property=value pool
73
74
75       zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool
76            newpool [device ...]
77
78
79       zpool status [-xv] [pool] ...
80
81
82       zpool upgrade
83
84
85       zpool upgrade -v
86
87
88       zpool upgrade [-V version] -a | pool ...
89
90

DESCRIPTION

92       The  zpool  command  configures  ZFS storage pools. A storage pool is a
93       collection of devices that provides physical storage and data  replica‐
94       tion for ZFS datasets.
95
96
97       All  datasets  within  a storage pool share the same space. See zfs(1M)
98       for information on managing datasets.
99
100   Virtual Devices (vdevs)
101       A "virtual device" describes a single device or a collection of devices
102       organized  according  to certain performance and fault characteristics.
103       The following virtual devices are supported:
104
105       disk
106
107           A block device, typically located under /dev/dsk. ZFS can use indi‐
108           vidual  slices or partitions, though the recommended mode of opera‐
109           tion is to use whole disks. A disk can be specified by a full path,
110           or  it  can  be  a shorthand name (the relative portion of the path
111           under "/dev/dsk"). A whole disk can be specified  by  omitting  the
112           slice or partition designation. For example, "c0t0d0" is equivalent
113           to "/dev/dsk/c0t0d0s2". When given a whole disk, ZFS  automatically
114           labels the disk, if necessary.
115
116
117       file
118
119           A  regular  file.  The  use of files as a backing store is strongly
120           discouraged. It is designed primarily for experimental purposes, as
121           the fault tolerance of a file is only as good as the file system of
122           which it is a part. A file must be specified by a full path.
123
124
125       mirror
126
127           A mirror of two or more devices. Data is replicated in an identical
128           fashion across all components of a mirror. A mirror with N disks of
129           size X can hold X bytes and can  withstand  (N-1)  devices  failing
130           before data integrity is compromised.
131
132
133       raidz
134       raidz1
135       raidz2
136       raidz3
137
138           A variation on RAID-5 that allows for better distribution of parity
139           and eliminates the "RAID-5 write hole" (in which  data  and  parity
           become inconsistent after a power loss). Data and parity are striped
141           across all disks within a raidz group.
142
           A raidz group can have single-, double-, or triple-parity, meaning
144           that  the  raidz  group  can  sustain  one, two, or three failures,
145           respectively, without losing any data. The raidz1 vdev type  speci‐
146           fies  a single-parity raidz group; the raidz2 vdev type specifies a
147           double-parity raidz group; and the raidz3  vdev  type  specifies  a
148           triple-parity  raidz  group.  The  raidz  vdev type is an alias for
149           raidz1.
150
151           A raidz group with N disks of size X with P parity disks  can  hold
152           approximately  (N-P)*X  bytes and can withstand P device(s) failing
153           before data integrity is compromised. The minimum number of devices
154           in  a  raidz group is one more than the number of parity disks. The
155           recommended number is between 3 and 9 to help increase performance.
156
157
158       spare
159
160           A special pseudo-vdev which keeps track of available hot spares for
161           a pool. For more information, see the "Hot Spares" section.
162
163
164       log
165
           A separate intent log device. If more than one log device is speci‐
167           fied, then writes are load-balanced between  devices.  Log  devices
168           can  be  mirrored.  However, raidz vdev types are not supported for
169           the intent log. For more information, see the "Intent Log" section.
170
171
172       cache
173
174           A device used to cache storage pool data. A cache device cannot  be
175           configured  as  a  mirror or raidz group. For more information, see
176           the "Cache Devices" section.
177
178
179
180       Virtual devices cannot be nested, so a mirror or raidz  virtual  device
181       can  only contain files or disks. Mirrors of mirrors (or other combina‐
182       tions) are not allowed.
183
184
185       A pool can have any number of virtual devices at the top of the config‐
186       uration (known as "root vdevs"). Data is dynamically distributed across
187       all top-level devices to balance data among  devices.  As  new  virtual
188       devices are added, ZFS automatically places data on the newly available
189       devices.
190
191
192       Virtual devices are specified one at a time on the command line,  sepa‐
193       rated by whitespace. The keywords "mirror" and "raidz" are used to dis‐
194       tinguish where a group ends and another begins. For example,  the  fol‐
195       lowing creates two root vdevs, each a mirror of two disks:
196
197         # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
198
199
200
201   Device Failure and Recovery
202       ZFS  supports  a rich set of mechanisms for handling device failure and
203       data corruption. All metadata and data is checksummed, and ZFS automat‐
204       ically repairs bad data from a good copy when corruption is detected.
205
206
207       In  order  to take advantage of these features, a pool must make use of
208       some form of redundancy, using either mirrored or raidz  groups.  While
209       ZFS  supports running in a non-redundant configuration, where each root
210       vdev is simply a disk or file, this is strongly discouraged.  A  single
211       case of bit corruption can render some or all of your data unavailable.
212
213
214       A  pool's  health  status  is described by one of three states: online,
215       degraded, or faulted. An online pool has  all  devices  operating  nor‐
216       mally. A degraded pool is one in which one or more devices have failed,
217       but the data is still available due to  a  redundant  configuration.  A
218       faulted  pool  has  corrupted metadata, or one or more faulted devices,
219       and insufficient replicas to continue functioning.
220
221
       The health of a top-level vdev, such as a mirror or raidz device, is
223       potentially impacted by the state of its associated vdevs, or component
224       devices. A top-level vdev or component device is in one of the  follow‐
225       ing states:
226
227       DEGRADED
228
229           One or more top-level vdevs is in the degraded state because one or
230           more component devices are offline. Sufficient  replicas  exist  to
231           continue functioning.
232
233           One  or more component devices is in the degraded or faulted state,
234           but sufficient replicas exist to continue functioning. The underly‐
235           ing conditions are as follows:
236
237               o      The  number of checksum errors exceeds acceptable levels
238                      and the device is degraded as an indication  that  some‐
239                      thing  may  be wrong. ZFS continues to use the device as
240                      necessary.
241
242               o      The number of I/O errors exceeds acceptable levels.  The
243                      device  could not be marked as faulted because there are
244                      insufficient replicas to continue functioning.
245
246
247       FAULTED
248
249           One or more top-level vdevs is in the faulted state because one  or
250           more  component devices are offline. Insufficient replicas exist to
251           continue functioning.
252
253           One or more component devices is in the faulted state, and insuffi‐
254           cient replicas exist to continue functioning. The underlying condi‐
255           tions are as follows:
256
257               o      The device could be opened, but  the  contents  did  not
258                      match expected values.
259
260               o      The  number  of I/O errors exceeds acceptable levels and
261                      the device is faulted to  prevent  further  use  of  the
262                      device.
263
264
265       OFFLINE
266
267           The device was explicitly taken offline by the "zpool offline" com‐
268           mand.
269
270
271       ONLINE
272
273           The device is online and functioning.
274
275
276       REMOVED
277
278           The device was physically removed while  the  system  was  running.
279           Device  removal detection is hardware-dependent and may not be sup‐
280           ported on all platforms.
281
282
283       UNAVAIL
284
285           The device could not be opened. If a pool is imported when a device
286           was  unavailable,  then  the  device will be identified by a unique
287           identifier instead of its path since the path was never correct  in
288           the first place.
289
290
291
292       If  a  device  is  removed  and  later  re-attached  to the system, ZFS
293       attempts to put the device online automatically. Device  attach  detec‐
294       tion is hardware-dependent and might not be supported on all platforms.
295
296   Hot Spares
297       ZFS  allows  devices to be associated with pools as "hot spares". These
298       devices are not actively used in the pool, but when  an  active  device
299       fails,  it  is  automatically replaced by a hot spare. To create a pool
300       with hot spares, specify a "spare" vdev with any number of devices. For
301       example,
302
303         # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
304
305
306
307
308       Spares  can  be shared across multiple pools, and can be added with the
309       "zpool add" command and removed with the "zpool remove" command. Once a
310       spare  replacement  is  initiated, a new "spare" vdev is created within
311       the configuration that will remain there until the original  device  is
312       replaced.  At  this  point,  the  hot  spare becomes available again if
313       another device fails.
314
315
       If a pool has a shared spare that is currently being used, the pool
       cannot be exported, since other pools may use this shared spare, which
       may lead to potential data corruption.
319
320
321       An in-progress spare replacement can be cancelled by detaching the  hot
322       spare.  If  the original faulted device is detached, then the hot spare
323       assumes its place in the configuration, and is removed from  the  spare
324       list of all active pools.
325
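       For example, assuming the hot spare c3d0 from the pool created above
       has taken over for a failed device, the in-progress spare replacement
       could be cancelled with a command such as:

         # zpool detach pool c3d0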
326
327       Spares cannot replace log devices.
328
329   Intent Log
330       The  ZFS  Intent Log (ZIL) satisfies POSIX requirements for synchronous
331       transactions. For instance, databases often require their  transactions
332       to  be on stable storage devices when returning from a system call. NFS
333       and other applications can also use fsync() to ensure  data  stability.
334       By  default,  the  intent  log is allocated from blocks within the main
335       pool. However, it might be possible to  get  better  performance  using
336       separate  intent  log  devices  such  as NVRAM or a dedicated disk. For
337       example:
338
339         # zpool create pool c0d0 c1d0 log c2d0
340
341
342
343
344       Multiple log devices can also be specified, and they can  be  mirrored.
345       See  the  EXAMPLES  section  for  an  example of mirroring multiple log
346       devices.
347
348
349       Log devices can be added, replaced, attached,  detached,  and  imported
350       and  exported  as  part of the larger pool. Mirrored log devices can be
351       removed by specifying the top-level mirror for the log.
352
353   Cache Devices
354       Devices can be added to  a  storage  pool  as  "cache  devices."  These
355       devices  provide an additional layer of caching between main memory and
356       disk. For read-heavy workloads, where the  working  set  size  is  much
357       larger  than  what  can  be  cached in main memory, using cache devices
           allows much more of this working set to be served from low latency
359       media.  Using  cache devices provides the greatest performance improve‐
360       ment for random read-workloads of mostly static content.
361
362
363       To create a pool with cache devices, specify a "cache"  vdev  with  any
364       number of devices. For example:
365
366         # zpool create pool c0d0 c1d0 cache c2d0 c3d0
367
368
369
370
371       Cache devices cannot be mirrored or part of a raidz configuration. If a
372       read error is encountered on a cache device, that read I/O is  reissued
373       to  the original storage pool device, which might be part of a mirrored
374       or raidz configuration.
375
376
377       The content of the cache devices is considered volatile, as is the case
378       with other system caches.
379
380   Processes
381       Each imported pool has an associated process, named zpool-poolname. The
382       threads in this process are the pool's I/O  processing  threads,  which
383       handle the compression, checksumming, and other tasks for all I/O asso‐
       ciated with the pool. This process exists to provide visibility into
385       the  CPU  utilization  of  the system's storage pools. The existence of
386       this process is an unstable interface.
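
       For example, the I/O threads of a pool named tank would belong to the
       process zpool-tank, whose CPU usage could be observed with standard
       process tools:

         # ps -ef | grep zpool-tank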
387
388   Properties
389       Each pool has several properties associated with  it.  Some  properties
390       are  read-only  statistics while others are configurable and change the
391       behavior of the pool. The following are read-only properties:
392
393       alloc
394
395           Amount of storage space within the pool that  has  been  physically
396           allocated.
397
398
399       capacity
400
401           Percentage  of  pool space used. This property can also be referred
402           to by its shortened column name, "cap".
403
404
405       dedupratio
406
407           The deduplication ratio specified for a pool, expressed  as a  mul‐
408           tiplier. Deduplication can be turned on by entering the command:
409
410             # zfs set dedup=on dataset
411
412
413           The default value is off.
414
415           dedupratio  is expressed as a single decimal number. For example, a
416           dedupratio value of 1.76 indicates that 1.76  units  of  data  were
417           stored but only 1 unit of disk space was actually consumed.
418
419
420       free
421
422           Number of blocks within the pool that are not allocated.
423
424
425       guid
426
427           A unique identifier for the pool.
428
429
430       health
431
432           The current health of the pool. Health can be "ONLINE", "DEGRADED",
433           "FAULTED", " OFFLINE", "REMOVED", or "UNAVAIL".
434
435
436       size
437
438           Total size of the storage pool.
439
440
441
442       These space usage properties report actual physical space available  to
443       the  storage  pool.  The physical space can be different from the total
444       amount of space that any  contained  datasets  can  actually  use.  The
445       amount of space used in a raidz configuration depends on the character‐
446       istics of the data being written. In addition, ZFS reserves some  space
447       for  internal  accounting  that the zfs(1M) command takes into account,
448       but the zpool command does not. For  non-full  pools  of  a  reasonable
449       size, these effects should be invisible. For small pools, or pools that
450       are close to being completely full, these discrepancies may become more
451       noticeable.
452
453
454       The following property can be set at creation time and import time:
455
456       altroot
457
458           Alternate  root  directory.  If set, this directory is prepended to
459           any mount points within the pool. This can be used  when  examining
460           an  unknown pool where the mount points cannot be trusted, or in an
461           alternate boot environment, where the typical paths are not  valid.
462           altroot  is  not  a persistent property. It is valid only while the
463           system is up. Setting altroot  defaults  to  using  cachefile=none,
           though this may be overridden using an explicit setting.
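
           For example, a pool named tank could be imported for examination
           under an assumed alternate root directory /a with:

             # zpool import -R /a tank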
465
466
467
468       The  following  properties can be set at creation time and import time,
469       and later changed with the zpool set command:
470
471       autoexpand=on | off
472
473           Controls automatic pool expansion when the underlying LUN is grown.
474           If set to on, the pool will be resized according to the size of the
475           expanded device. If the device is part of a mirror  or  raidz  then
476           all  devices within that mirror/raidz group must be expanded before
477           the new space is made available to the pool. The  default  behavior
478           is off. This property can also be referred to by its shortened col‐
479           umn name, expand.
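
           For example, assuming the LUNs backing a pool named tank have been
           grown, automatic expansion could be enabled with:

             # zpool set autoexpand=on tank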
480
481
482       autoreplace=on | off
483
484           Controls automatic device replacement.  If  set  to  "off",  device
485           replacement  must  be  initiated  by the administrator by using the
486           "zpool replace" command. If set to "on", any new device,  found  in
487           the  same physical location as a device that previously belonged to
488           the pool, is automatically  formatted  and  replaced.  The  default
489           behavior  is  "off".  This  property can also be referred to by its
490           shortened column name, "replace".
491
492
493       bootfs=pool/dataset
494
495           Identifies the default bootable dataset for  the  root  pool.  This
496           property  is  expected  to  be  set  mainly by the installation and
497           upgrade programs.
498
499
500       cachefile=path | none
501
           Controls the location where the pool configuration is cached.
503           Discovering  all  pools on system startup requires a cached copy of
504           the configuration data that is stored on the root file system.  All
505           pools  in  this  cache  are  automatically imported when the system
506           boots. Some environments, such as install and clustering,  need  to
507           cache  this  information  in a different location so that pools are
508           not automatically imported. Setting this property caches  the  pool
509           configuration  in  a  different location that can later be imported
510           with "zpool import -c". Setting it to the special value "none" cre‐
511           ates  a  temporary pool that is never cached, and the special value
512           '' (empty string) uses the default location.
513
514           Multiple pools can share the same cache file.  Because  the  kernel
515           destroys  and recreates this file when pools are added and removed,
516           care should be taken when attempting to access this file. When  the
517           last  pool  using a cachefile is exported or destroyed, the file is
518           removed.
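
           For example, assuming a hypothetical cache file location
           /etc/zfs/alt.cache, a pool created with that setting could later be
           imported from the same file with:

             # zpool create -o cachefile=/etc/zfs/alt.cache pool c0d0 c1d0
             # zpool import -c /etc/zfs/alt.cache pool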
519
520
521       delegation=on | off
522
523           Controls whether a non-privileged user is granted access  based  on
524           the  dataset  permissions  defined  on the dataset. See zfs(1M) for
525           more information on ZFS delegated administration.
526
527
528       failmode=wait | continue | panic
529
530           Controls the system behavior in  the  event  of  catastrophic  pool
531           failure.  This condition is typically a result of a loss of connec‐
532           tivity to the underlying storage device(s)  or  a  failure  of  all
533           devices  within  the  pool. The behavior of such an event is deter‐
534           mined as follows:
535
536           wait
537
538               Blocks all I/O access to the pool until the device connectivity
539               is  recovered and the errors are cleared. A pool remains in the
540               wait state until the device issue  is  resolved.  This  is  the
541               default behavior.
542
543
544           continue
545
546               Returns  EIO  to any new write I/O requests but allows reads to
547               any of the remaining healthy devices. Any write  requests  that
548               have yet to be committed to disk would be blocked.
549
550
551           panic
552
553               Prints  out  a  message  to  the console and generates a system
554               crash dump.
555
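           For example, assuming a pool named tank, the failure behavior could
           be changed from the default with:

             # zpool set failmode=continue tank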
556
557
558       listsnaps=on | off
559
560           Controls whether information about snapshots associated  with  this
561           pool  is  output  when "zfs list" is run without the -t option. The
562           default value is "off".
563
564
565       version=version
566
567           The current on-disk version of the pool. This can be increased, but
568           never decreased. The preferred method of updating pools is with the
569           "zpool upgrade" command, though this property can be  used  when  a
570           specific  version is needed for backwards compatibility. This prop‐
571           erty can be any number between 1 and the current  version  reported
572           by "zpool upgrade -v".
573
574
575   Subcommands
576       All  subcommands  that modify state are logged persistently to the pool
577       in their original form.
578
579
580       The zpool command provides subcommands to create  and  destroy  storage
581       pools, add capacity to storage pools, and provide information about the
582       storage pools. The following subcommands are supported:
583
584       zpool -?
585
586           Displays a help message.
587
588
589       zpool add [-fn] pool vdev ...
590
591           Adds the specified virtual devices to  the  given  pool.  The  vdev
592           specification  is  described  in the "Virtual Devices" section. The
593           behavior of the -f option, and  the  device  checks  performed  are
594           described in the "zpool create" subcommand.
595
596           -f
597
598               Forces  use  of  vdevs, even if they appear in use or specify a
599               conflicting replication level. Not all devices can be  overrid‐
600               den in this manner.
601
602
603           -n
604
605               Displays  the configuration that would be used without actually
606               adding the vdevs. The actual pool creation can still  fail  due
607               to insufficient privileges or device sharing.
608
609           Do  not  add a disk that is currently configured as a quorum device
610           to a zpool. After a disk is in the pool, that disk can then be con‐
611           figured as a quorum device.
612
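           For example, using the pool and devices from the EXAMPLES section,
           the effect of adding another mirror to the pool tank could be
           previewed with:

             # zpool add -n tank mirror c1t0d0 c1t1d0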
613
614       zpool attach [-f] pool device new_device
615
616           Attaches  new_device  to  an  existing  zpool  device. The existing
617           device cannot be part of a raidz configuration. If  device  is  not
           currently part of a mirrored configuration, it is automatically
           transformed into a two-way mirror of device and new_device. If
620           device  is part of a two-way mirror, attaching new_device creates a
621           three-way mirror, and so on. In either case, new_device  begins  to
622           resilver immediately.
623
624           -f
625
               Forces use of new_device, even if it appears to be in use. Not
627               all devices can be overridden in this manner.
628
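           For example, assuming the mirrored pool tank from the EXAMPLES
           section, an otherwise unused disk (here, illustratively, c0t3d0)
           could be attached to the mirror containing c0t0d0 with:

             # zpool attach tank c0t0d0 c0t3d0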
629
630
631       zpool clear [-F [-n]] pool [device] ...
632
633           Clears device errors in a pool. If no arguments are specified,  all
634           device  errors  within the pool are cleared. If one or more devices
635           is specified, only  those  errors  associated  with  the  specified
636           device or devices are cleared.
637
638           -F
639
640               Initiates  recovery  mode  for  an unopenable pool. Attempts to
641               discard the last few transactions in the pool to return  it  to
642               an  openable  state.  Not all damaged pools can be recovered by
643               using this option. If successful, the data from  the  discarded
644               transactions is irretrievably lost.
645
646
647           -n
648
649               Used  in combination with the -F flag. Check whether discarding
650               transactions would make the pool openable, but do not  actually
651               discard any transactions.
652
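           For example, assuming a pool named tank, the error counters for a
           single device could be cleared with:

             # zpool clear tank c0t0d0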
653
654
655       zpool create [-fn] [-o property=value] ... [-O file-system-prop‐
656       erty=value] ... [-m mountpoint] [-R root] pool vdev ...
657
658           Creates a new storage pool containing the virtual devices specified
659           on  the  command  line. The pool name must begin with a letter, and
660           can only contain alphanumeric  characters  as  well  as  underscore
661           ("_"),  dash ("-"), and period ("."). The pool names mirror, raidz,
662           spare, and log are reserved, as are names beginning with  the  pat‐
663           tern  c[0-9].  The  vdev specification is described in the "Virtual
664           Devices" section.
665
666           The command verifies that each device specified is  accessible  and
667           not  currently  in  use  by another subsystem. There are some uses,
668           such as being currently mounted, or specified as the dedicated dump
           device, that prevent a device from ever being used by ZFS. Other
670           uses, such as having a preexisting UFS file system, can be overrid‐
671           den with the -f option.
672
673           The  command also checks that the replication strategy for the pool
674           is consistent. An attempt to combine  redundant  and  non-redundant
675           storage  in a single pool, or to mix disks and files, results in an
676           error unless -f is specified. The use of differently sized  devices
677           within  a  single raidz or mirror group is also flagged as an error
678           unless -f is specified.
679
680           Unless the -R option is  specified,  the  default  mount  point  is
681           "/pool".  The  mount point must not exist or must be empty, or else
682           the root dataset cannot be mounted. This can be overridden with the
683           -m option.
684
685           -f
686
687               Forces  use  of  vdevs, even if they appear in use or specify a
688               conflicting replication level. Not all devices can be  overrid‐
689               den in this manner.
690
691
692           -n
693
694               Displays  the configuration that would be used without actually
695               creating the pool. The actual pool creation can still fail  due
696               to insufficient privileges or device sharing.
697
698
699           -o property=value [-o property=value] ...
700
701               Sets  the  given  pool properties. See the "Properties" section
702               for a list of valid properties that can be set.
703
704
705           -O file-system-property=value
706           [-O file-system-property=value] ...
707
708               Sets the given file system properties in the root  file  system
709               of the pool. See the "Properties" section of zfs(1M) for a list
710               of valid properties that can be set.
711
712
713           -R root
714
715               Equivalent to "-o cachefile=none,altroot=root"
716
717
718           -m mountpoint
719
720               Sets the mount point for the root dataset.  The  default  mount
721               point is "/pool" or "altroot/pool" if altroot is specified. The
722               mount point must be an absolute path, "legacy", or "none".  For
723               more information on dataset mount points, see zfs(1M).
724
725
726
727       zpool destroy [-f] pool
728
729           Destroys the given pool, freeing up any devices for other use. This
730           command tries to unmount any active datasets before destroying  the
731           pool.
732
733           -f
734
735               Forces  any  active  datasets  contained  within the pool to be
736               unmounted.
737
738
739
740       zpool detach pool device
741
742           Detaches device from a mirror. The operation is  refused  if  there
743           are no other valid replicas of the data.
744
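           For example, assuming the mirrored pool tank from the EXAMPLES
           section, one side of a mirror could be detached with:

             # zpool detach tank c0t1d0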
745
746       zpool export [-f] pool ...
747
748           Exports  the given pools from the system. All devices are marked as
749           exported, but are still considered in use by other subsystems.  The
750           devices can be moved between systems (even those of different endi‐
751           anness) and imported as long as a sufficient number of devices  are
752           present.
753
754           Before  exporting  the  pool,  all  datasets  within  the  pool are
           unmounted. A pool cannot be exported if it has a shared spare that
756           is currently being used.
757
758           For  pools  to  be  portable, you must give the zpool command whole
759           disks, not just slices, so that ZFS can label the disks with porta‐
760           ble  EFI  labels. Otherwise, disk drivers on platforms of different
761           endianness will not recognize the disks.
762
763           -f
764
765               Forcefully unmount all datasets, using the  "unmount  -f"  com‐
766               mand.
767
768               This  command  will forcefully export the pool even if it has a
769               shared spare that is currently being used.  This  may  lead  to
770               potential data corruption.
771
772
773
774       zpool get "all" | property[,...] pool ...
775
776           Retrieves  the given list of properties (or all properties if "all"
777           is used) for the specified storage pool(s).  These  properties  are
778           displayed with the following fields:
779
                    name          Name of storage pool
                    property      Property name
                    value         Property value
                    source        Property source, either 'default' or 'local'.
784
785
786           See  the "Properties" section for more information on the available
787           pool properties.
788
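           For example, a subset of properties could be retrieved for the
           pool tank with:

             # zpool get size,capacity,health tank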
789
790       zpool history [-il] [pool] ...
791
792           Displays the command history of the specified pools or all pools if
793           no pool is specified.
794
795           -i
796
797               Displays  internally logged ZFS events in addition to user ini‐
798               tiated events.
799
800
801           -l
802
               Displays log records in long format, which, in addition to the
               standard format, includes the user name, the hostname, and the
               zone in which the operation was performed.
806
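           For example, the long-format history of the pool tank, including
           internally logged events, could be displayed with:

             # zpool history -il tank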
807
808
809       zpool import [-d dir | -c cachefile] [-D]
810
811           Lists pools available to import. If the -d option is not specified,
812           this  command searches for devices in "/dev/dsk". The -d option can
813           be specified multiple times, and all directories are  searched.  If
814           the  device  appears  to  be part of an exported pool, this command
815           displays a summary of the pool with the name of the pool, a numeric
816           identifier,  as  well  as the vdev layout and current health of the
           device for each device or file. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) are not
           listed unless the -D option is specified.
820
821           The numeric identifier is unique, and can be used  instead  of  the
822           pool  name when multiple exported pools of the same name are avail‐
823           able.
824
825           -c cachefile
826
827               Reads configuration from the given cachefile that  was  created
828               with  the  "cachefile"  pool  property.  This cachefile is used
829               instead of searching for devices.
830
831
832           -d dir
833
834               Searches for devices or files in dir.  The  -d  option  can  be
835               specified multiple times.
836
837
838           -D
839
840               Lists destroyed pools only.
841
842
843
844       zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c
845       cachefile] [-D] [-f] [-R root] [-F [-n]] -a
846
847           Imports all pools found in the search directories. Identical to the
848           previous command, except that all pools with a sufficient number of
           devices available are imported. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) will not be
           imported unless the -D option is specified.
852
853           -o mntopts
854
855               Comma-separated list of mount  options  to  use  when  mounting
856               datasets  within  the  pool.  See  zfs(1M) for a description of
857               dataset properties and mount options.
858
859
860           -o property=value
861
862               Sets the specified property  on  the  imported  pool.  See  the
863               "Properties" section for more information on the available pool
864               properties.
865
866
867           -c cachefile
868
869               Reads configuration from the given cachefile that  was  created
870               with  the  "cachefile"  pool  property.  This cachefile is used
871               instead of searching for devices.
872
873
874           -d dir
875
876               Searches for devices or files in dir.  The  -d  option  can  be
877               specified  multiple times. This option is incompatible with the
878               -c option.
879
880
881           -D
882
883               Imports destroyed pools only. The -f option is also required.
884
885
886           -f
887
888               Forces import, even if  the  pool  appears  to  be  potentially
889               active.
890
891
892           -F
893
894               Recovery  mode for a non-importable pool. Attempt to return the
895               pool to an importable state by discarding the last few transac‐
896               tions.  Not  all  damaged  pools can be recovered by using this
897               option. If successful, the data from the discarded transactions
898               is  irretrievably  lost.  This option is ignored if the pool is
899               importable or already imported.
900
901
902           -a
903
904               Searches for and imports all pools found.
905
906
907           -R root
908
909               Sets the "cachefile" property to "none" and the "altroot" prop‐
910               erty to "root".
911
912
913           -n
914
915               Used  with  the  -F  recovery option. Determines whether a non-
916               importable pool can be made  importable  again,  but  does  not
917               actually perform the pool recovery. For more details about pool
918               recovery mode, see the -F option, above.
919
920
921
922       zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c
923       cachefile] [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]
924
925           Imports  a  specific  pool. A pool can be identified by its name or
926           the numeric identifier.  If  newpool  is  specified,  the  pool  is
927           imported using the name newpool. Otherwise, it is imported with the
928           same name as its exported name.
929
930           If a device is removed from a system without running "zpool export"
931           first,  the  device  appears  as  potentially  active. It cannot be
932           determined if this was a failed export, or whether  the  device  is
933           really  in  use  from another host. To import a pool in this state,
934           the -f option is required.
935
936           -o mntopts
937
938               Comma-separated list of mount  options  to  use  when  mounting
939               datasets  within  the  pool.  See  zfs(1M) for a description of
940               dataset properties and mount options.
941
942
943           -o property=value
944
945               Sets the specified property  on  the  imported  pool.  See  the
946               "Properties" section for more information on the available pool
947               properties.
948
949
950           -c cachefile
951
952               Reads configuration from the given cachefile that  was  created
953               with  the  "cachefile"  pool  property.  This cachefile is used
954               instead of searching for devices.
955
956
957           -d dir
958
959               Searches for devices or files in dir.  The  -d  option  can  be
960               specified  multiple times. This option is incompatible with the
961               -c option.
962
963
964           -D
965
               Imports a destroyed pool. The -f option is also required.
967
968
969           -f
970
971               Forces import, even if  the  pool  appears  to  be  potentially
972               active.
973
974
975           -F
976
977               Recovery  mode for a non-importable pool. Attempt to return the
978               pool to an importable state by discarding the last few transac‐
979               tions.  Not  all  damaged  pools can be recovered by using this
980               option. If successful, the data from the discarded transactions
981               is  irretrievably  lost.  This option is ignored if the pool is
982               importable or already imported.
983
984
985           -R root
986
987               Sets the "cachefile" property to "none" and the "altroot" prop‐
988               erty to "root".
989
990
991           -n
992
993               Used  with  the  -F  recovery option. Determines whether a non-
994               importable pool can be made  importable  again,  but  does  not
995               actually perform the pool recovery. For more details about pool
996               recovery mode, see the -F option, above.
997
998
999
       zpool iostat [-T u | d] [-v] [pool] ... [interval [count]]
1001
1002           Displays I/O statistics for the given pools. When given  an  inter‐
1003           val, the statistics are printed every interval seconds until Ctrl-C
1004           is pressed. If no pools are specified, statistics for every pool in
           the system are shown. If count is specified, the command exits after
1006           count reports are printed.
1007
1008           -T u | d
1009
1010               Display a time stamp.
1011
1012               Specify u for a printed representation of the  internal  repre‐
1013               sentation  of  time.  See  time(2). Specify d for standard date
1014               format. See date(1).
1015
1016
1017           -v
1018
1019               Verbose statistics. Reports  usage  statistics  for  individual
1020               vdevs within the pool, in addition to the pool-wide statistics.
1021
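           For example, date-stamped statistics for the pool tank could be
           printed every 5 seconds, 10 times, with:

             # zpool iostat -T d tank 5 10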
1022
1023
1024       zpool list [-H] [-o props[,...]] [pool] ...
1025
1026           Lists  the  given pools along with a health status and space usage.
1027           When given no arguments, all pools in the system are listed.
1028
1029           -H
1030
1031               Scripted mode. Do not display headers, and separate fields by a
1032               single tab instead of arbitrary space.
1033
1034
1035           -o props
1036
1037               Comma-separated list of properties to display. See the "Proper‐
1038               ties" section for a list of valid properties. The default  list
1039               is name, size, allocated, free, capacity, health, altroot.
1040
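           For example, a tab-separated listing suitable for scripts could be
           produced with:

             # zpool list -H -o name,size,health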
1041
1042
1043       zpool offline [-t] pool device ...
1044
1045           Takes  the  specified  physical device offline. While the device is
1046           offline, no attempt is made to read or write to the device.
1047
1048           This command is not applicable to spares or cache devices.
1049
1050           -t
1051
1052               Temporary. Upon reboot, the specified physical  device  reverts
1053               to its previous state.
1054
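           For example, assuming a pool named tank, a device could be taken
           offline until the next reboot with:

             # zpool offline -t tank c0t0d0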
1055
1056
1057       zpool online [-e] pool device...
1058
1059           Brings the specified physical device online.
1060
1061           This command is not applicable to spares or cache devices.
1062
1063           -e
1064
1065               Expand  the device to use all available space. If the device is
1066               part of a mirror or raidz then all  devices  must  be  expanded
1067               before the new space will become available to the pool.
1068
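           For example, the same device could later be brought back online,
           expanding it to use any newly available space, with:

             # zpool online -e tank c0t0d0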
1069
1070
1071       zpool remove pool device ...
1072
1073           Removes  the specified device from the pool. This command currently
1074           only supports removing hot spares, cache, and log devices.  A  mir‐
1075           rored  log device can be removed by specifying the top-level mirror
1076           for the log. Non-log devices that are part of a mirrored configura‐
1077           tion  can  be removed using the zpool detach command. Non-redundant
1078           and raidz devices cannot be removed from a pool.
1079
1080
1081       zpool replace [-f] pool old_device [new_device]
1082
1083           Replaces old_device with new_device. This is equivalent to  attach‐
1084           ing  new_device,  waiting  for  it  to resilver, and then detaching
1085           old_device.
1086
1087           The size of new_device must be greater than or equal to the minimum
1088           size of all the devices in a mirror or raidz configuration.
1089
1090           new_device  is required if the pool is not redundant. If new_device
1091           is not specified, it defaults to old_device. This form of  replace‐
1092           ment is useful after an existing disk has failed and has been phys‐
1093           ically replaced. In this case, the  new  disk  may  have  the  same
1094           /dev/dsk  path as the old device, even though it is actually a dif‐
1095           ferent disk. ZFS recognizes this.
1096
1097           -f
1098
               Forces use of new_device, even if it appears to be in use. Not
1100               all devices can be overridden in this manner.
1101
1102
1103
1104       zpool scrub [-s] pool ...
1105
1106           Begins  a scrub. The scrub examines all data in the specified pools
1107           to verify that it checksums correctly. For  replicated  (mirror  or
1108           raidz)  devices,  ZFS  automatically  repairs any damage discovered
1109           during the scrub. The "zpool status" command reports  the  progress
1110           of  the  scrub and summarizes the results of the scrub upon comple‐
1111           tion.
1112
1113           Scrubbing and resilvering are very similar operations. The  differ‐
1114           ence  is  that  resilvering only examines data that ZFS knows to be
1115           out of date (for example, when attaching a new device to  a  mirror
1116           or  replacing  an  existing device), whereas scrubbing examines all
1117           data to discover silent errors due to hardware faults or disk fail‐
1118           ure.
1119
1120           Because scrubbing and resilvering are I/O-intensive operations, ZFS
1121           only allows one at a time. If a scrub is already in  progress,  the
1122           "zpool  scrub"  command  terminates it and starts a new scrub. If a
1123           resilver is in progress, ZFS does not allow a scrub to  be  started
1124           until the resilver completes.
1125
1126           -s
1127
1128               Stop scrubbing.
1129
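           For example, a scrub of the pool tank could be started, and stopped
           again if necessary, with:

             # zpool scrub tank
             # zpool scrub -s tank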
1130
1131
1132       zpool set property=value pool
1133
1134           Sets the given property on the specified pool. See the "Properties"
1135           section for more information on what  properties  can  be  set  and
1136           acceptable values.
1137
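           For example, automatic device replacement could be enabled on the
           pool tank with:

             # zpool set autoreplace=on tank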
1138
1139       zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool
1140       newpool [device ...]
1141
1142           Splits off one disk from each mirrored top-level vdev in a pool and
1143           creates a new pool from the split-off disks. The original pool must
1144           be made up of one or more mirrors and must not be in the process of
1145           resilvering.  The  split subcommand chooses the last device in each
1146           mirror vdev unless overridden by a device specification on the com‐
1147           mand line.
1148
1149           When   using  a  device  argument,  split  includes  the  specified
1150           device(s) in a new pool and, should any devices remain unspecified,
1151           assigns  the  last  device  in each mirror vdev to that pool, as it
1152           does normally. If you are uncertain about the outcome  of  a  split
1153           command,  use the -n ("dry-run") option to ensure your command will
1154           have the effect you intend.
1155
1156           -R altroot
1157
1158               Automatically import the newly created  pool  after  splitting,
1159               using the specified altroot parameter for the new pool's alter‐
1160               nate root. See the altroot description in the "Properties" sec‐
1161               tion, above.
1162
1163
1164           -n
1165
1166               Displays  the configuration that would be created without actu‐
1167               ally splitting the pool. The actual pool split could still fail
1168               due to insufficient privileges or device status.
1169
1170
1171           -o mntopts
1172
1173               Comma-separated  list  of  mount  options  to use when mounting
1174               datasets within the pool. See  zfs(1M)  for  a  description  of
1175               dataset properties and mount options. Valid only in conjunction
1176               with the -R option.
1177
1178
1179           -o property=value
1180
1181               Sets the specified property on the new pool. See  the  "Proper‐
1182               ties"  section,  above,  for  more information on the available
1183               pool properties.
1184
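           For example, assuming the mirrored pool tank from the EXAMPLES
           section and an assumed new pool name tank2, the split could first
           be previewed and then performed with:

             # zpool split -n tank tank2
             # zpool split tank tank2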
1185
1186
1187       zpool status [-xv] [pool] ...
1188
1189           Displays the detailed health status for the given pools. If no pool
1190           is  specified,  then  the status of each pool in the system is dis‐
1191           played. For more information on pool and  device  health,  see  the
1192           "Device Failure and Recovery" section.
1193
1194           If  a  scrub  or  resilver is in progress, this command reports the
1195           percentage done and the estimated time to completion. Both of these
1196           are  only  approximate,  because the amount of data in the pool and
1197           the other workloads on the system can change.
1198
1199           -x
1200
1201               Only display status for pools that are exhibiting errors or are
1202               otherwise unavailable.
1203
1204
1205           -v
1206
1207               Displays  verbose  data  error information, printing out a com‐
1208               plete list of all data errors  since  the  last  complete  pool
1209               scrub.
1210
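           For example, only pools exhibiting problems, together with verbose
           error information, could be displayed with:

             # zpool status -xv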
1211
1212
1213       zpool upgrade
1214
1215           Displays all pools formatted using a different ZFS on-disk version.
1216           Older versions can continue to be used, but some features  may  not
1217           be available. These pools can be upgraded using "zpool upgrade -a".
1218           Pools that are formatted with a more recent version are  also  dis‐
1219           played, although these pools will be inaccessible on the system.
1220
1221
1222       zpool upgrade -v
1223
1224           Displays  ZFS  versions supported by the current software. The cur‐
1225           rent ZFS versions and all  previous  supported  versions  are  dis‐
1226           played,  along  with  an  explanation of the features provided with
1227           each version.
1228
1229
1230       zpool upgrade [-V version] -a | pool ...
1231
1232           Upgrades the given pool to the latest on-disk version. Once this is
1233           done,  the  pool  will  no  longer be accessible on systems running
1234           older versions of the software.
1235
1236           -a
1237
1238               Upgrades all pools.
1239
1240
1241           -V version
1242
1243               Upgrade to the specified version. If the -V flag is not  speci‐
1244               fied,  the  pool  is  upgraded to the most recent version. This
1245               option can only be used to increase  the  version  number,  and
1246               only up to the most recent version supported by this software.
1247
1248
1249

EXAMPLES

1251       Example 1 Creating a RAID-Z Storage Pool
1252
1253
1254       The following command creates a pool with a single raidz root vdev that
1255       consists of six disks.
1256
1257
1258         # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1259
1260
1261
1262       Example 2 Creating a Mirrored Storage Pool
1263
1264
1265       The following command creates a pool with two mirrors, where each  mir‐
1266       ror contains two disks.
1267
1268
1269         # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1270
1271
1272
1273       Example 3 Creating a ZFS Storage Pool by Using Slices
1274
1275
1276       The following command creates an unmirrored pool using two disk slices.
1277
1278
1279         # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1280
1281
1282
1283       Example 4 Creating a ZFS Storage Pool by Using Files
1284
1285
1286       The following command creates an unmirrored pool using files. While not
1287       recommended, a pool based on files can be useful for experimental  pur‐
1288       poses.
1289
1290
1291         # zpool create tank /path/to/file/a /path/to/file/b
1292
1293
1294
1295       Example 5 Adding a Mirror to a ZFS Storage Pool
1296
1297
1298       The  following  command  adds  two  mirrored  disks to the pool "tank",
1299       assuming the pool is already made up of two-way mirrors. The additional
1300       space is immediately available to any datasets within the pool.
1301
1302
1303         # zpool add tank mirror c1t0d0 c1t1d0
1304
1305
1306
1307       Example 6 Listing Available ZFS Storage Pools
1308
1309
1310       The following command lists all available pools on the system.
1311
1312
1313         # zpool list
1314         NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
1315         pool    136G   109M   136G     0%  3.00x  ONLINE  -
1316         rpool  67.5G  12.6G  54.9G    18%  1.01x  ONLINE  -
1317
1318
1319
1320       Example 7 Listing All Properties for a Pool
1321
1322
1323       The following command lists all the properties for a pool.
1324
1325
1326         % zpool get all pool
1327         NAME  PROPERTY       VALUE       SOURCE
1328         pool  size           136G        -
1329         pool  capacity       0%          -
1330         pool  altroot        -           default
1331         pool  health         ONLINE      -
1332         pool  guid           15697759092019394988  default
1333         pool  version        21          default
1334         pool  bootfs         -           default
1335         pool  delegation     on          default
1336         pool  autoreplace    off         default
1337         pool  cachefile      -           default
1338         pool  failmode       wait        default
1339         pool  listsnapshots  off         default
1340         pool  autoexpand     off         default
1341         pool  dedupratio     3.00x       -
1342         pool  free           136G        -
1343         pool  allocated      109M        -
1344
1345
1346
1347       Example 8 Destroying a ZFS Storage Pool
1348
1349
1350       The  following  command  destroys the pool "tank" and any datasets con‐
1351       tained within.
1352
1353
1354         # zpool destroy -f tank
1355
1356
1357
1358       Example 9 Exporting a ZFS Storage Pool
1359
1360
1361       The following command exports the devices in pool tank so that they can
1362       be relocated or later imported.
1363
1364
1365         # zpool export tank
1366
1367
1368
1369       Example 10 Importing a ZFS Storage Pool
1370
1371
1372       The  following  command  displays available pools, and then imports the
1373       pool "tank" for use on the system.
1374
1375
1376
1377       The results from this command are similar to the following:
1378
1379
1380         # zpool import
1381           pool: tank
1382             id: 7678868315469843843
1383          state: ONLINE
1384         action: The pool can be imported using its name or numeric identifier.
1385         config:
1386
1387                 tank        ONLINE
1388                   mirror-0  ONLINE
1389                     c1t2d0  ONLINE
1390                     c1t3d0  ONLINE
1391
1392         # zpool import tank
1393
1394
1395
1396       Example 11 Upgrading All ZFS Storage Pools to the Current Version
1397
1398
       The following command upgrades all ZFS storage pools to the current
1400       version of the software.
1401
1402
1403         # zpool upgrade -a
1404         This system is currently running ZFS pool version 19.
1405
1406         All pools are formatted using this version.
1407
1408
1409
1410       Example 12 Managing Hot Spares
1411
1412
1413       The following command creates a new pool with an available hot spare:
1414
1415
1416         # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1417
1418
1419
1420
1421       If  one  of  the  disks  were to fail, the pool would be reduced to the
1422       degraded state. The failed device can be replaced using  the  following
1423       command:
1424
1425
1426         # zpool replace tank c0t0d0 c0t3d0
1427
1428
1429
1430
1431       Once  the  data has been resilvered, the spare is automatically removed
       and is made available should another device fail. The hot spare can be
1433       permanently removed from the pool using the following command:
1434
1435
1436         # zpool remove tank c0t2d0
1437
1438
1439
1440       Example 13 Creating a ZFS Pool with Mirrored Separate Intent Logs
1441
1442
       The following command creates a ZFS storage pool consisting of two
1444       two-way mirrors and mirrored log devices:
1445
1446
1447         # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1448            c4d0 c5d0
1449
1450
1451
1452       Example 14 Adding Cache Devices to a ZFS Pool
1453
1454
1455       The following command adds two disks for use as cache devices to a  ZFS
1456       storage pool:
1457
1458
1459         # zpool add pool cache c2d0 c3d0
1460
1461
1462
1463
1464       Once  added,  the  cache  devices gradually fill with content from main
1465       memory. Depending on the size of your cache devices, it could take over
1466       an hour for them to fill. Capacity and reads can be monitored using the
       iostat subcommand as follows:
1468
1469
1470         # zpool iostat -v pool 5
1471
1472
1473
1474       Example 15 Removing a Mirrored Log Device
1475
1476
1477       The following command removes the mirrored log device mirror-2.
1478
1479
1480
1481       Given this configuration:
1482
1483
1484            pool: tank
1485           state: ONLINE
1486           scrub: none requested
1487         config:
1488
1489                  NAME        STATE     READ WRITE CKSUM
1490                  tank        ONLINE       0     0     0
1491                    mirror-0  ONLINE       0     0     0
1492                      c6t0d0  ONLINE       0     0     0
1493                      c6t1d0  ONLINE       0     0     0
1494                    mirror-1  ONLINE       0     0     0
1495                      c6t2d0  ONLINE       0     0     0
1496                      c6t3d0  ONLINE       0     0     0
1497                  logs
1498                    mirror-2  ONLINE       0     0     0
1499                      c4t0d0  ONLINE       0     0     0
1500                      c4t1d0  ONLINE       0     0     0
1501
1502
1503
1504
1505       The command to remove the mirrored log mirror-2 is:
1506
1507
1508         # zpool remove tank mirror-2
1509
1510
1511
1512       Example 16 Recovering a Faulted ZFS Pool
1513
1514
1515       If a pool is faulted but recoverable, a message indicating  this  state
1516       is  provided  by  zpool  status  if  the pool was cached (see cachefile
1517       above), or as part of the error output from a failed  zpool  import  of
1518       the pool.
1519
1520
1521
1522       Recover a cached pool with the zpool clear command:
1523
1524
1525         # zpool clear -F data
1526         Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
1527         Discarded approximately 29 seconds of transactions.
1528
1529
1530
1531
1532       If  the  pool  configuration  was not cached, use zpool import with the
1533       recovery mode flag:
1534
1535
1536         # zpool import -F data
1537         Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
1538         Discarded approximately 29 seconds of transactions.
1539
1540
1541

EXIT STATUS

1543       The following exit values are returned:
1544
1545       0
1546
1547           Successful completion.
1548
1549
1550       1
1551
1552           An error occurred.
1553
1554
1555       2
1556
1557           Invalid command line options were specified.
1558
1559

ATTRIBUTES

1561       See attributes(5) for descriptions of the following attributes:
1562
1563
1564
1565
1566       ┌─────────────────────────────┬─────────────────────────────┐
1567       │      ATTRIBUTE TYPE         │      ATTRIBUTE VALUE        │
1568       ├─────────────────────────────┼─────────────────────────────┤
1569       │Availability                 │SUNWzfsu                     │
1570       ├─────────────────────────────┼─────────────────────────────┤
1571       │Interface Stability          │Committed                    │
1572       └─────────────────────────────┴─────────────────────────────┘
1573

SEE ALSO

1575       zfs(1M), attributes(5)
1576
1577
1578
SunOS 5.11                        4 Jan 2010                         zpool(1M)