zpool(1M)               System Administration Commands               zpool(1M)



NAME

       zpool - configures ZFS storage pools

SYNOPSIS

       zpool [-?]

       zpool add [-fn] [-o property=value] pool vdev ...

       zpool attach [-f] [-o property=value] pool device new_device

       zpool clear [-F [-n]] pool [device]

       zpool create [-fn] [-o property=value] ... [-O file-system-property=value]
            ... [-m mountpoint] [-R root] pool vdev ...

       zpool destroy [-f] pool

       zpool detach pool device

       zpool export [-f] pool ...

       zpool get "all" | property[,...] pool ...

       zpool history [-il] [pool] ...

       zpool import [-d dir] [-D]

       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-R root] [-F [-n]] -a

       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]

       zpool iostat [-T u | d] [-v] [pool] ... [interval [count]]

       zpool list [-H] [-o property[,...]] [pool] ...

       zpool offline [-t] pool device ...

       zpool online pool device ...

       zpool remove pool device ...

       zpool replace [-f] pool device [new_device]

       zpool scrub [-s] pool ...

       zpool set property=value pool

       zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool
            newpool [device ...]

       zpool status [-xv] [pool] ...

       zpool upgrade

       zpool upgrade -v

       zpool upgrade [-V version] -a | pool ...

DESCRIPTION

       The zpool command configures ZFS storage pools. A storage pool is a
       collection of devices that provides physical storage and data
       replication for ZFS datasets.

       All datasets within a storage pool share the same space. See zfs(1M)
       for information on managing datasets.

   Virtual Devices (vdevs)
       A "virtual device" describes a single device or a collection of devices
       organized according to certain performance and fault characteristics.
       The following virtual devices are supported:

       disk

           A block device, typically located under /dev/dsk. ZFS can use
           individual slices or partitions, though the recommended mode of
           operation is to use whole disks. A disk can be specified by a full
           path, or it can be a shorthand name (the relative portion of the
           path under "/dev/dsk"). A whole disk can be specified by omitting
           the slice or partition designation. For example, "c0t0d0" is
           equivalent to "/dev/dsk/c0t0d0s2". When given a whole disk, ZFS
           automatically labels the disk, if necessary.

       file

           A regular file. The use of files as a backing store is strongly
           discouraged. It is designed primarily for experimental purposes, as
           the fault tolerance of a file is only as good as the file system of
           which it is a part. A file must be specified by a full path.

       mirror

           A mirror of two or more devices. Data is replicated in an identical
           fashion across all components of a mirror. A mirror with N disks of
           size X can hold X bytes and can withstand (N-1) devices failing
           before data integrity is compromised.

       raidz
       raidz1
       raidz2
       raidz3

           A variation on RAID-5 that allows for better distribution of parity
           and eliminates the "RAID-5 write hole" (in which data and parity
           become inconsistent after a power loss). Data and parity are
           striped across all disks within a raidz group.

           A raidz group can have single, double, or triple parity, meaning
           that the raidz group can sustain one, two, or three failures,
           respectively, without losing any data. The raidz1 vdev type
           specifies a single-parity raidz group; the raidz2 vdev type
           specifies a double-parity raidz group; and the raidz3 vdev type
           specifies a triple-parity raidz group. The raidz vdev type is an
           alias for raidz1.

           A raidz group with N disks of size X and P parity disks can hold
           approximately (N-P)*X bytes and can withstand P device(s) failing
           before data integrity is compromised. The minimum number of devices
           in a raidz group is one more than the number of parity disks. The
           recommended number is between 3 and 9 to help increase performance.

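           For example, the following command creates a pool with a single
           double-parity raidz group of five disks (the device names are
           illustrative):

             # zpool create tank raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0
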
       spare

           A special pseudo-vdev which keeps track of available hot spares for
           a pool. For more information, see the "Hot Spares" section.

       log

           A separate intent log device. If more than one log device is
           specified, then writes are load-balanced between devices. Log
           devices can be mirrored. However, raidz vdev types are not
           supported for the intent log. For more information, see the
           "Intent Log" section.

       cache

           A device used to cache storage pool data. A cache device cannot be
           configured as a mirror or raidz group. For more information, see
           the "Cache Devices" section.


       Virtual devices cannot be nested, so a mirror or raidz virtual device
       can only contain files or disks. Mirrors of mirrors (or other
       combinations) are not allowed.

       A pool can have any number of virtual devices at the top of the
       configuration (known as "root vdevs"). Data is dynamically distributed
       across all top-level devices to balance data among devices. As new
       virtual devices are added, ZFS automatically places data on the newly
       available devices.

       Virtual devices are specified one at a time on the command line,
       separated by whitespace. The keywords "mirror" and "raidz" are used to
       distinguish where a group ends and another begins. For example, the
       following creates two root vdevs, each a mirror of two disks:

         # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0

   Device Failure and Recovery
       ZFS supports a rich set of mechanisms for handling device failure and
       data corruption. All metadata and data is checksummed, and ZFS
       automatically repairs bad data from a good copy when corruption is
       detected.

       In order to take advantage of these features, a pool must make use of
       some form of redundancy, using either mirrored or raidz groups. While
       ZFS supports running in a non-redundant configuration, where each root
       vdev is simply a disk or file, this is strongly discouraged. A single
       case of bit corruption can render some or all of your data unavailable.

       A pool's health status is described by one of three states: online,
       degraded, or faulted. An online pool has all devices operating
       normally. A degraded pool is one in which one or more devices have
       failed, but the data is still available due to a redundant
       configuration. A faulted pool has corrupted metadata, or one or more
       faulted devices, and insufficient replicas to continue functioning.

       The health of a top-level vdev, such as a mirror or raidz device, is
       potentially impacted by the state of its associated vdevs, or component
       devices. A top-level vdev or component device is in one of the
       following states:

       DEGRADED

           One or more top-level vdevs is in the degraded state because one or
           more component devices are offline. Sufficient replicas exist to
           continue functioning.

           One or more component devices is in the degraded or faulted state,
           but sufficient replicas exist to continue functioning. The
           underlying conditions are as follows:

               o      The number of checksum errors exceeds acceptable levels
                      and the device is degraded as an indication that
                      something may be wrong. ZFS continues to use the device
                      as necessary.

               o      The number of I/O errors exceeds acceptable levels. The
                      device could not be marked as faulted because there are
                      insufficient replicas to continue functioning.

       FAULTED

           One or more top-level vdevs is in the faulted state because one or
           more component devices are offline. Insufficient replicas exist to
           continue functioning.

           One or more component devices is in the faulted state, and
           insufficient replicas exist to continue functioning. The underlying
           conditions are as follows:

               o      The device could be opened, but the contents did not
                      match expected values.

               o      The number of I/O errors exceeds acceptable levels and
                      the device is faulted to prevent further use of the
                      device.

       OFFLINE

           The device was explicitly taken offline by the "zpool offline"
           command.

       ONLINE

           The device is online and functioning.

       REMOVED

           The device was physically removed while the system was running.
           Device removal detection is hardware-dependent and may not be
           supported on all platforms.

       UNAVAIL

           The device could not be opened. If a pool is imported when a device
           was unavailable, then the device will be identified by a unique
           identifier instead of its path, since the path was never correct in
           the first place.


       If a device is removed and later re-attached to the system, ZFS
       attempts to put the device online automatically. Device attach
       detection is hardware-dependent and might not be supported on all
       platforms.

   Hot Spares
       ZFS allows devices to be associated with pools as "hot spares". These
       devices are not actively used in the pool, but when an active device
       fails, it is automatically replaced by a hot spare. To create a pool
       with hot spares, specify a "spare" vdev with any number of devices. For
       example,

         # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0

308       Spares  can  be shared across multiple pools, and can be added with the
309       "zpool add" command and removed with the "zpool remove" command. Once a
310       spare  replacement  is  initiated, a new "spare" vdev is created within
311       the configuration that will remain there until the original  device  is
312       replaced.  At  this  point,  the  hot  spare becomes available again if
313       another device fails.
314
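       For example, a hot spare can be added to or removed from an existing
       pool as follows (the device name is illustrative):

         # zpool add pool spare c4d0
         # zpool remove pool c4d0
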
       If a pool has a shared spare that is currently being used, the pool
       cannot be exported, since other pools may use this shared spare, which
       may lead to potential data corruption.

       An in-progress spare replacement can be cancelled by detaching the hot
       spare. If the original faulted device is detached, then the hot spare
       assumes its place in the configuration, and is removed from the spare
       list of all active pools.

       Spares cannot replace log devices.

   Intent Log
       The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
       transactions. For instance, databases often require their transactions
       to be on stable storage devices when returning from a system call. NFS
       and other applications can also use fsync() to ensure data stability.
       By default, the intent log is allocated from blocks within the main
       pool. However, it might be possible to get better performance using
       separate intent log devices such as NVRAM or a dedicated disk. For
       example:

         # zpool create pool c0d0 c1d0 log c2d0

       Multiple log devices can also be specified, and they can be mirrored.
       See the EXAMPLES section for an example of mirroring multiple log
       devices.

       Log devices can be added, replaced, attached, detached, imported, and
       exported as part of the larger pool. Mirrored log devices can be
       removed by specifying the top-level mirror for the log.

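       For example, the following command creates a pool with a mirrored pair
       of log devices (the device names are illustrative):

         # zpool create pool c0d0 c1d0 log mirror c2d0 c3d0
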
   Cache Devices
       Devices can be added to a storage pool as "cache devices." These
       devices provide an additional layer of caching between main memory and
       disk. For read-heavy workloads, where the working set size is much
       larger than what can be cached in main memory, using cache devices
       allows much more of this working set to be served from low-latency
       media. Using cache devices provides the greatest performance
       improvement for random read workloads of mostly static content.

       To create a pool with cache devices, specify a "cache" vdev with any
       number of devices. For example:

         # zpool create pool c0d0 c1d0 cache c2d0 c3d0

       Cache devices cannot be mirrored or part of a raidz configuration. If a
       read error is encountered on a cache device, that read I/O is reissued
       to the original storage pool device, which might be part of a mirrored
       or raidz configuration.

       The content of the cache devices is considered volatile, as is the case
       with other system caches.

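       Cache devices can also be added to, or removed from, an existing pool.
       For example (the device name is illustrative):

         # zpool add pool cache c4d0
         # zpool remove pool c4d0
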
   Processes
       Each imported pool has an associated process, named zpool-poolname. The
       threads in this process are the pool's I/O processing threads, which
       handle the compression, checksumming, and other tasks for all I/O
       associated with the pool. This process exists to provide visibility
       into the CPU utilization of the system's storage pools. The existence
       of this process is an unstable interface.

   Properties
       Each pool has several properties associated with it. Some properties
       are read-only statistics while others are configurable and change the
       behavior of the pool. The following are read-only properties:

       alloc

           Amount of storage space within the pool that has been physically
           allocated.

       capacity

           Percentage of pool space used. This property can also be referred
           to by its shortened column name, "cap".

       dedupratio

           The deduplication ratio achieved for a pool, expressed as a
           multiplier. Deduplication can be turned on by entering the command:

             # zfs set dedup=on dataset

           The default value is off.

           dedupratio is expressed as a single decimal number. For example, a
           dedupratio value of 1.76 indicates that 1.76 units of data were
           stored but only 1 unit of disk space was actually consumed.

       free

           Number of blocks within the pool that are not allocated.

       guid

           A unique identifier for the pool.

       health

           The current health of the pool. Health can be "ONLINE", "DEGRADED",
           "FAULTED", "OFFLINE", "REMOVED", or "UNAVAIL".

       size

           Total size of the storage pool.

       These space usage properties report actual physical space available to
       the storage pool. The physical space can be different from the total
       amount of space that any contained datasets can actually use. The
       amount of space used in a raidz configuration depends on the
       characteristics of the data being written. In addition, ZFS reserves
       some space for internal accounting that the zfs(1M) command takes into
       account, but the zpool command does not. For non-full pools of a
       reasonable size, these effects should be invisible. For small pools, or
       pools that are close to being completely full, these discrepancies may
       become more noticeable. The following property can be set at creation
       time:

       ashift

           Pool sector size, expressed as a base-2 exponent (internally
           referred to as "ashift"). I/O operations will be aligned to the
           specified size boundaries. Additionally, the minimum (disk) write
           size will be set to the specified size, so this represents a space
           vs. performance trade-off. The typical case for setting this
           property is when performance is important and the underlying disks
           use 4KiB sectors but report 512B sectors to the OS (for
           compatibility reasons); in that case, set ashift=12 (which is
           1<<12 = 4096). Since most large disks have had 4K sectors since
           2011, ZFS defaults to ashift=12 for all disks larger than 512 GB.

           For optimal performance, the pool sector size should be greater
           than or equal to the sector size of the underlying disks. Since the
           property cannot be changed after pool creation, if in a given pool
           you ever want to use drives that report 4KiB sectors, you must set
           ashift=12 at pool creation time.

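           For example, the following command creates a pool aligned for
           4KiB-sector disks (the device names are illustrative):

             # zpool create -o ashift=12 tank mirror c0t0d0 c1t0d0
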
       The following property can be set at creation time and import time:

       altroot

           Alternate root directory. If set, this directory is prepended to
           any mount points within the pool. This can be used when examining
           an unknown pool where the mount points cannot be trusted, or in an
           alternate boot environment, where the typical paths are not valid.
           altroot is not a persistent property. It is valid only while the
           system is up. Setting altroot defaults to using cachefile=none,
           though this may be overridden using an explicit setting.

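       For example, the following command imports a pool with its mount
       points prepended with /mnt:

         # zpool import -R /mnt pool
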
       The following properties can be set at creation time and import time,
       and later changed with the zpool set command:

       autoexpand=on | off

           Controls automatic pool expansion when the underlying LUN is grown.
           If set to on, the pool will be resized according to the size of the
           expanded device. If the device is part of a mirror or raidz, then
           all devices within that mirror/raidz group must be expanded before
           the new space is made available to the pool. The default behavior
           is off. This property can also be referred to by its shortened
           column name, expand.

       autoreplace=on | off

           Controls automatic device replacement. If set to "off", device
           replacement must be initiated by the administrator by using the
           "zpool replace" command. If set to "on", any new device found in
           the same physical location as a device that previously belonged to
           the pool is automatically formatted and replaced. The default
           behavior is "off". This property can also be referred to by its
           shortened column name, "replace".

       bootfs=pool/dataset

           Identifies the default bootable dataset for the root pool. This
           property is expected to be set mainly by the installation and
           upgrade programs.

       cachefile=path | none

           Controls the location where the pool configuration is cached.
           Discovering all pools on system startup requires a cached copy of
           the configuration data that is stored on the root file system. All
           pools in this cache are automatically imported when the system
           boots. Some environments, such as install and clustering, need to
           cache this information in a different location so that pools are
           not automatically imported. Setting this property caches the pool
           configuration in a different location that can later be imported
           with "zpool import -c". Setting it to the special value "none"
           creates a temporary pool that is never cached, and the special
           value '' (empty string) uses the default location.

           Multiple pools can share the same cache file. Because the kernel
           destroys and recreates this file when pools are added and removed,
           care should be taken when attempting to access this file. When the
           last pool using a cachefile is exported or destroyed, the file is
           removed.

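       For example, the following commands create a pool whose configuration
       is cached in an alternate file, and later import it from that cache
       (the path and device name are illustrative):

         # zpool create -o cachefile=/etc/zfs/alt.cache pool c0d0
         # zpool import -c /etc/zfs/alt.cache pool
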
       delegation=on | off

           Controls whether a non-privileged user is granted access based on
           the dataset permissions defined on the dataset. See zfs(1M) for
           more information on ZFS delegated administration.

       failmode=wait | continue | panic

           Controls the system behavior in the event of catastrophic pool
           failure. This condition is typically a result of a loss of
           connectivity to the underlying storage device(s) or a failure of
           all devices within the pool. The behavior of such an event is
           determined as follows:

           wait

               Blocks all I/O access to the pool until the device connectivity
               is recovered and the errors are cleared. A pool remains in the
               wait state until the device issue is resolved. This is the
               default behavior.

           continue

               Returns EIO to any new write I/O requests but allows reads to
               any of the remaining healthy devices. Any write requests that
               have yet to be committed to disk are blocked.

           panic

               Prints out a message to the console and generates a system
               crash dump.

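       For example, the failure mode of an existing pool can be changed as
       follows:

         # zpool set failmode=continue pool
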
       listsnaps=on | off

           Controls whether information about snapshots associated with this
           pool is output when "zfs list" is run without the -t option. The
           default value is "off".

       version=version

           The current on-disk version of the pool. This can be increased, but
           never decreased. The preferred method of updating pools is with the
           "zpool upgrade" command, though this property can be used when a
           specific version is needed for backwards compatibility. This
           property can be any number between 1 and the current version
           reported by "zpool upgrade -v".

   Subcommands
       All subcommands that modify state are logged persistently to the pool
       in their original form.

       The zpool command provides subcommands to create and destroy storage
       pools, add capacity to storage pools, and provide information about the
       storage pools. The following subcommands are supported:

       zpool -?

           Displays a help message.

       zpool add [-fn] [-o property=value] pool vdev ...

           Adds the specified virtual devices to the given pool. The vdev
           specification is described in the "Virtual Devices" section. The
           behavior of the -f option, and the device checks performed, are
           described in the "zpool create" subcommand.

           -f

               Forces use of vdevs, even if they appear in use or specify a
               conflicting replication level. Not all devices can be
               overridden in this manner.

           -n

               Displays the configuration that would be used without actually
               adding the vdevs. The actual pool creation can still fail due
               to insufficient privileges or device sharing.

           -o property=value

               Sets the given pool properties. See the "Properties" section
               for a list of valid properties that can be set. The only
               property supported at the moment is "ashift".

           Do not add a disk that is currently configured as a quorum device
           to a zpool. After a disk is in the pool, that disk can then be
           configured as a quorum device.

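           For example, the following command adds a new mirrored top-level
           vdev to an existing pool (the device names are illustrative):

             # zpool add pool mirror c2t0d0 c3t0d0
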
       zpool attach [-f] [-o property=value] pool device new_device

           Attaches new_device to an existing zpool device. The existing
           device cannot be part of a raidz configuration. If device is not
           currently part of a mirrored configuration, device automatically
           transforms into a two-way mirror of device and new_device. If
           device is part of a two-way mirror, attaching new_device creates a
           three-way mirror, and so on. In either case, new_device begins to
           resilver immediately.

           -f

               Forces use of new_device, even if it appears to be in use. Not
               all devices can be overridden in this manner.

           -o property=value

               Sets the given pool properties. See the "Properties" section
               for a list of valid properties that can be set. The only
               property supported at the moment is "ashift".

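           For example, the following command converts the single device
           c0t0d0 into a two-way mirror (the device names are illustrative):

             # zpool attach pool c0t0d0 c1t0d0
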
       zpool clear [-F [-n]] pool [device] ...

           Clears device errors in a pool. If no arguments are specified, all
           device errors within the pool are cleared. If one or more devices
           is specified, only those errors associated with the specified
           device or devices are cleared.

           -F

               Initiates recovery mode for an unopenable pool. Attempts to
               discard the last few transactions in the pool to return it to
               an openable state. Not all damaged pools can be recovered by
               using this option. If successful, the data from the discarded
               transactions is irretrievably lost.

           -n

               Used in combination with the -F flag. Check whether discarding
               transactions would make the pool openable, but do not actually
               discard any transactions.

       zpool create [-fn] [-o property=value] ... [-O file-system-property=value]
       ... [-m mountpoint] [-R root] pool vdev ...

           Creates a new storage pool containing the virtual devices specified
           on the command line. The pool name must begin with a letter, and
           can only contain alphanumeric characters as well as underscore
           ("_"), dash ("-"), and period ("."). The pool names mirror, raidz,
           spare, and log are reserved, as are names beginning with the
           pattern c[0-9]. The vdev specification is described in the "Virtual
           Devices" section.

           The command verifies that each device specified is accessible and
           not currently in use by another subsystem. There are some uses,
           such as being currently mounted, or specified as the dedicated dump
           device, that prevent a device from ever being used by ZFS. Other
           uses, such as having a preexisting UFS file system, can be
           overridden with the -f option.

           The command also checks that the replication strategy for the pool
           is consistent. An attempt to combine redundant and non-redundant
           storage in a single pool, or to mix disks and files, results in an
           error unless -f is specified. The use of differently sized devices
           within a single raidz or mirror group is also flagged as an error
           unless -f is specified.

           Unless the -R option is specified, the default mount point is
           "/pool". The mount point must not exist or must be empty, or else
           the root dataset cannot be mounted. This can be overridden with the
           -m option.

           -f

               Forces use of vdevs, even if they appear in use or specify a
               conflicting replication level. Not all devices can be
               overridden in this manner.

           -n

               Displays the configuration that would be used without actually
               creating the pool. The actual pool creation can still fail due
               to insufficient privileges or device sharing.

           -o property=value [-o property=value] ...

               Sets the given pool properties. See the "Properties" section
               for a list of valid properties that can be set.

           -O file-system-property=value
           [-O file-system-property=value] ...

               Sets the given file system properties in the root file system
               of the pool. See the "Properties" section of zfs(1M) for a list
               of valid properties that can be set.

           -R root

               Equivalent to "-o cachefile=none,altroot=root".

           -m mountpoint

               Sets the mount point for the root dataset. The default mount
               point is "/pool" or "altroot/pool" if altroot is specified. The
               mount point must be an absolute path, "legacy", or "none". For
               more information on dataset mount points, see zfs(1M).

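           For example, the following command creates a mirrored pool with an
           alternate mount point and compression enabled on its root file
           system (the device names and path are illustrative):

             # zpool create -m /export/tank -O compression=on tank mirror c0t0d0 c1t0d0
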
       zpool destroy [-f] pool

           Destroys the given pool, freeing up any devices for other use. This
           command tries to unmount any active datasets before destroying the
           pool.

           -f

               Forces any active datasets contained within the pool to be
               unmounted.

       zpool detach pool device

           Detaches device from a mirror. The operation is refused if there
           are no other valid replicas of the data.

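           For example, to detach one side of a two-way mirror in the pool
           "tank" (the device name is illustrative):

             # zpool detach tank c0t1d0

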
       zpool export [-f] pool ...

           Exports the given pools from the system. All devices are marked as
           exported, but are still considered in use by other subsystems. The
           devices can be moved between systems (even those of different
           endianness) and imported as long as a sufficient number of devices
           are present.

           Before exporting the pool, all datasets within the pool are
           unmounted. A pool cannot be exported if it has a shared spare that
           is currently being used.

           For pools to be portable, you must give the zpool command whole
           disks, not just slices, so that ZFS can label the disks with
           portable EFI labels. Otherwise, disk drivers on platforms of
           different endianness will not recognize the disks.

           -f

               Forcefully unmount all datasets, using the "unmount -f"
               command.

               This command will forcefully export the pool even if it has a
               shared spare that is currently being used. This may lead to
               potential data corruption.



       zpool get "all" | property[,...] pool ...

           Retrieves the given list of properties (or all properties if "all"
           is used) for the specified storage pool(s). These properties are
           displayed with the following fields:

                    name          Name of storage pool
                    property      Property name
                    value         Property value
                    source        Property source, either 'default' or 'local'.

           See the "Properties" section for more information on the available
           pool properties.

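           For example, to display the size and health of the pool "tank":

             # zpool get size,health tank

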
       zpool history [-il] [pool] ...

           Displays the command history of the specified pools or all pools if
           no pool is specified.

           -i

               Displays internally logged ZFS events in addition to user
               initiated events.


           -l

               Displays log records in long format, which, in addition to the
               standard format, includes the user name, the hostname, and the
               zone in which the operation was performed.


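           For example, to show long-format log records for the pool "tank",
           including internally logged events:

             # zpool history -il tank

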
       zpool import [-d dir | -c cachefile] [-D]

           Lists pools available to import. If the -d option is not specified,
           this command searches for devices in "/dev/dsk". The -d option can
           be specified multiple times, and all directories are searched. If
           the device appears to be part of an exported pool, this command
           displays a summary of the pool with the name of the pool, a numeric
           identifier, as well as the vdev layout and current health of the
           device for each device or file. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) are not
           listed unless the -D option is specified.

           The numeric identifier is unique, and can be used instead of the
           pool name when multiple exported pools of the same name are
           available.

           -c cachefile

               Reads configuration from the given cachefile that was created
               with the "cachefile" pool property. This cachefile is used
               instead of searching for devices.


           -d dir

               Searches for devices or files in dir. The -d option can be
               specified multiple times.


           -D

               Lists destroyed pools only.



       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
       cachefile] [-D] [-f] [-R root] [-F [-n]] -a

           Imports all pools found in the search directories. Identical to the
           previous command, except that all pools with a sufficient number of
           devices available are imported. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) will not be
           imported unless the -D option is specified.

           -o mntopts

               Comma-separated list of mount options to use when mounting
               datasets within the pool. See zfs(1M) for a description of
               dataset properties and mount options.


           -o property=value

               Sets the specified property on the imported pool. See the
               "Properties" section for more information on the available pool
               properties.


           -c cachefile

               Reads configuration from the given cachefile that was created
               with the "cachefile" pool property. This cachefile is used
               instead of searching for devices.


           -d dir

               Searches for devices or files in dir. The -d option can be
               specified multiple times. This option is incompatible with the
               -c option.


           -D

               Imports destroyed pools only. The -f option is also required.


           -f

               Forces import, even if the pool appears to be potentially
               active.


           -F

               Recovery mode for a non-importable pool. Attempt to return the
               pool to an importable state by discarding the last few
               transactions. Not all damaged pools can be recovered by using
               this option. If successful, the data from the discarded
               transactions is irretrievably lost. This option is ignored if
               the pool is importable or already imported.


           -a

               Searches for and imports all pools found.


           -R root

               Sets the "cachefile" property to "none" and the "altroot"
               property to "root".


           -n

               Used with the -F recovery option. Determines whether a non-
               importable pool can be made importable again, but does not
               actually perform the pool recovery. For more details about pool
               recovery mode, see the -F option, above.



       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
       cachefile] [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]

           Imports a specific pool. A pool can be identified by its name or
           the numeric identifier. If newpool is specified, the pool is
           imported using the name newpool. Otherwise, it is imported with the
           same name as its exported name.

           If a device is removed from a system without running "zpool export"
           first, the device appears as potentially active. It cannot be
           determined if this was a failed export, or whether the device is
           really in use from another host. To import a pool in this state,
           the -f option is required.

           -o mntopts

               Comma-separated list of mount options to use when mounting
               datasets within the pool. See zfs(1M) for a description of
               dataset properties and mount options.


           -o property=value

               Sets the specified property on the imported pool. See the
               "Properties" section for more information on the available pool
               properties.


           -c cachefile

               Reads configuration from the given cachefile that was created
               with the "cachefile" pool property. This cachefile is used
               instead of searching for devices.


           -d dir

               Searches for devices or files in dir. The -d option can be
               specified multiple times. This option is incompatible with the
               -c option.


           -D

               Imports a destroyed pool. The -f option is also required.


           -f

               Forces import, even if the pool appears to be potentially
               active.


           -F

               Recovery mode for a non-importable pool. Attempt to return the
               pool to an importable state by discarding the last few
               transactions. Not all damaged pools can be recovered by using
               this option. If successful, the data from the discarded
               transactions is irretrievably lost. This option is ignored if
               the pool is importable or already imported.


           -R root

               Sets the "cachefile" property to "none" and the "altroot"
               property to "root".


           -n

               Used with the -F recovery option. Determines whether a non-
               importable pool can be made importable again, but does not
               actually perform the pool recovery. For more details about pool
               recovery mode, see the -F option, above.



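           For example, to import the exported pool "tank" under a new name
           (the new name is illustrative):

             # zpool import tank newtank

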
       zpool iostat [-T u | d] [-v] [pool] ... [interval [count]]

           Displays I/O statistics for the given pools. When given an
           interval, the statistics are printed every interval seconds until
           Ctrl-C is pressed. If no pools are specified, statistics for every
           pool in the system are shown. If count is specified, the command
           exits after count reports are printed.

           -T u | d

               Display a time stamp.

               Specify u for a printed representation of the internal
               representation of time. See time(2). Specify d for standard
               date format. See date(1).


           -v

               Verbose statistics. Reports usage statistics for individual
               vdevs within the pool, in addition to the pool-wide statistics.



       zpool list [-H] [-o property[,...]] [pool] ...

           Lists the given pools along with a health status and space usage.
           When given no arguments, all pools in the system are listed.

           -H

               Scripted mode. Do not display headers, and separate fields by a
               single tab instead of arbitrary space.


           -o property[,...]

               Comma-separated list of properties to display. See the
               "Properties" section for a list of valid properties. The
               default list is name, size, allocated, free, capacity, health,
               altroot.


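           For example, to list pool names and sizes in scripted mode, with
           tab-separated fields and no headers:

             # zpool list -H -o name,size

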
       zpool offline [-t] pool device ...

           Takes the specified physical device offline. While the device is
           offline, no attempt is made to read or write to the device.

           This command is not applicable to spares or cache devices.

           -t

               Temporary. Upon reboot, the specified physical device reverts
               to its previous state.

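           For example, to take a device in the pool "tank" offline until the
           next reboot (the device name is illustrative):

             # zpool offline -t tank c0t0d0

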
       zpool online [-e] pool device ...

           Brings the specified physical device online.

           This command is not applicable to spares or cache devices.

           -e

               Expand the device to use all available space. If the device is
               part of a mirror or raidz then all devices must be expanded
               before the new space will become available to the pool.



       zpool remove pool device ...

           Removes the specified device from the pool. This command currently
           only supports removing hot spares, cache, and log devices. A
           mirrored log device can be removed by specifying the top-level
           mirror for the log. Non-log devices that are part of a mirrored
           configuration can be removed using the "zpool detach" command.
           Non-redundant and raidz devices cannot be removed from a pool.

       zpool replace [-f] pool old_device [new_device]

           Replaces old_device with new_device. This is equivalent to
           attaching new_device, waiting for it to resilver, and then
           detaching old_device.

           The size of new_device must be greater than or equal to the minimum
           size of all the devices in a mirror or raidz configuration.

           new_device is required if the pool is not redundant. If new_device
           is not specified, it defaults to old_device. This form of
           replacement is useful after an existing disk has failed and has
           been physically replaced. In this case, the new disk may have the
           same /dev/dsk path as the old device, even though it is actually a
           different disk. ZFS recognizes this.

           -f

               Forces use of new_device, even if it appears to be in use. Not
               all devices can be overridden in this manner.



       zpool scrub [-s] pool ...

           Begins a scrub. The scrub examines all data in the specified pools
           to verify that it checksums correctly. For replicated (mirror or
           raidz) devices, ZFS automatically repairs any damage discovered
           during the scrub. The "zpool status" command reports the progress
           of the scrub and summarizes the results of the scrub upon
           completion.

           Scrubbing and resilvering are very similar operations. The
           difference is that resilvering only examines data that ZFS knows to
           be out of date (for example, when attaching a new device to a
           mirror or replacing an existing device), whereas scrubbing examines
           all data to discover silent errors due to hardware faults or disk
           failure.

           Because scrubbing and resilvering are I/O-intensive operations, ZFS
           only allows one at a time. If a scrub is already in progress, the
           "zpool scrub" command terminates it and starts a new scrub. If a
           resilver is in progress, ZFS does not allow a scrub to be started
           until the resilver completes.

           -s

               Stop scrubbing.

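           For example, to start a scrub of the pool "tank", and later stop
           it before it completes:

             # zpool scrub tank
             # zpool scrub -s tank

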
       zpool set property=value pool

           Sets the given property on the specified pool. See the "Properties"
           section for more information on what properties can be set and
           acceptable values.

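           For example, to enable automatic expansion on the pool "tank"
           (autoexpand is one of the settable pool properties; see the
           "Properties" section):

             # zpool set autoexpand=on tank

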
       zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool
       newpool [device ...]

           Splits off one disk from each mirrored top-level vdev in a pool and
           creates a new pool from the split-off disks. The original pool must
           be made up of one or more mirrors and must not be in the process of
           resilvering. The split subcommand chooses the last device in each
           mirror vdev unless overridden by a device specification on the
           command line.

           When using a device argument, split includes the specified
           device(s) in a new pool and, should any devices remain unspecified,
           assigns the last device in each mirror vdev to that pool, as it
           does normally. If you are uncertain about the outcome of a split
           command, use the -n ("dry-run") option to ensure your command will
           have the effect you intend.

           -R altroot

               Automatically import the newly created pool after splitting,
               using the specified altroot parameter for the new pool's
               alternate root. See the altroot description in the "Properties"
               section, above.


           -n

               Displays the configuration that would be created without
               actually splitting the pool. The actual pool split could still
               fail due to insufficient privileges or device status.


           -o mntopts

               Comma-separated list of mount options to use when mounting
               datasets within the pool. See zfs(1M) for a description of
               dataset properties and mount options. Valid only in conjunction
               with the -R option.


           -o property=value

               Sets the specified property on the new pool. See the
               "Properties" section, above, for more information on the
               available pool properties.

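           For example, to preview a split of the mirrored pool "tank" into a
           new pool (the new pool name is illustrative), and then perform it:

             # zpool split -n tank tank2
             # zpool split tank tank2

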
       zpool status [-xv] [pool] ...

           Displays the detailed health status for the given pools. If no pool
           is specified, then the status of each pool in the system is
           displayed. For more information on pool and device health, see the
           "Device Failure and Recovery" section.

           If a scrub or resilver is in progress, this command reports the
           percentage done and the estimated time to completion. Both of these
           are only approximate, because the amount of data in the pool and
           the other workloads on the system can change.

           -x

               Only display status for pools that are exhibiting errors or are
               otherwise unavailable.


           -v

               Displays verbose data error information, printing out a
               complete list of all data errors since the last complete pool
               scrub.



       zpool upgrade

           Displays all pools formatted using a different ZFS on-disk version.
           Older versions can continue to be used, but some features may not
           be available. These pools can be upgraded using "zpool upgrade -a".
           Pools that are formatted with a more recent version are also
           displayed, although these pools will be inaccessible on the system.


       zpool upgrade -v

           Displays ZFS versions supported by the current software. The
           current ZFS versions and all previous supported versions are
           displayed, along with an explanation of the features provided with
           each version.


       zpool upgrade [-V version] -a | pool ...

           Upgrades the given pool to the latest on-disk version. Once this is
           done, the pool will no longer be accessible on systems running
           older versions of the software.

           -a

               Upgrades all pools.


           -V version

               Upgrade to the specified version. If the -V flag is not
               specified, the pool is upgraded to the most recent version.
               This option can only be used to increase the version number,
               and only up to the most recent version supported by this
               software.

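           For example, to upgrade the pool "tank" to a specific on-disk
           version rather than the most recent one (the version number is
           illustrative):

             # zpool upgrade -V 21 tank
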

EXAMPLES

       Example 1 Creating a RAID-Z Storage Pool

       The following command creates a pool with a single raidz root vdev that
       consists of six disks.

         # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0


       Example 2 Creating a Mirrored Storage Pool

       The following command creates a pool with two mirrors, where each
       mirror contains two disks.

         # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0


       Example 3 Creating a ZFS Storage Pool by Using Slices

       The following command creates an unmirrored pool using two disk slices.

         # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4


       Example 4 Creating a ZFS Storage Pool by Using Files

       The following command creates an unmirrored pool using files. While not
       recommended, a pool based on files can be useful for experimental
       purposes.

         # zpool create tank /path/to/file/a /path/to/file/b


       Example 5 Adding a Mirror to a ZFS Storage Pool

       The following command adds two mirrored disks to the pool "tank",
       assuming the pool is already made up of two-way mirrors. The additional
       space is immediately available to any datasets within the pool.

         # zpool add tank mirror c1t0d0 c1t1d0


       Example 6 Listing Available ZFS Storage Pools

       The following command lists all available pools on the system.

         # zpool list
         NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
         pool    136G   109M   136G     0%  3.00x  ONLINE  -
         rpool  67.5G  12.6G  54.9G    18%  1.01x  ONLINE  -

       Example 7 Listing All Properties for a Pool

       The following command lists all the properties for a pool.

         % zpool get all pool
         NAME  PROPERTY       VALUE       SOURCE
         pool  size           136G        -
         pool  capacity       0%          -
         pool  altroot        -           default
         pool  health         ONLINE      -
         pool  guid           15697759092019394988  default
         pool  version        21          default
         pool  bootfs         -           default
         pool  delegation     on          default
         pool  autoreplace    off         default
         pool  cachefile      -           default
         pool  failmode       wait        default
         pool  listsnapshots  off         default
         pool  autoexpand     off         default
         pool  dedupratio     3.00x       -
         pool  free           136G        -
         pool  allocated      109M        -


       Example 8 Destroying a ZFS Storage Pool

       The following command destroys the pool "tank" and any datasets
       contained within.

         # zpool destroy -f tank

       Example 9 Exporting a ZFS Storage Pool

       The following command exports the devices in pool tank so that they can
       be relocated or later imported.

         # zpool export tank


       Example 10 Importing a ZFS Storage Pool

       The following command displays available pools, and then imports the
       pool "tank" for use on the system.

       The results from this command are similar to the following:

         # zpool import
           pool: tank
             id: 7678868315469843843
          state: ONLINE
         action: The pool can be imported using its name or numeric identifier.
         config:

                 tank        ONLINE
                   mirror-0  ONLINE
                     c1t2d0  ONLINE
                     c1t3d0  ONLINE

         # zpool import tank


       Example 11 Upgrading All ZFS Storage Pools to the Current Version

       The following command upgrades all ZFS storage pools to the current
       version of the software.

         # zpool upgrade -a
         This system is currently running ZFS pool version 19.

         All pools are formatted using this version.

       Example 12 Managing Hot Spares

       The following command creates a new pool with an available hot spare:

         # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0

       If one of the disks were to fail, the pool would be reduced to the
       degraded state. The failed device can be replaced using the following
       command:

         # zpool replace tank c0t0d0 c0t3d0

       Once the data has been resilvered, the spare is automatically removed
       and is made available should another device fail. The hot spare can be
       permanently removed from the pool using the following command:

         # zpool remove tank c0t2d0

       Example 13 Creating a ZFS Pool with Mirrored Separate Intent Logs

       The following command creates a ZFS storage pool consisting of two
       two-way mirrors and mirrored log devices:

         # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
            c4d0 c5d0


       Example 14 Adding Cache Devices to a ZFS Pool

       The following command adds two disks for use as cache devices to a ZFS
       storage pool:

         # zpool add pool cache c2d0 c3d0

       Once added, the cache devices gradually fill with content from main
       memory. Depending on the size of your cache devices, it could take over
       an hour for them to fill. Capacity and reads can be monitored using the
       "zpool iostat" command as follows:

         # zpool iostat -v pool 5

       Example 15 Removing a Mirrored Log Device

       The following command removes the mirrored log device mirror-2.

       Given this configuration:

            pool: tank
           state: ONLINE
           scrub: none requested
         config:

                  NAME        STATE     READ WRITE CKSUM
                  tank        ONLINE       0     0     0
                    mirror-0  ONLINE       0     0     0
                      c6t0d0  ONLINE       0     0     0
                      c6t1d0  ONLINE       0     0     0
                    mirror-1  ONLINE       0     0     0
                      c6t2d0  ONLINE       0     0     0
                      c6t3d0  ONLINE       0     0     0
                  logs
                    mirror-2  ONLINE       0     0     0
                      c4t0d0  ONLINE       0     0     0
                      c4t1d0  ONLINE       0     0     0

       The command to remove the mirrored log mirror-2 is:

         # zpool remove tank mirror-2

       Example 16 Recovering a Faulted ZFS Pool

       If a pool is faulted but recoverable, a message indicating this state
       is provided by "zpool status" if the pool was cached (see the cachefile
       property above), or as part of the error output from a failed "zpool
       import" of the pool.

       Recover a cached pool with the "zpool clear" command:

         # zpool clear -F data
         Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
         Discarded approximately 29 seconds of transactions.

       If the pool configuration was not cached, use "zpool import" with the
       recovery mode flag:

         # zpool import -F data
         Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
         Discarded approximately 29 seconds of transactions.


EXIT STATUS

       The following exit values are returned:

       0

           Successful completion.


       1

           An error occurred.


       2

           Invalid command line options were specified.


ATTRIBUTES

       See attributes(5) for descriptions of the following attributes:


       ┌─────────────────────────────┬─────────────────────────────┐
       │      ATTRIBUTE TYPE         │      ATTRIBUTE VALUE        │
       ├─────────────────────────────┼─────────────────────────────┤
       │Availability                 │SUNWzfsu                     │
       ├─────────────────────────────┼─────────────────────────────┤
       │Interface Stability          │Committed                    │
       └─────────────────────────────┴─────────────────────────────┘


SEE ALSO

       zfs(1M), attributes(5)



SunOS 5.11                        4 Jan 2010                         zpool(1M)