metaset(1M)             System Administration Commands             metaset(1M)

NAME

       metaset - configure disk sets

SYNOPSIS

       /usr/sbin/metaset -s setname [-M] -a -h hostname...

       /usr/sbin/metaset -s setname -A {enable | disable}

       /usr/sbin/metaset -s setname [-A {enable | disable}] -a -h hostname...

       /usr/sbin/metaset -s setname -a [-l length] [-L] drivename...

       /usr/sbin/metaset -s setname -C {take | release | purge}

       /usr/sbin/metaset -s setname -d [-f] -h hostname...

       /usr/sbin/metaset -s setname -d [-f] drivename...

       /usr/sbin/metaset -s setname -j

       /usr/sbin/metaset -s setname -r

       /usr/sbin/metaset -s setname -w

       /usr/sbin/metaset -s setname -t [-f] [-u tagnumber] [-y]

       /usr/sbin/metaset -s setname -b

       /usr/sbin/metaset -s setname -P

       /usr/sbin/metaset -s setname -q

       /usr/sbin/metaset -s setname -o [-h hostname]

       /usr/sbin/metaset [-s setname]

       /usr/sbin/metaset [-s setname] -a | -d
            [-m mediator_host_list]

DESCRIPTION

       The metaset command administers sets of disks in named disk sets.
       Named disk sets include any disk set that is not in the local set.
       While disk sets enable a high-availability configuration, Solaris
       Volume Manager itself does not actually provide a high-availability
       environment.

       A single-owner disk set configuration manages storage on a SAN or
       fabric-attached storage, or provides namespace control and state
       database replica management for a specified set of disks.

       In a shared disk set configuration, multiple hosts are physically
       connected to the same set of disks. When one host fails, another host
       has exclusive access to the disks. Each host can control a shared disk
       set, but only one host can control it at a time.

       When you add a new disk to any disk set, Solaris Volume Manager checks
       the disk format. If necessary, it repartitions the disk to ensure that
       the disk has an appropriately configured reserved slice 7 (or slice 6
       on an EFI labelled device) with adequate space for a state database
       replica. The precise size of slice 7 (or slice 6 on an EFI labelled
       device) depends on the disk geometry. For traditional disk sets, the
       slice is no less than 4 Mbytes, and probably closer to 6 Mbytes,
       depending on where the cylinder boundaries lie. For multi-owner disk
       sets, the slice is a minimum of 256 Mbytes. The minimum size for slice
       7 might change in the future. This change is based on a variety of
       factors, including the size of the state database replica and the
       information to be stored in the state database replica.

       For use in disk sets, disks must have a dedicated slice (six or seven)
       that meets specific criteria:

           o      The slice must start at sector 0

           o      The slice must include enough space for the disk label

           o      The state database replicas cannot be mounted

           o      The slice does not overlap with any other slices,
                  including slice 2

       If the existing partition table does not meet these criteria, or if
       the -L flag is specified, Solaris Volume Manager repartitions the
       disk. A small portion of each drive is reserved in slice 7 (or slice 6
       on an EFI labelled device) for use by Solaris Volume Manager. The
       remainder of the space on each drive is placed into slice 0. Any
       existing data on the disks is lost by repartitioning.

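       After a drive has been repartitioned, the resulting slice layout can
       be reviewed with prtvtoc(1M). For example, for a hypothetical drive
       c2t0d0 (the drive name is illustrative only):

         # prtvtoc /dev/rdsk/c2t0d0s2
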
       After you add a drive to a disk set, it can be repartitioned as
       necessary, with the exception that slice 7 (or slice 6 on an EFI
       labelled device) is not altered in any way.

       After a disk set is created and metadevices are set up within the set,
       the metadevice name is in the following form:

       /dev/md/setname/{dsk,rdsk}/dnumber

       where setname is the name of the disk set, and number is the number
       of the metadevice (0-127).

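       For example, volume d10 in a hypothetical disk set named relo-red
       would have the following block and raw device names:

         /dev/md/relo-red/dsk/d10
         /dev/md/relo-red/rdsk/d10
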
       If you have disk sets that you upgraded from Solstice DiskSuite
       software, the default state database replica size on those sets is
       1034 blocks, not the 8192-block size from Solaris Volume Manager.
       Also, slice 7 on the disks that were added under Solstice DiskSuite is
       correspondingly smaller than slice 7 on disks that were added under
       Solaris Volume Manager.

       If disks you add to a disk set have acceptable slice 7s (that start at
       cylinder 0 and that have sufficient space for the state database
       replica), they are not reformatted.

       Hot spare pools within local disk sets use standard Solaris Volume
       Manager naming conventions. Hot spare pools within shared disk sets
       use the following convention:

       setname/hot_spare_pool

       where setname is the name of the disk set, and hot_spare_pool is the
       name of the hot spare pool associated with the disk set.

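       For example, a hot spare pool named hsp001 in a hypothetical disk set
       named relo-red is referred to as:

         relo-red/hsp001
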
   Multi-node Environment
       To create and work with a disk set in a multi-node environment, root
       must be a member of Group 14 on all hosts, or the /.rhosts file must
       contain an entry for all other host names. This is not required in a
       Sun Cluster 3.x environment.

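       As a sketch only, assuming two hypothetical hosts named red and blue
       that are not using Group 14 membership, the /.rhosts file on red
       could contain a line naming the other host:

         blue
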
   Tagged data
       Tagged data occurs when there are different versions of a disk set's
       replicas. This tagged data consists of the set owner's nodename, the
       hardware serial number of the owner, and the time the data was written
       out to the available replicas. The system administrator can use this
       information to determine which replica contains the correct data.

       When a disk set is configured with an even number of storage
       enclosures and has replicas balanced across them evenly, it is
       possible that up to half of the replicas can be lost (for example,
       through a power failure of half of the storage enclosures). After the
       enclosure that went down is rebooted, half of the replicas are not
       recognized by SVM. When the set is retaken, the metaset command
       returns an error of "stale databases", and all of the metadevices are
       in a read-only state.

       Some of the replicas that are not recognized need to be deleted.
       Deleting these replicas also causes updates to the replicas that are
       not being deleted. In a dual-hosted disk set environment, the second
       node can access the deleted replicas instead of the existing replicas
       when it takes the set. This leads to the possibility of getting the
       wrong replica record on a disk set take. An error message is
       displayed, and user intervention is required.

       Use the -q option to query the disk set and the -t, -u, and -y
       options to select the tag and take the disk set. See OPTIONS.

   Mediator Configuration
       SVM provides support for a low-end HA solution consisting of two hosts
       that share only two strings of drives. The hosts in this type of
       configuration, referred to as mediators or mediator hosts, run a
       special daemon, rpc.metamedd(1M). The mediator hosts take on
       additional responsibilities to ensure that data is available in the
       case of host or drive failures.

       A mediator configuration can survive the failure of a single host or a
       single string of drives, without administrative intervention. If both
       a host and a string of drives fail (multiple failures), the integrity
       of the data cannot be guaranteed. At this point, administrative
       intervention is required to make the data accessible. See mediator(7D)
       for further details.

       Use the -m option to add or delete a mediator host. See OPTIONS.

OPTIONS

       The following options are supported:

       -a drivename

           Add drives or hosts to the named set. For a drive to be accepted
           into a set, the drive must not be in use within another
           metadevice or disk set, and must not be mounted on or swapped on.
           When the drive is accepted into the set, it is repartitioned and
           the metadevice state database replica (for the set) can be placed
           on it. However, if slice 7 (or slice 6 on an EFI labelled device)
           starts at cylinder 0 and is large enough to hold a state database
           replica, the disk is not repartitioned. Also, a drive is not
           accepted if it cannot be found on all hosts specified as part of
           the set. This means that if a host within the specified set is
           unreachable due to network problems, or is administratively down,
           the add fails.

           Specify a drive name in the form cnumtnumdnum. Do not specify a
           slice number (snum). For drives in a Sun Cluster, you must specify
           a complete pathname for each drive. Such a name has the form:

             /dev/did/[r]dsk/dnum

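           For example, in a Sun Cluster configuration, a drive could be
           added to a hypothetical disk set named relo-red by its DID path
           (the set and device names are illustrative):

             # metaset -s relo-red -a /dev/did/rdsk/d7
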
       -a | -d | -m mediator_host_list

           Add (-a) or delete (-d) mediator hosts to or from the specified
           disk set. A mediator_host_list is the nodename(4) of the mediator
           host to be added and (for adding) up to two other aliases for the
           mediator host. The nodename and aliases for each mediator host
           are separated only by commas. Up to three mediator hosts can be
           specified for the named disk set. To delete a mediator host,
           specify only the nodename of that host as the argument to -m.

           In a single metaset command you can add or delete up to three
           mediator hosts. See EXAMPLES.

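           For example, to remove a mediator host from a hypothetical disk
           set named mydiskset (the host name is illustrative):

             # metaset -s mydiskset -d -m myhost1
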
       -A {enable | disable}

           Specify auto-take status for a disk set. If auto-take is enabled
           for a set, the disk set is automatically taken at boot, and file
           systems on volumes within the disk set can be mounted through
           /etc/vfstab entries. Only a single host can be associated with an
           auto-take set, so attempts to add a second host to an auto-take
           set or attempts to configure a disk set with multiple hosts as
           auto-take fail with an error message. Disabling auto-take status
           for a specific disk set causes the disk set to revert to normal
           behavior. That is, the disk set is potentially shared
           (non-concurrently) among hosts, and unavailable for mounting
           through /etc/vfstab.

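           For example, auto-take could be enabled on a hypothetical
           single-host set named relo-red, after which a volume in that set
           could be mounted through an /etc/vfstab entry such as the
           following (all names are illustrative):

             # metaset -s relo-red -A enable

             /dev/md/relo-red/dsk/d10 /dev/md/relo-red/rdsk/d10 /export/data ufs 2 yes -
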
       -b

           Ensure that the replicas are distributed according to the replica
           layout algorithm. This option can be invoked at any time, and does
           nothing if the replicas are correctly distributed. In cases where
           the user has used the metadb command to manually remove or add
           replicas, this option can be used to ensure that the distribution
           of replicas matches the replica layout algorithm.

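           For example, to rebalance the replicas across the drives in a
           hypothetical set named relo-red:

             # metaset -s relo-red -b
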
       -C {take | release | purge}

           Do not interact with the Cluster Framework when used in a Sun
           Cluster 3 environment. In effect, this means do not modify the
           Cluster Configuration Repository. These options should only be
           used to fix a broken disk set configuration.

           take

               Take ownership of the disk set but do not inform the Cluster
               Framework that the disk set is available. This option is not
               for use with a multi-owner disk set.

           release

               Release ownership of the disk set without informing the
               Cluster Framework. This option should only be used if the
               disk set ownership was taken with the corresponding -C take
               option. This option is not for use with a multi-owner disk
               set.

           purge

               Remove the disk set without informing the Cluster Framework
               that the disk set has been purged. This option should only be
               used when the disk set is not accessible and requires
               rebuilding.

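           For example, while repairing a broken configuration, a
           hypothetical set named relo-red could be taken and later released
           without updating the Cluster Configuration Repository:

             # metaset -s relo-red -C take
             # metaset -s relo-red -C release
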
       -d drivename

           Delete drives or hosts from the named disk set. For a drive to be
           deleted, it must not be in use within the set. The last host
           cannot be deleted unless all of the drives within the set are
           deleted. Deleting the last host in a disk set destroys the disk
           set.

           Specify a drive name in the form cnumtnumdnum. Do not specify a
           slice number (snum). For drives in a Sun Cluster, you must specify
           a complete pathname for each drive. Such a name has the form:

             /dev/did/[r]dsk/dnum

           This option fails on a multi-owner disk set if attempting to
           withdraw the master node while other nodes are in the set.

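           For example, to remove an unused drive from a hypothetical set
           named relo-red (the drive name is illustrative):

             # metaset -s relo-red -d c2t5d0
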
       -f

           Force one of three actions to occur: takes ownership of a disk
           set when used with -t; deletes the last disk drive from the disk
           set; or deletes the last host from the disk set. Deleting the
           last drive or host from a disk set requires the -d option.

           When used to forcibly take ownership of the disk set, this causes
           the disk set to be grabbed whether or not another host owns the
           set. All of the disks within the set are taken over (reserved)
           and fail fast is enabled, causing the other host to panic if it
           had disk set ownership. The metadevice state database is read in
           by the host performing the take, and the shared metadevices
           contained in the set are accessible.

           You can use this option to delete the last drive in the disk set,
           because this drive would implicitly contain the last state
           database replica.

           You can use the -f option to delete hosts from a set. When
           specified with a partial list of hosts, it can be used for
           one-host administration. One-host administration could be useful
           when a host is known to be non-functional, thus avoiding timeouts
           and failed commands. When specified with a complete list of
           hosts, the set is completely deleted. It is generally specified
           with a complete list of hosts to clean up after one-host
           administration has been performed.

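           For example, on a hypothetical set named relo-red, ownership
           could be taken forcibly, or an unreachable host named blue could
           be forcibly removed (the names are illustrative):

             # metaset -s relo-red -t -f
             # metaset -s relo-red -d -f -h blue
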
       -h hostname...

           Specify one or more host names to be added to or deleted from a
           disk set. Adding the first host creates the set. The last host
           cannot be deleted unless all of the drives within the set have
           been deleted. The host name is not accepted if all of the drives
           within the set cannot be found on the specified host. The host
           name is the same name found in /etc/nodename.

       -j

           Join a host to the owner list for a multi-owner disk set. The
           concepts of take and release, used with traditional disk sets, do
           not apply to multi-owner sets, because multiple owners are
           allowed.

           As a host boots and is brought online, it must go through three
           configuration levels to be able to use a multi-owner disk set:

               1.     It must be included in the cluster nodelist, which
                      happens automatically in a cluster or single-node
                      situation.

               2.     It must be added to the multi-owner disk set with the
                      -a -h options documented elsewhere in this man page.

               3.     It must join the set. When the host is first added to
                      the set, it is automatically joined.

           On manual restarts, the administrator must manually issue

             metaset -s multinodesetname -j

           to join the host to the owner list. After the cluster
           reconfiguration, when the host reenters the cluster, the node is
           automatically joined to the set. The metaset -j command joins the
           host to all multi-owner sets that the host has been added to. In
           a single-node situation, joining the node to the disk set starts
           any necessary resynchronizations.

       -L

           When adding a disk to a disk set, force the disk to be
           repartitioned using the standard Solaris Volume Manager
           algorithm. See DESCRIPTION.

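           For example, to add a drive to a hypothetical set named relo-red
           and force it to be repartitioned (the drive name is
           illustrative):

             # metaset -s relo-red -a -L c3t0d0
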
       -l length

           Set the size (in blocks) for the metadevice state database
           replica. The length can only be set when adding a new drive; it
           cannot be changed on an existing drive. The default (and maximum)
           size is 8192 blocks, which should be appropriate for most
           configurations. Replica sizes of less than 128 blocks are not
           recommended.

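           For example, to add a drive to a hypothetical set named relo-red
           with a 4096-block replica size (the drive name is illustrative):

             # metaset -s relo-red -a -l 4096 c3t1d0
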
       -M

           Specify that the disk set to be created or modified is a
           multi-owner disk set that supports multiple concurrent owners.

           This option is required when creating a multi-owner disk set. Its
           use is optional on all other operations on a multi-owner disk set
           and has no effect. Existing disk sets cannot be converted to
           multi-owner sets.

       -o

           Return an exit status of 0 if the local host or the host
           specified with the -h option is the owner of the disk set.

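           For example, a script could test whether a hypothetical host
           named red currently owns the set relo-red (the names are
           illustrative):

             # metaset -s relo-red -o -h red && echo "red owns relo-red"
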
       -P

           Purge the named disk set from the node on which the metaset
           command is run. The disk set must not be owned by the node that
           runs this command. If the node does own the disk set, the command
           fails.

           If you need to delete a disk set but cannot take ownership of the
           set, use the -P option.

       -q

           Displays an enumerated list of tags pertaining to "tagged data"
           that can be encountered during a take of the ownership of a disk
           set.

           This option is not for use with a multi-owner disk set.

       -r

           Release ownership of a disk set. All of the disks within the set
           are released. The metadevices set up within the set are no longer
           accessible.

           This option is not for use with a multi-owner disk set.

       -s setname

           Specify the name of a disk set on which metaset works. If no
           setname is specified, all disk sets are returned.

       -t

           Take ownership of a disk set safely. If metaset finds that
           another host owns the set, this host is not allowed to take
           ownership of the set. If the set is not owned by any other host,
           all the disks within the set are owned by the host on which
           metaset was executed. The metadevice state database is read in,
           and the shared metadevices contained in the set become
           accessible. The -t option takes a disk set that has stale
           databases. When the databases are stale, metaset exits with code
           66, and prints a message. At that point, the only operations
           permitted are the addition and deletion of replicas. Once the
           addition or deletion of the replicas has been completed, the disk
           set should be released and retaken to gain full access to the
           data.

           This option is not for use with a multi-owner disk set.

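           A minimal sketch of this recovery sequence, assuming a
           hypothetical set named relo-red and an illustrative drive c2t3d0
           whose replicas are stale: take the set (metaset exits with code
           66 and grants limited access), delete the stale replicas with
           metadb(1M), then release and retake the set.

             # metaset -s relo-red -t
             # metadb -s relo-red -d c2t3d0s7
             # metaset -s relo-red -r
             # metaset -s relo-red -t
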
       -u tagnumber

           Once a tag has been selected, a subsequent take with -u tagnumber
           can be executed to select the data associated with the given
           tagnumber.

       -w

           Withdraw a host from the owner list for a multi-owner disk set.
           The concepts of take and release, used with traditional disk
           sets, do not apply to multi-owner sets, because multiple owners
           are allowed.

           Instead of releasing a set, a host can issue

             metaset -s multinodesetname -w

           to withdraw from the owner list. A host automatically withdraws
           on a reboot, but can be manually withdrawn if it should not be
           able to use the set, but should be able to rejoin at a later
           time. A host that withdrew due to a reboot can still appear
           joined from other hosts in the set until a reconfiguration cycle
           occurs.

           metaset -w withdraws from ownership of all multi-owner sets of
           which the host is a member. This option fails if you attempt to
           withdraw the master node while other nodes are in the disk set
           owner list. This option cancels all resyncs running on the node.
           A cluster reconfiguration process that is removing a node from
           the cluster membership list effectively withdraws the host from
           the ownership list.

       -y

           Execute a subsequent take. If the take operation encounters
           "tagged data," the take operation exits with code 2. You can then
           run the metaset command with the -q option to see an enumerated
           list of tags.

EXAMPLES

       Example 1 Defining a Disk Set

       This example defines a disk set.

         # metaset -s relo-red -a -h red blue

       The name of the disk set is relo-red. The names of the first and
       second hosts added to the set are red and blue, respectively. (The
       hostname is found in /etc/nodename.) Adding the first host creates
       the disk set. A disk set can be created with just one host, with the
       second added later. The last host cannot be deleted until all of the
       drives within the set have been deleted.

       Example 2 Adding Drives to a Disk Set

       This example adds drives to a disk set.

         # metaset -s relo-red -a c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

       The name of the previously created disk set is relo-red. The names of
       the drives are c2t0d0, c2t1d0, c2t2d0, c2t3d0, c2t4d0, and c2t5d0.
       There is no slice identifier ("sx") at the end of the drive names.

       Example 3 Adding Multiple Mediator Hosts

       The following command adds three mediator hosts to the specified disk
       set.

         # metaset -s mydiskset -a -m myhost1,alias1 myhost2,alias2 myhost3,alias3

       Example 4 Purging a Disk Set from the Node

       The following command purges the disk set relo-red from the node:

         # metaset -s relo-red -P

       Example 5 Querying a Disk Set for Tagged Data

       The following command queries the disk set relo-red for a list of the
       tagged data:

         # metaset -s relo-red -q

       This command produces the following results:

         The following tag(s) were found:
          1 - vha-1000c - Fri Sep 20 17:20:08 2002
          2 - vha-1000c - Mon Sep 23 11:01:27 2002

       Example 6 Selecting a Tag and Taking a Disk Set

       The following command selects a tag and takes the disk set relo-red:

         # metaset -s relo-red -t -u 2

       Example 7 Defining a Multi-Owner Disk Set

       The following command defines a multi-owner disk set:

         # metaset -s blue -M -a -h hahost1 hahost2

       The name of the disk set is blue. The names of the first and second
       hosts added to the set are hahost1 and hahost2, respectively. The
       hostname is found in /etc/nodename. Adding the first host creates the
       multi-owner disk set. A disk set can be created with just one host,
       with additional hosts added later. The last host cannot be deleted
       until all of the drives within the set have been deleted.

FILES

       /etc/lvm/md.tab

           Contains a list of metadevice configurations.

EXIT STATUS

       The following exit values are returned:

       0

           Successful completion.

       >0

           An error occurred.

ATTRIBUTES

       See attributes(5) for descriptions of the following attributes:

       ┌─────────────────────────────┬─────────────────────────────┐
       │      ATTRIBUTE TYPE         │      ATTRIBUTE VALUE        │
       ├─────────────────────────────┼─────────────────────────────┤
       │Availability                 │SUNWmdu                      │
       ├─────────────────────────────┼─────────────────────────────┤
       │Interface Stability          │Stable                       │
       └─────────────────────────────┴─────────────────────────────┘

SEE ALSO

       mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M),
       metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M),
       metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M),
       metaroot(1M), metassist(1M), metastat(1M), metasync(1M),
       metattach(1M), md.tab(4), md.cf(4), mddb.cf(4), attributes(5),
       md(7D), mediator(7D)

NOTES

       Disk set administration, including the addition and deletion of hosts
       and drives, requires all hosts in the set to be accessible from the
       network.

SunOS 5.11                        4 Mar 2009                       metaset(1M)