metadb(1M)              System Administration Commands             metadb(1M)



NAME
       metadb - create and delete replicas of the metadevice state database

SYNOPSIS
       /sbin/metadb -h


       /sbin/metadb [-s setname]


       /sbin/metadb [-s setname] -a [-f] [-k system-file] mddbnn


       /sbin/metadb [-s setname] -a [-f] [-k system-file]
            [-c number] [-l length] slice...


       /sbin/metadb [-s setname] -d [-f] [-k system-file] mddbnn


       /sbin/metadb [-s setname] -d [-f] [-k system-file] slice...


       /sbin/metadb [-s setname] -i


       /sbin/metadb [-s setname] -p [-k system-file]
            [mddb.cf-file]
DESCRIPTION
       The metadb command creates and deletes replicas of the metadevice
       state database. State database replicas can be created on dedicated
       slices, or on slices that will later become part of a simple
       metadevice (concatenation or stripe) or RAID5 metadevice. Do not
       place state database replicas on fabric-attached storage, SANs, or
       other storage that is not directly attached to the system and
       available at the same point in the boot process as traditional SCSI
       or IDE drives. See NOTES.


       The metadevice state database contains the configuration of all
       metadevices and hot spare pools in the system. Additionally, the
       metadevice state database keeps track of the current state of
       metadevices and hot spare pools, and their components. Solaris
       Volume Manager automatically updates the metadevice state database
       when a configuration or state change occurs. A submirror failure is
       an example of a state change. Creating a new metadevice is an
       example of a configuration change.


       The metadevice state database is actually a collection of multiple,
       replicated database copies. Each copy, referred to as a replica, is
       subject to strict consistency checking to ensure correctness.


       Replicated databases have an inherent problem in determining which
       database has valid and correct data. To solve this problem, Volume
       Manager uses a majority consensus algorithm. This algorithm requires
       that a majority of the database replicas be available before any of
       them are declared valid. This algorithm strongly encourages the
       presence of at least three initial replicas, which you create. A
       consensus can then be reached as long as at least two of the three
       replicas are available. If there is only one replica and the system
       crashes, it is possible that all metadevice configuration data can
       be lost.


       The majority consensus algorithm is conservative in the sense that
       it fails if a majority consensus cannot be reached, even if one
       replica actually does contain the most up-to-date data. This
       approach guarantees that stale data is never accidentally used,
       regardless of the failure scenario. The majority consensus algorithm
       enforces the following rules: the system stays running with exactly
       half or more of the replicas available; the system panics when fewer
       than half the replicas are available; and the system will not reboot
       without one more than half the total replicas.
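
       The three thresholds above follow from simple integer arithmetic.
       The following shell sketch (illustrative only, not part of metadb;
       the replica count of 6 is an arbitrary example) computes them:

       ```shell
       # Illustrative only: compute the quorum thresholds described above
       # for a hypothetical configuration of 6 replicas.
       total=6
       half=$((total / 2))
       run_min=$half              # stays running with half (or more) available
       boot_min=$((half + 1))     # reboot requires one more than half
       echo "run_min=$run_min boot_min=$boot_min"
       # prints: run_min=3 boot_min=4
       ```

       With six replicas, the system keeps running while three are
       available, panics below three, and needs four to reboot; this is why
       the guideline of at least three initial replicas lets any single
       failure be tolerated.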


       When used with no options, the metadb command gives a short form of
       the status of the metadevice state database. Use metadb -i for an
       explanation of the flags field in the output.


       The initial state database is created using the metadb command with
       both the -a and -f options, followed by the slice where the replica
       is to reside. The -a option specifies that a replica (in this case,
       the initial state database) should be created. The -f option forces
       the creation to occur, even though a state database does not yet
       exist. (The -a and -f options should be used together only when no
       state databases exist.)


       Additional replicas beyond those initially created can be added to
       the system. They contain the same information as the existing
       replicas, and help to prevent the loss of the configuration
       information. Loss of the configuration makes operation of the
       metadevices impossible. To create additional replicas, use the
       metadb -a command, followed by the name of the new slice(s) where
       the replicas will reside. All replicas that are located on the same
       slice must be created at the same time.


       To delete all replicas that are located on the same slice, use the
       metadb -d command, followed by the slice name.


       When used with the -i option, metadb displays the status of the
       metadevice state databases. The status can change if a hardware
       failure occurs or when state databases have been added or deleted.


       To fix a replica in an error state, delete the replica and add it
       back again.
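
       For example, assuming a failed replica resides on the hypothetical
       slice c0t2d0s4 (slice name illustrative only), it could be deleted
       and re-added as follows:

         # metadb -d c0t2d0s4
         # metadb -a c0t2d0s4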


       The metadevice state database (mddb) also contains a list of the
       replica locations for this set (local or shared diskset).


       The local set mddb can also contain host and drive information for
       each of the shared disksets of which this node is a member. Other
       than the diskset host and drive information stored in the local set
       mddb, the local and shared diskset mddbs are functionally identical.


       The mddbs are written to during the resync of a mirror, or during a
       component failure or configuration change. A configuration change or
       failure can also occur on a single replica (removal of an mddb or a
       failed disk), in which case the other replicas are updated with this
       failure information.

OPTIONS
       Root privileges are required for all of the following options except
       -h and -i.


       The following options can be used with the metadb command. Not all
       the options are compatible on the same command line. Refer to the
       SYNOPSIS to see the supported combinations of options.

       -a               Attach a new database device. The
                        /kernel/drv/md.conf file is automatically updated
                        with the new information, and the /etc/lvm/mddb.cf
                        file is updated as well. An alternate way to create
                        replicas is by defining them in the /etc/lvm/md.tab
                        file and specifying the assigned name on the
                        command line in the form mddbnn, where nn is a
                        two-digit number given to the replica definitions.
                        Refer to the md.tab(4) man page for instructions on
                        setting up replicas in that file.


       -c number        Specifies the number of replicas to be placed on
                        each device. The default number of replicas is 1.
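
                        For example, the following invocation (with purely
                        illustrative slice names) would place two replicas
                        on each of two slices:

                          # metadb -a -c 2 c2t0d0s3 c2t1d0s3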


       -d               Deletes all replicas that are located on the
                        specified slice. The /kernel/drv/md.conf file is
                        automatically updated with the new information, and
                        the /etc/lvm/mddb.cf file is updated as well.


       -f               The -f option is used to create the initial state
                        database. It is also used to force the deletion of
                        replicas below the minimum of one. (The -a and -f
                        options should be used together only when no state
                        databases exist.)


       -h               Displays a usage message.


       -i               Inquires about the status of the replicas. The
                        output of the -i option includes characters in
                        front of the device name that represent the status
                        of the state database. Explanations of the
                        characters are displayed following the replica
                        status and are as follows:

                        d    replica does not have an associated device ID


                        o    replica was active prior to the last mddb
                             configuration change


                        u    replica is up to date


                        l    locator for this replica was read successfully


                        c    replica's location was in /etc/lvm/mddb.cf


                        p    replica's location was patched in the kernel


                        m    replica is the master; this is the replica
                             selected as input


                        r    replica does not have device relocation
                             information


                        t    tagged data is associated with the replica


                        W    replica has device write errors


                        a    replica is active; commits are occurring to
                             this replica


                        M    replica had a problem with master blocks


                        D    replica had a problem with data blocks


                        F    replica had format problems


                        S    replica is too small to hold the current
                             database


                        R    replica had device read errors


                        B    tagged data associated with the replica is
                             not valid

       -k system-file   Specifies the name of the kernel file where the
                        replica information should be written. The default
                        system-file is /kernel/drv/md.conf. This option is
                        for use with the local diskset only.

       -l length        Specifies the size of each replica, in blocks. The
                        default length is 8192 blocks, which should be
                        appropriate for most configurations. Replica sizes
                        of less than 128 blocks are not recommended.
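
                        For example, the following invocation (slice name
                        illustrative only) would create a replica with an
                        explicit length of 8192 blocks:

                          # metadb -a -l 8192 c3t0d0s3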


       -p               Specifies updating the system file
                        (/kernel/drv/md.conf) with entries from the
                        /etc/lvm/mddb.cf file. This option is normally used
                        to update a newly built system before it is booted
                        for the first time. If the system has been built on
                        a system other than the one where it will run, the
                        location of the mddb.cf on the local machine can be
                        passed as an argument. The system file to be
                        updated can be changed using the -k option. This
                        option is for use with the local diskset only.


       -s setname       Specifies the name of the diskset on which the
                        metadb command will work. Using the -s option
                        causes the command to perform its administrative
                        function within the specified diskset. Without this
                        option, the command performs its function on local
                        database replicas.


       slice            Specifies the logical name of the physical slice
                        (partition), such as /dev/dsk/c0t0d0s3.

EXAMPLES
       Example 1 Creating Initial State Database Replicas


       The following example creates the initial state database replicas
       on a new system.


         # metadb -a -f c0t0d0s7 c0t1d0s3 c1t0d0s7 c1t1d0s3


       The -a and -f options force the creation of the initial database
       and replicas. You could then create metadevices with these same
       slices, making efficient use of the system.


       Example 2 Adding Two Replicas on Two New Disks


       This example shows how to add two replicas on two new disks that
       have been connected to a system currently running Volume Manager.


         # metadb -a c0t2d0s3 c1t1d0s3

       Example 3 Deleting Two Replicas


       This example shows how to delete two replicas from the system.
       Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and
       /dev/dsk/c1t1d0s3.


         # metadb -d c0t2d0s3 c1t1d0s3


       Although you can delete all replicas, you should never do so while
       metadevices still exist. Removing all replicas causes existing
       metadevices to become inoperable.

FILES
       /etc/lvm/mddb.cf       Contains the location of each copy of the
                              metadevice state database.


       /etc/lvm/md.tab        Workspace file for metadevice database
                              configuration.


       /kernel/drv/md.conf    Contains database replica information for
                              all metadevices on a system. Also contains
                              Solaris Volume Manager configuration
                              information.

EXIT STATUS
       The following exit values are returned:

       0     successful completion


       >0    an error occurred

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:


       ┌─────────────────────────────┬─────────────────────────────┐
       │       ATTRIBUTE TYPE        │       ATTRIBUTE VALUE       │
       ├─────────────────────────────┼─────────────────────────────┤
       │Availability                 │SUNWmdr                      │
       ├─────────────────────────────┼─────────────────────────────┤
       │Interface Stability          │Stable                       │
       └─────────────────────────────┴─────────────────────────────┘

SEE ALSO
       mdmonitord(1M), metaclear(1M), metadetach(1M), metahs(1M),
       metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M),
       metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M),
       metaset(1M), metassist(1M), metastat(1M), metasync(1M),
       metattach(1M), md.cf(4), md.tab(4), mddb.cf(4), attributes(5),
       md(7D)

NOTES
       Replicas cannot be stored on fabric-attached storage, SANs, or
       other storage that is not directly attached to the system. Replicas
       must be on storage that is available at the same point in the boot
       process as traditional SCSI or IDE drives. A replica can be stored
       on a:

           o  Dedicated local disk partition

           o  Local partition that will be part of a volume

           o  Local partition that will be part of a UFS logging device



SunOS 5.11                      26 Mar 2006                       metadb(1M)