raidctl(1M)             System Administration Commands            raidctl(1M)

NAME
     raidctl - RAID hardware utility

SYNOPSIS
     raidctl -C "disks" [-r raid_level] [-z capacity] [-s stripe_size] [-f]
          controller

     raidctl -d [-f] volume

     raidctl -F filename [-f] controller...

     raidctl -a {set | unset} -g disk {volume | controller}

     raidctl -p "param=value" [-f] volume

     raidctl -c [-f] [-r raid_level] disk1 disk2 [disk3...]

     raidctl -l -g disk controller

     raidctl -l volume

     raidctl -l controller...

     raidctl [-l]

     raidctl -S [volume | controller]

     raidctl -S -g disk controller

     raidctl -h


DESCRIPTION
     The raidctl utility is a hardware RAID configuration tool that supports
     different RAID controllers by providing a command-line interface (CLI)
     that end users can use to create, delete, or display RAID volumes. The
     utility can also be used to set properties of a volume, assign
     hot-spare (HSP) disks to volumes or controllers, and update
     firmware/fcode/BIOS for RAID controllers.

     The raidctl utility requires privileges that are controlled by the
     underlying file-system permissions. Only privileged users can
     manipulate the RAID system configuration. If a non-privileged user
     attempts to run raidctl, the command fails with an exit status of 1.

     The raidctl utility, as described in this man page, defines a broad
     set of command line options to provide management for full-featured
     RAID controllers. However, support for a given option depends on two
     elements:

         o  the presence of a software driver

         o  the firmware level of the RAID device

     The dependency on a software driver is due to the design of raidctl.
     The utility is built on a common library that enables the insertion of
     plug-in modules for different drivers. Currently, the Solaris
     operating system ships with a plug-in for the mpt driver. This plug-in
     does not support all of the raidctl options. On a given storage
     device, options might be further limited by the device's firmware
     level.

     The level of support for the various raidctl options cannot be
     determined by raidctl. The user must rely on the documentation for the
     RAID controller or hardware platform.

     Currently, raidctl provides some level of support for the following
     RAID controllers:

         o  LSI1020 SCSI HBA

         o  LSI1030 SCSI HBA

         o  LSI1064 SAS HBA

         o  LSI1068 SAS HBA

     All of the above HBAs are maintained by the mpt driver, on x86 (32-
     and 64-bit) and SPARC platforms.

OPTIONS
     The following options are supported:

     -C "disks" [-r raid_level] [-z capacity] [-s stripe_size] [-f]
     controller

         Create a RAID volume using the specified disks.

         When creating a RAID volume using this option, the identity of
         the newly created volume is automatically generated and raidctl
         reports it to the user.

         The argument specified by this option contains the elements used
         to form the volume that will be created. Elements can be either
         disks or sub-volumes, where disks are separated by spaces and a
         sub-volume is a set of disks grouped by parentheses. All disks
         should be in C.ID.L expression (for example, 0.1.2 represents
         the physical disk on channel 0, target id 1, logical unit number
         2). The argument must match the RAID level specified by the -r
         option, or the default level (RAID 1) if -r is omitted. This
         means the argument can only be:

         for RAID 0

             At least 2 disks

         for RAID 1

             Only 2 disks

         for RAID 1E

             At least 3 disks

         for RAID 5

             At least 3 disks

         for RAID 10

             At least 2 sub-volumes, each of which must be formed by 2
             disks

         for RAID 50

             At least 2 sub-volumes, each of which must be formed by at
             least 3 disks, and each sub-volume must contain the same
             number of disks

         For example, the expression "0.0.0 0.1.0" means that the 2
         specified disks form a RAID volume, which can be either a RAID 0
         or a RAID 1 volume. "(0.0.0 0.1.0)(0.2.0 0.3.0)" means that the
         first 2 disks and the last 2 disks form 2 sub-volumes, and that
         these 2 sub-volumes form a RAID 10 volume. See the EXAMPLES
         section for more samples.
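
         The disk-count rules above can be checked mechanically. The
         following sh sketch is illustrative only (the helper names
         count_disks and min_disks are hypothetical and not part of
         raidctl); it counts the C.ID.L tokens in a "disks" argument and
         reports the minimum disk count for the flat RAID levels listed
         above:

```shell
#!/bin/sh
# Hypothetical helpers, not part of raidctl: pure string handling,
# no RAID hardware is touched.

# count_disks "0.0.0 0.1.0" -> number of whitespace-separated
# C.ID.L tokens in the argument
count_disks() {
    set -- $1          # intentional word splitting on spaces
    echo $#
}

# min_disks LEVEL -> minimum disk count for a flat RAID level,
# per the list above (RAID 10/50 use nested sub-volumes instead)
min_disks() {
    case $1 in
        0|1)  echo 2 ;;
        1E|5) echo 3 ;;
        *)    echo "unsupported level: $1" >&2; return 1 ;;
    esac
}
```

         For instance, count_disks "0.0.0 0.1.0 0.2.0" prints 3, which
         satisfies the minimum reported by min_disks 5.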

         The -r option specifies the RAID level of the volume that will
         be created. Possible levels are 0, 1, 1E, 5, 10, and 50. If this
         option is omitted, raidctl creates a RAID 1 volume by default.

         The -z option specifies the capacity of the volume that will be
         created. The unit can be terabytes, gigabytes, or megabytes (for
         example, 2t, 10g, 20m, and so on). If this option is omitted,
         raidctl calculates the maximum capacity of the volume that can
         be created from the specified disks and uses this value to
         create the volume.
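
         The unit suffixes accepted by -z can be normalized with ordinary
         shell arithmetic. The sketch below is illustrative only (the
         helper name to_megabytes is invented; raidctl performs any such
         conversion internally):

```shell
#!/bin/sh
# Hypothetical helper: convert a -z style capacity (2t, 10g, 20m)
# to megabytes.
to_megabytes() {
    num=${1%?}              # numeric part: strip the trailing unit letter
    unit=${1#"$num"}        # unit letter: what remains after the digits
    case $unit in
        t|T) echo $((num * 1024 * 1024)) ;;
        g|G) echo $((num * 1024)) ;;
        m|M) echo "$num" ;;
        *)   echo "unknown unit: $unit" >&2; return 1 ;;
    esac
}
```

         For example, to_megabytes 10g prints 10240.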

         The -s option specifies the stripe size of the volume that will
         be created. The possible values are 512, 1k, 2k, 4k, 8k, 16k,
         32k, 64k, or 128k. If this option is omitted, raidctl chooses an
         appropriate value for the volume (for example, 64k).

         In some cases, the creation of a RAID volume can cause data on
         the specified disks to be lost (for instance, on LSI1020,
         LSI1030, SAS1064, or SAS1068 HBAs), and raidctl prompts the user
         for confirmation before the creation. Use the -f option to force
         the volume creation without prompting the user for confirmation.

         The controller argument identifies the RAID controller to which
         the specified disks belong. The -l option can be used to list
         the controller's ID number.

     -d [-f] volume

         Delete the RAID volume specified as volume. The volume is
         specified in canonical form (for example, c0t0d0).

         When a volume is deleted, all data is lost. Therefore, unless
         the -f option is specified, raidctl prompts the user for
         confirmation before deleting the volume.

         When a RAID 1 volume is deleted from an LSI1020, LSI1030,
         SAS1064, or SAS1068 HBA, the primary and secondary disks are
         "split". If the volume was in SYNCING state, the primary will
         contain the data and the secondary will not. If the volume state
         was OPTIMAL, both disks will contain a complete image of the
         data.

     -F filename [-f] controller...

         Update the firmware running on the specified controller(s). The
         raidctl utility prompts the user for confirmation of this
         action, unless the -f option is provided.

     -a {set | unset} -g disk {volume | controller}

         If a volume is specified, raidctl sets or unsets the disk as a
         local hot-spare disk dedicated to that volume, depending on the
         value specified by the -a option. If a controller is specified,
         raidctl sets or unsets the disk as a global hot-spare disk.

     -p "param=value" [-f] volume

         Change a property value for the given RAID volume. This option
         can be used to change the cache write policy or to activate a
         volume. When changing the cache write policy, param should be
         the string wp (SET_WR_POLICY), and value can be either on or
         off. When used to activate a volume, param should be state and
         value should be activate.

         Changing a RAID volume's property can affect the internal
         behavior of the RAID controller, so raidctl prompts the user for
         confirmation before applying the change, unless the -f option is
         specified.

     -c [-f] [-r raid_level] disk1 disk2 [disk3...]

         Create a volume using the specified disks. This is an
         alternative to the -C option with similar functionality. This
         option is preserved for compatibility reasons, but it only works
         with LSI1020, LSI1030, SAS1064, and SAS1068 HBAs to create RAID
         0, RAID 1, or RAID 1E volumes. For other HBAs, the user can only
         use the -C option.

         The -r option can be used to specify the RAID level of the
         target volume. If the -r option is omitted, raidctl creates a
         RAID 1 volume.

         Disks must be specified in Solaris canonical format (for
         example, c0t0d0).

         Creating a RAID 1 volume with this option replaces the contents
         of disk2 with the contents of disk1.

         When the user creates a RAID volume with this option, the RAID
         volume assumes the identity of disk1. Other disks become
         invisible and the RAID volume appears as one disk.

         Creating a volume with this option is interactive by default.
         The user must answer a prompt affirmatively to create the
         volume. Use the -f option to force the volume creation without
         prompting the user for confirmation.

     -l -g disk controller

         Display information about the specified disk of the given
         controller. The output includes the following information:

         Disk

             Displays the disk in C.ID.L expression.

         Vendor

             Displays the vendor ID string.

         Product

             Displays the product ID string.

         Capacity

             Displays the total capacity of the disk.

         Status

             Displays the current status of the disk. The status can be
             either "GOOD" (operating normally), "FAILED"
             (non-functional), or "MISSING" (disk not present).

         HSP

             Indicates whether the disk has been set as a global
             hot-spare disk, a local hot-spare disk, or a normal one. If
             it is a local hot-spare disk, all volumes to which this disk
             is assigned are displayed.

         GUID

             GUID string for the specified disk. This is an additional
             datum and might be unavailable in some cases.

     -l volume

         Display information about the specified volume. The output
         includes the following information:

         Volume

             Displays the volume in canonical format.

         Sub

             Displays sub-volumes, if the specified volume is a RAID 10
             or RAID 50 volume.

         Disk

             Displays all disks that form the specified volume.

         Stripe Size

             Displays the stripe size of the volume.

         Status

             Displays the status of the specified volume, or of the
             sub-volumes or disks that form the specified volume. For an
             inactive volume, the status should be INACTIVE; otherwise it
             can be OPTIMAL (operating optimally), DEGRADED (operating
             with reduced functionality), FAILED (non-functional), or
             SYNC (disks are syncing). For a disk, the status can be
             GOOD, FAILED, or MISSING.

         Cache

             Indicates whether the cache is applied to I/O write
             activities. The cache can be either "ON" or "OFF".

         RAID level

             Displays the RAID level. The RAID level can be 0, 1, 1E, 5,
             10, or 50.

     -l controller...

         Display information about the specified controller(s). The
         output includes the following information:

         Controller

             Displays the RAID controller's ID number.

         Type

             Displays the RAID controller's product type.

         fw_version

             Displays the controller's firmware version.

     [-l]

         List all RAID-related objects that the raidctl utility can
         manipulate, including all available RAID controllers, RAID
         volumes, and physical disks. The -l option can be omitted.

         The output includes the following information:

         Controller

             Displays the RAID controller's ID number.

         Volume

             Displays the logical RAID volume name.

         Disk

             Displays the RAID disk in C.ID.L expression.

     -S [volume | controller]

         Takes a snapshot of the RAID configuration information,
         including all available RAID devices, RAID controllers, volumes,
         and disks.

         Each line of the output specifies a RAID device and its related
         information, separated by spaces. All volumes and disks belong
         to the last specified controller.

         The output lists the following information:

         Controller

             Displays the controller ID number, and the controller type
             string in double quotation marks.

         Volume

             Displays the RAID volume name, number of component disks,
             the C.ID.L expression of the component disks, the RAID
             level, and the status. The status can be OPTIMAL, DEGRADED,
             FAILED, or SYNCING.

         Disk

             Displays the C.ID.L expression of the disk, and the status.
             The status can be GOOD, FAILED, or HSP (disk has been set as
             a stand-by disk).

         If a volume or a controller is specified, a snapshot is taken
         only of the information for the specified volume or controller.
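
         Because each snapshot line is a fixed sequence of
         space-separated fields ending in the status, the output lends
         itself to simple scripting. The sketch below is illustrative
         only (the helper name volume_status is invented, and it assumes
         the volume-line layout described above):

```shell
#!/bin/sh
# Hypothetical helper: return the last (status) field of a
# raidctl -S volume line such as
#   c1t1d0 2 0.2.0 0.3.0 1 DEGRADED
volume_status() {
    set -- $1            # intentional word splitting into fields
    eval echo "\${$#}"   # the status is always the last field
}
```

         For example, feeding the line "c1t1d0 2 0.2.0 0.3.0 1 DEGRADED"
         to volume_status prints DEGRADED.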

     -S -g disk controller

         Takes a snapshot of the information for the specified disk.

     -h

         Print the usage string.

EXAMPLES
     Example 1   Creating the RAID Configuration

     The following command creates a RAID 0 volume of 10G on controller
     0, with the stripe size set to 64k:

       # raidctl -C "0.0.0 0.2.0" -r 0 -z 10g -s 64k 0

     The following command creates a RAID 1 volume on controller 2:

       # raidctl -C "0.0.0 1.1.0" -r 1 2

     The following command creates a RAID 5 volume on controller 2:

       # raidctl -C "0.0.0 0.1.0 0.2.0" -r 5 2

     The following command creates a RAID 10 volume on controller 0:

       # raidctl -C "(0.0.0 0.1.0)(0.2.0 0.3.0)" -r 10 0

     The following command creates a RAID 50 volume on controller 0:

       # raidctl -C "(0.0.0 0.1.0 0.2.0)(0.3.0 0.4.0 0.5.0)" -r 50 0

     Example 2   Displaying the RAID Configuration

     The following command displays all available controllers, volumes,
     and disks:

       # raidctl -l

       Controller: 0
       Controller: 2
              Volume:c2t0d0
              Disk: 0.0.0
              Disk: 0.1.0
              Disk: 0.2.0
              Disk: 0.3.0(HSP)

     The following command displays information about controller 2:

       # raidctl -l 2

       Controller      Type                    Fw_version
       --------------------------------------------------------------
       c2              LSI 1030                1.03.39.00

     The following command displays information about the specified
     volume:

       # raidctl -l c2t0d0

       Volume      Size     Stripe   Status    Cache   RAID
           Sub              Size                       Level
               Disk
       --------------------------------------------------------------
       c2t0d0      10240M   64K      OPTIMAL   ON      RAID5
               0.0.0   5120M             GOOD
               0.1.0   5120M             GOOD
               0.2.0   5120M             GOOD

     The following command displays information about disk 0.0.0 on
     controller 0:

       # raidctl -l -g 0.0.0 0

       Disk    Vendor   Product          Firmware  Capacity  Status  HSP
       --------------------------------------------------------------------
       0.0.0   HITACHI  H101473SCSUN72G  SQ02      68.3G     GOOD    N/A
       GUID:2000000cca02536c

     Example 3   Deleting the RAID Configuration

     The following command deletes a volume:

       # raidctl -d c0t0d0

     Example 4   Updating Flash Images on the Controller

     The following command updates flash images on controller 0:

       # raidctl -F lsi_image.fw 0

     Example 5   Setting or Unsetting a Hot-Spare Disk

     The following command sets disk 0.3.0 on controller 2 as a global
     hot-spare disk:

       # raidctl -a set -g 0.3.0 2

     The following command sets disk 0.3.0 on controller 2 as a local
     hot-spare disk for volume c2t0d0:

       # raidctl -a set -g 0.3.0 c2t0d0

     The following command converts disk 0.3.0 on controller 2 from a
     global hot-spare disk to a normal one:

       # raidctl -a unset -g 0.3.0 2

     The following command removes disk 0.3.0 as a local hot-spare disk
     for volume c2t0d0:

       # raidctl -a unset -g 0.3.0 c2t0d0

     Example 6   Setting the Volume's Property

     The following command sets the write policy of the volume to "off":

       # raidctl -p "wp=off" c0t0d0

     Example 7   Creating Volumes with the -c Option

     The following command creates a RAID 1 volume:

       # raidctl -c c0t0d0 c0t1d0

     The following command creates a RAID 0 volume:

       # raidctl -c -r 0 c0t1d0 c0t2d0 c0t3d0

     Example 8   Taking a Snapshot of the RAID Configuration

     The following command takes a snapshot of all RAID devices:

       # raidctl -S

       1 "LSI 1030"
       c1t1d0 2 0.2.0 0.3.0 1 DEGRADED
       0.2.0 GOOD
       0.3.0 FAILED

     The following command takes a snapshot of volume c1t0d0:

       # raidctl -S c1t0d0

       c1t0d0 2 0.0.0 0.1.0 1 OPTIMAL

     The following command takes a snapshot of disk 0.1.0 on controller
     1:

       # raidctl -S -g 0.1.0 1

       0.1.0 GOOD


EXIT STATUS
     The following exit values are returned:

     0

         Successful completion.

     1

         Invalid command line input or permission denied.

     2

         The requested operation failed.

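     A calling script can branch on these values. The mapping below is an
     illustrative sketch only (the function name describe_status is
     invented; the messages paraphrase the list above), for use as in
     "raidctl -l; describe_status $?" on a system with RAID hardware:

```shell
#!/bin/sh
# Hypothetical helper: map a raidctl exit status to a description,
# mirroring the exit values documented above.
describe_status() {
    case $1 in
        0) echo "successful completion" ;;
        1) echo "invalid command line input or permission denied" ;;
        2) echo "requested operation failed" ;;
        *) echo "undocumented exit status: $1" ;;
    esac
}
```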

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     ┌─────────────────────────────┬─────────────────────────────┐
     │       ATTRIBUTE TYPE        │       ATTRIBUTE VALUE       │
     ├─────────────────────────────┼─────────────────────────────┤
     │Availability                 │SUNWcsu                      │
     ├─────────────────────────────┼─────────────────────────────┤
     │Interface Stability          │Committed                    │
     └─────────────────────────────┴─────────────────────────────┘

SEE ALSO
     attributes(5), mpt(7D)

     System Administration Guide: Basic Administration

WARNINGS
     Do not create RAID volumes on internal SAS disks if you are going to
     use the Solaris I/O multipathing feature (also known as MPxIO).
     Creating a new RAID volume under Solaris I/O multipathing gives your
     root device a new GUID, which does not match the GUID for the
     existing devices. This causes a boot failure, because the root
     device entry in /etc/vfstab no longer matches.

NOTES
     The -z option is not supported on systems that use the mpt driver
     and LSI RAID controllers.


SunOS 5.11                        5 Feb 2009                      raidctl(1M)