SVM – Get a disk out of maintenance mode

SVM shows a disk in maintenance mode like this:

d0: Mirror
Submirror 0: d10
State: Needs maintenance
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 62691300 blocks

d10: Submirror of d0
State: Needs maintenance
Invoke: metareplace d0 c1t0d0s0
Size: 62691300 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c1t0d0s0 0 No Maintenance

But format shows the disks as OK:
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cffa641b,0
1. c1t1d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cffa6512,0

And iostat -En shows the disk as clean:
c1t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: SEAGATE Product: ST336607FSUN36G Revision: 0207 Serial No: 0317A1JGSH
Size: 36.42GB <36418595328 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

This may be caused by the "metasync -r" command not getting executed when the system boots.

This metasync command is normally executed in one of the startup scripts run at boot time.

For Online: DiskSuite[TM] 1.0, the metasync command is located in the /etc/rc.local script. This entry is placed in that file by the metarc command.

For Solstice DiskSuite versions between 3.x and 4.2, inclusive, the metasync command is located in the /etc/rc2.d/S95SUNWmd.sync file.

For Solstice DiskSuite version 4.2.1 and above, the metasync command is located in the file /etc/rc2.d/S95lvm.sync.

In all cases, because this script is not run until the system transitions into run state 3 (multi-user mode), it is expected to see submirrors in a "Needs maintenance" state until the command has run. I/O to these metadevices works just fine while in this state, so there is no cause for concern.

You can also run metareplace -e d0 c1t0d0s0 (which does not actually replace the drive) and wait for the resync to complete.
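If the system is already up, you can also run the resync by hand and watch the submirror state recover. A minimal sketch, using the metadevice names from the output above:

metasync -r     (resyncs any mirrors that were left un-synced at boot)
metastat d0     (the submirror state should return to "Okay")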

SVM how to mirror a root disk

Before attempting this procedure, you must have first created at least 2 state databases (replicas) on unused slices using the metadb command.

metadb -a -f -c 2 c0t0d0s5 c1t0d0s5

The -a and -f options are used together to create the initial state database replicas. The -c 2 option puts two state database replicas on each specified slice, creating a total of four replicas. By spreading the state database replicas across controllers, you can increase metadevice performance and reliability.
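A quick way to confirm the replicas were created:

metadb     (should list four replicas, two on each of the slices given above)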

Also, it is assumed that you have formatted the second disk exactly like the original root disk (use prtvtoc and fmthard for ease). Each corresponding slice on each disk must be the same size.
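One way to copy the label, as a sketch (this assumes slice 2 is the standard full-disk backup slice on both drives):

prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2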

For this example, we will be mirroring all the slices on the root disk (c0t0d0) to another equally sized disk (c1t0d0). You will have to make the appropriate changes.

For each partition on your root disk (/, /usr, /var, /opt, /export/home, or any other partition you may have), the following commands must be run:

1.

For each slice, you must create 3 new metadevices: one for the existing slice, one for the slice on the mirrored disk, and one for the mirror. To do this, make the appropriate entries in the md.tab file. For example, for slice 0, we’ll create the following entries:

d10 1 1 /dev/dsk/c0t0d0s0
d20 1 1 /dev/dsk/c1t0d0s0
d0 -m d10

As an example for slice 1, we’ll create the following entries in the md.tab file:

d11 1 1 /dev/dsk/c0t0d0s1
d21 1 1 /dev/dsk/c1t0d0s1
d1 -m d11

Follow this example, creating groups of 3 entries for each slice on the root disk.
2.

Run the metainit command to create all the metadevices you have just defined in the md.tab file. If you use the -a option, all the metadevices defined in the md.tab will be created.

metainit -a -f

The -f is required because the slices on the root disk are currently mounted.
3.

Modify the entries in the /etc/vfstab file to reflect metadevices instead of slices. Start off by making a backup copy of the file:

cp /etc/vfstab /etc/vfstab.pre_sds

Then, edit the /etc/vfstab file to change each slice to the appropriate metadevice. Do not update the entry for the root filesystem; it will be changed by the metaroot command in step 4. You will need to edit the swap, /usr, /var, and any other slices in the same manner. For example, you would need to change the swap device line from

/dev/dsk/c0t0d0s1 - - swap - no -

to

/dev/md/dsk/d1 - - swap - no -

Make sure that you change the slice to the main mirror (in this case, d1) and not to the simple submirror (in this case, d11).
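As a further illustration (the slice and metadevice numbers here are hypothetical; this assumes /usr is on slice 6 and that you created d16, d26, and d6 for it in md.tab), the /usr line would change from

/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /usr ufs 1 no -

to

/dev/md/dsk/d6 /dev/md/rdsk/d6 /usr ufs 1 no -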
4.

Run the metaroot command for the metadevice you designated for the root mirror. In the example above, we created d0 to be the mirror device for the root partition, so we would run

metaroot d0

5.

For Solstice DiskSuite versions 4.1 or greater and Solaris[TM] 9 LVM, it is necessary to lock the filesystems before rebooting, so run

lockfs -fa

6.

Reboot the system. This step is necessary. Do not proceed without rebooting your system, or data corruption will occur.
7.

After the system has rebooted, you can verify that root and other slices are under DiskSuite’s control by running the

# df -k
# swap -l

commands. The outputs of these commands should reflect the metadevice names, not the slice names.
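For example, with the metadevices used above:

# df -k | grep /dev/md     (the root line should now show /dev/md/dsk/d0)
# swap -l                  (the swap file should now be /dev/md/dsk/d1)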
8.

Set the dump device to the correct device, using the command

# dumpadm -d swap

Verify that the dump device is set correctly to swap by running the command

# dumpadm

9.

Lastly, attach the second submirror to the metamirror device. This attachment, using the metattach command, must be done for each partition on the disk, and will start the syncing of data from the current root disk to the other. To continue our example, to attach the second submirror for root, enter the command:

metattach d0 d20

Even though this command returns to the shell prompt immediately, the syncing process has begun. To follow the progress of this syncing for this mirror, enter the command

metastat d0

Although you can run all the metattach commands one right after another, it is a good idea to run the next metattach command only after the first syncing has completed. This reduces the amount of head movement on the disk and can speed up the total time it takes to mirror all the slices on the disk.
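Continuing the example, the swap mirror defined earlier would be attached and watched the same way:

metattach d1 d21
metastat d1     (wait for this resync to finish before attaching the next submirror)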

Once you have attached all the submirrors to the metamirrors, and all the syncing has completed, your root disk is mirrored.

Recording the Alternate Boot Device Path

Determine the path to the alternate root device by using the ls -l command on the slice that is being attached as the second submirror to the root (/) mirror.

# ls -l /dev/rdsk/c1t3d0s0

lrwxrwxrwx 1 root root 55 Mar 5 12:54 /dev/rdsk/c1t3d0s0 -> ../../devices/sbus@1,f8000000/esp@1,200000/sd@3,0:a

Here you would record the string that follows the /devices directory:

/sbus@1,f8000000/esp@1,200000/sd@3,0:a.

Solaris Volume Manager users who are using a system with an OpenBoot[TM] PROM can use the OpenBoot nvalias command to define a backup root device alias for the secondary root (/) mirror. For example:

ok nvalias backup_root /sbus@1,f8000000/esp@1,200000/sd@3,0:a

Then, redefine the boot-device alias to reference both the primary and secondary submirrors, in the order in which you want them to be used, and store the configuration.

ok printenv boot-device

boot-device = disk net

ok setenv boot-device disk backup_root net

boot-device = disk backup_root net

ok nvstore

In the event of primary root disk failure, the system would automatically boot to the second submirror. Or, if you boot manually, rather than using auto boot, you would only enter:

ok boot backup_root

SVM – How to add a disk to a disk set

Expanding Disk Sets
How to Add Disks to a Disk Set
Caution:

Do not add disks larger than 1 TB to disk sets if you expect to run the Solaris software with a 32-bit kernel or if you expect to use a version of the Solaris OS prior to the Solaris 9 4/03 release. See Overview of Multi-Terabyte Support in Solaris Volume Manager for more information about multi-terabyte volume support in Solaris Volume Manager.

Only disks that meet the following conditions can be added to a disk set:

* The disk must not be in use in a volume or hot spare pool.

* The disk must not contain a state database replica.

* The disk must not be currently mounted, swapped on, or otherwise opened for use by an application.

Before You Begin

Check Guidelines for Working With Disk Sets.
Steps

1.

To add disks to a disk set, use one of the following methods:
* From the Enhanced Storage tool within the Solaris Management Console, open the Disk Sets node. Select the disk set that you want to modify. Then click the right mouse button and choose Properties. Select the Disks tab. Click Add Disk. Then follow the instructions in the wizard. For more information, see the online help.

* To add disks to a disk set from the command line, use the following form of the metaset command:

# metaset -s diskset-name -a disk-name

-s diskset-name    Specifies the name of the disk set on which the metaset command will work.

-a                 Adds disks to the named disk set.

disk-name          Specifies the disks to add to the disk set. Disk names are in the form cXtXdX. The "sX" slice identifier is not included when adding a disk to a disk set.

See the metaset(1M) man page for more information.

The first host to add a disk to a disk set becomes the owner of the disk set.
Caution:

Do not add a disk with data to a disk set. The process of adding a disk with data to a disk set might repartition the disk, destroying the data.
2.

Verify the status of the disk set and disks.

# metaset

Example: Adding a Disk to a Disk Set

# metaset -s blue -a c1t6d0
# metaset
Set name = blue, Set number = 1

Host Owner
host1 Yes

Drive Dbase
c1t6d0 Yes

In this example, the host name is host1. The shared disk set is blue. Only the disk, c1t6d0, has been added to the disk set blue.

Optionally, you could add multiple disks at once by listing each disk on the command line. For example, you could use the following command to add two disks to the disk set simultaneously:

# metaset -s blue -a c1t6d0 c2t6d0

disk replacement howto

Any disk replacement should only be attempted by qualified service personnel, or
with the assistance of a qualified person. This document is only intended as an
introduction or reminder of the multiple software/hardware considerations that
can affect a safe and successful disk replacement. Full documentation should be
consulted when anything is unclear: http://docs.sun.com
Considerations

When planning to replace a disk, determine the following:

1. Is the underlying hardware SCSI or fibre channel? Is it a Storage Area Network (SAN) environment?

2. Is the disk under logical volume management?

2a. What software or hardware is being used for logical volume management?

2b. Is the volume a stripe (RAID 0), concatenation (RAID 0), mirror (RAID 1), or RAID 5?

3. If the disk is not under volume management, are any partitions on the disk mounted? Disks should not be removed while there is a potential for I/O operations to occur to the disk.

4. Has the disk been allocated for raw usage (for example, by a database)?

5. Is the disk shared in a cluster environment? (Clusters have considerations for disks which may contain quorum information.)

Cluster Note: Details for cluster environments are outside the scope of this document. Reference the Cluster documentation, or get assistance from Sun Support Personnel.

The primary types of logical volume management used with Sun storage:
Software Volume Management

JBOD (Just a Bunch of Disks)

Mostly used with: internal disks, Sun StorEdge[TM] A5000 family, Sun StorEdge
D1000, Sun StorEdge D240, Netra[TM] st D130, Sun StorEdge S1, Sun StorEdge
UniPack, Sun StorEdge MultiPack, and others.

Disks may be mounted individually without any logical volume management, and
would show up in mount or df -k output in the form:

/dev/dsk/c#t#d#s#.

Commands to check:

# mount
# df -k

NOTE: if a disk is being used for raw access by a database, it will not appear in mount or df -k output. Confirm this with a system administrator familiar with the system.
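One quick (though not exhaustive) check is to see whether any process currently has the raw device open; the slice name here is only an example:

# fuser /dev/rdsk/c1t0d0s2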

Solaris[TM] Volume Manager (SVM), a.k.a. Solstice DiskSuite[TM] (SDS) or Online: DiskSuite[TM] (ODS)

Mostly used with: any JBOD disks

Commands to check:

# df -k Shows volumes mounted as /dev/md/dsk/d#
# metastat Lists volume configuration, shows disk states.
# /usr/opt/SUNWmd/sbin/metastat SDS or ODS may require the full path to the command.

Veritas Volume Manager (VxVM) aka Sun Enterprise Volume Manager[TM] (SEVM)

Mostly used with: any JBOD disks; automatically licensed by Sun if a Sun StorEdge A5000 family array is attached to the system

Commands to check:

# df -k Shows volumes mounted as /dev/vx/dsk/<volume> for rootdg,
or /dev/vx/dsk/<diskgroup>/<volume> for other disk groups.
# vxdisk list Lists disks, shows if they’re under VxVM control, shows their
current state. The "error" state indicates that the disk is
not under VxVM control.
# vxprint -ht Lists volume configuration.
# vxdiskadm Text-based menus for VxVM administration. Options 4 and 5 are
used to replace failed disks.

Hardware commands

Fibre channel (FC-AL or SAN) hardware may require additional commands to ensure
creation of proper device links to the disk world wide name (WWN). Fibre
channel disks require luxadm commands to remove the WWNs in the device path. In
Solaris 8 OS and later, adding, removing, or replacing a SCSI disk drive
requires the use of cfgadm (see cfgadm_scsi(1M) for more details). Consult the
documentation or man pages for further information.

Commands used:

# luxadm probe               Lists arrays that use the SES driver for array enclosure management
# luxadm display             Displays details of an array or its disks
# luxadm remove_device       Removes device paths to a fibre channel disk
# cfgadm -x replace_device   Starts the process of replacing a SCSI disk drive in Solaris 8 OS and later
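A typical SCSI disk replacement sequence looks roughly like this (the attachment point ID c1::dsk/c1t0d0 is only an example; use cfgadm -al to find the one for your disk):

# cfgadm -al                                 Lists attachment points and their current state
# cfgadm -x replace_device c1::dsk/c1t0d0    Prompts you through removing and inserting the drive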

Important documentation – more information on cfgadm:
http://docs.sun.com/db/doc/805-7228/6j6q7uesd?a=view#devconfig2-15
Hardware Volume Management

Raid Manager 6 (RM6)

Used with: Sun StorEdge A1000, Sun RSM Array[TM] 2000, Sun StorEdge A3000, Sun StorEdge A3500FC

Features: Hardware RAID controller, managed from the host by GUI or command line.

Commands to check:

# /usr/lib/osa/bin/rm6        Launches the management GUI for the hardware RAID controllers. The Recovery Guru application in the GUI is extremely useful for handling failure events.
# /usr/sbin/osa/lad           Lists the c#t#d# of logical disks available in hardware
# /usr/sbin/osa/drivutil -l   Lists the individual disks in a logical disk volume
# /usr/sbin/osa/healthck -a   Shows array hardware status

Important documentation:
http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/A1000D1000/index.html
Version Matrix:
http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=finfodoc%2F43483&zone_110=43483&wholewords=on

Sun StorEdge[TM] T3 and Sun StorEdge[TM] 6120 arrays

Used with : Sun StorEdge T3 and Sun StorEdge 6120 arrays

Features: Internal management by a hardware RAID controller which can be
accessed like a UNIX system through telnet or a serial service port.

Commands to check:

# telnet          Username=root, password set by customer
# fru stat        Field replaceable unit status
# fru list        Field replaceable unit list
# vol stat        Volume status
# vol list        Volume list
# proc list       Process list (for example, logical volume construction or recovery)

Important documentation:
http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/T3WG/index.html

Sun StorEdge 3310, Sun StorEdge 3510 FC Array

Used with: Sun StorEdge 3310, Sun StorEdge 3510 FC Array

Features: Internal management by a RAID controller, which can be accessed
through an ethernet management port or serial port. Features a text terminal
menu interface. Can also be managed by a UNIX GUI or UNIX command line interface
from the host.

Commands to use:

For in-band management (through SCSI or FCAL interface)

# sccli                        Starts the command line interface (CLI) for array management
sccli> show configuration      Shows the status of array components
sccli> show disks              Displays the status of disks; note that faulted disks do not appear in the listing
sccli> show events             Displays the system log
sccli> show FRUs               Lists FRUs, including part numbers and serial numbers
sccli> show logical-drives     Displays LUNs and LUN status; shows whether a spare is configured and any failed disks
sccli> show redundancy-mode    Displays controller status
sccli> help                    Displays the available CLI commands

Manual: http://www.sun.com/products-n-solutions/hardware/docs/html/817-4951-10/

# ssconsole                    Launches the UNIX GUI for monitoring and management.

SVM mirror of your root filesystem in 10 steps

Setting up a mirror of your root filesystem in 10 steps

Read man pages for all the commands below for more info.

You’ll need a partition at least as big as your / (root) partition, plus at least 3 other partitions with at least 5 MB of free space each. Preferably, create at least 3 small partitions a couple of MB in size, on different disks on different controllers (which are preferably not part of your mirror or RAID setup), to hold the state database replicas. In the example, the root partition is c0t0d0s0 and the mirror partition is c0t2d0s4.

1.

Create the state databases:

metadb -f -a /dev/rdsk/c1t3d0s7
metadb -a /dev/rdsk/c2t4d0s7
metadb -a /dev/rdsk/c3t5d0s7

2.

Initialize a RAID 0 with 1 stripe on your root partition:

metainit -f d11 1 1 /dev/rdsk/c0t0d0s0

3.

The same on the mirror partition:

metainit d12 1 1 /dev/rdsk/c0t2d0s4

4.

Initialize the mirror:

metainit d10 -m d11

5.

Reflect this change in /etc/vfstab (this command will edit your vfstab):

metaroot d10

6.

Reboot your system so that it rereads /etc/vfstab.
7.

Connect the mirror partition to your mirror:

metattach d10 d12

Issue the metastat command to see that this works; give the mirror time to synchronize.
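For example, to watch just this mirror:

metastat d10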
8.

In the OpenBoot PROM, create an alias for the mirror partition, so you can boot from it in case of emergency:

OK nvalias mirror /pci@1f,0/ide@d/disk@2,0:e

Read the OpenBoot FAQ for information on device trees; the above line is architecture dependent!
9.

Add this alias to the list of boot devices:

OK setenv boot-device disk mirror

10.

Test your mirror:

OK boot mirror

Note: Volume Manager complains if it needs to mirror onto an s0 that starts at cylinder 0. Therefore, have your first partition start at cylinder 1 if you need it as the second half of a mirror.

Solaris Software Raid HowTo

This how to guide shows how to mirror two disks, including the root and swap slices. The guide has been tested with Solaris Volume Manager on Solaris 9/10. It should work basically the same for all Solaris versions with Solstice DiskSuite…

This howto mirrors the root (/) partition and the swap slice in 11 steps:

– Partition the first disk

# format c0t0d0

Use the partition tool ("p" to enter the partition menu, then "p" again to print the table) to set up the slices. We assume the following slice setup afterwards:

#Part Tag Flag Cylinders Size Blocks
0 root wm 870 – 7389 14.65GB (6520/0/0) 30722240
1 swap wu 0 – 869 1.95GB (870/0/0) 4099440
2 backup wm 0 – 7505 16.86GB (7506/0/0) 35368272
3 unassigned wm 7390 – 7398 20.71MB (9/0/0) 42408
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

1. Copy the partition table of the first disk to its future mirror disk

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

2. Create at least two state database replicas on each disk

# metadb -a -f -c 2 c0t0d0s3 c0t1d0s3

Check the state of all replicas with metadb:

# metadb

Notes:

* A state database replica contains configuration and state information about the metadevices. Make sure that at least 50% of the replicas are always available!

3. Create the root slice mirror and its first submirror

# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d30 -m d10

Run metaroot to prepare /etc/vfstab and /etc/system (do this only for the root slice!):

# metaroot d30

4. Create the swap slice mirror and its first submirror

# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d31 -m d11

5. Edit /etc/vfstab to mount all mirrors after boot, including mirrored swap

/etc/vfstab before changes:

fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
swap - /tmp tmpfs - yes -

/etc/vfstab after changes:

fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d31 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
swap - /tmp tmpfs - yes -

Notes:

* The entry for the root device (/) has already been altered by the metaroot command we executed before.

6. Reboot the system

# lockfs -fa && init 6

7. Attach the second submirrors to all mirrors

# metattach d30 d20
# metattach d31 d21

Notes:

* This will finally cause the data from the boot disk to be synchronized with the mirror drive.
* You can use metastat to track the mirroring progress.
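For example:

# metastat | grep progress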

8. Change the crash dump device to the swap metadevice

# dumpadm -d `swap -l | tail -1 | awk '{print $1}'`

9. Make the mirror disk bootable

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

Notes:

* This will install a boot block to the second disk.

10. Determine the physical device path of the mirror disk

# ls -l /dev/dsk/c0t1d0s0
… /dev/dsk/c0t1d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a

11. Create a device alias for the mirror disk

# eeprom "nvramrc=devalias mirror /pci@1f,4000/scsi@3/disk@1,0"
# eeprom "use-nvramrc?=true"

Add the mirror device alias to the Open Boot parameter boot-device to prepare the case of a problem with the primary boot device.

# eeprom "boot-device=disk mirror cdrom net"

You can also configure the device alias and boot-device list from the Open Boot Prompt (OBP a.k.a. ok prompt):

ok nvalias mirror /pci@1f,4000/scsi@3/disk@1,0
ok setenv use-nvramrc? true
ok setenv boot-device disk mirror cdrom net

How to set up SVM using RAID 1, mirroring one disk to another

CONTENTS:

INTRODUCTION
PART I: DISK LAYOUT
PART II: METADB SETUP – FREEING SWAP SPACE FOR A NEW SLICE
PART III: VTOC COPY AND METADB INSTALLATION
PART IV: CONFIGURING OBP AND TESTING

——————————————————————————————

INTRODUCTION

This file is an attempt to explain how to set up RAID 1 on a Solaris 10 machine using
Solaris Volume Manager. This was done on a Sun E4500 attached by fibre to a Sun StorEdge A5100
array running Solaris 10 release 3/05, but it should work on any Solaris system. I tried
to make this as easy to read as possible because all of the documentation I found was
hard to comprehend at first.

——————————————————————————————

PART I: DISK LAYOUT

The disks I will be using are c0t096d0 c0t097d0 for the Solaris OS and c0t098d0 c0t099d0
for the mirrors.

Here’s a layout of how I have c0t096d0:

0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 257 – 292 63.12MB (36/0/0) 129276

Here’s a layout of how I have c0t097d0:

0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 home wm 0 – 4923 8.43GB (4924/0/0) 17682084

c0t096d0 is using all of the slices and c0t097d0 is filled completely with slice 7
(/export/home). This was automatically set up in the Solaris 10 install.

——————————————————————————————

PART II: METADB SETUP – FREEING SWAP SPACE FOR A NEW SLICE

If you already have a free slice, you don’t need to do this; skip down to PART III.

The first part in setting up a mirror is the metadbs. These are databases placed on
slices that hold the configuration information for the mirrors. It’s good to have these on their own
slice, and it’s good to have at least two of them on each disk. After installing Solaris
I realized I didn’t have any free slices to use for the metadbs, so I had to shrink my
swap to allocate space for a new slice and then add swap back. This is how I did it:

First, boot into single-user mode.

{5} ok boot -s

Once it’s booted, list our swap status:

# swap -l

swapfile dev swaplo blocks free
/dev/dsk/c0t96d0s4 118,76 16 1052144 1052144

This tells us swap is on slice 4 (c0t96d0s4); we want to delete it:

# swap -d /dev/dsk/c0t96d0s4
/dev/dsk/c0t96d0s4 was dump device —
invoking dumpadm(1M) -d swap to select new dump device
dumpadm: no swap devices are available

List the swaps again to make sure it’s gone:

# swap -l
No swap devices configured

Enter the format utility (this is where you manage the disks):

# format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c0t96d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203723015c,0
1. c0t97d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370ede66,0
2. c0t98d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0
3. c0t99d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370bbfd9,0
4. c0t100d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203716830d,0
5. c0t101d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037146891,0
6. c0t102d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370cf8dc,0
7. c0t112d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037139837,0
8. c0t113d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203713bb84,0
9. c0t114d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020371413ba,0
10. c0t115d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037141b20,0

Specify disk (enter its number): 0
selecting c0t96d0
[disk formatted]
Warning: Current Disk has mounted partitions.

partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 292 513.75MB (293/0/0) 1052163
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 0 0 (0/0/0) 0

The swap partition runs from cylinders 0-292, so I want to make it smaller.
Choose partition 4 (swap) to modify it.

partition> 4
Part Tag Flag Cylinders Size Blocks
4 swap wu 0 – 292 513.75MB (293/0/0) 1052163

Enter partition id tag[swap]:
Enter partition permission flags[wu]:
Enter new starting cyl[0]:
Enter partition size[1052163b, 293c, 292e, 513.75mb, 0.50gb]: 450mb
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 0 0 (0/0/0) 0

This made it go to cylinders 0-256 instead of 0-292, so now I have 36 cylinders (257-292) for the
unassigned slice we are about to create as number 7.

partition> 7
Part Tag Flag Cylinders Size Blocks
7 unassigned wm 0 512.00MB (292/0/0) 1048572

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[257]:
Enter partition size[1048572b, 292c, 548e, 512.00mb, 0.50gb]: 292e
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 257 – 292 63.12MB (36/0/0) 129276

partition> label
Ready to label disk, continue? y

partition> quit

# reboot

After it reboots, check the slices again with format to make sure they are the same.

——————————————————————————————

PART III – VTOC COPY AND METADB INSTALLATION

Now that we have the slices set up the way we want them, we need to make them exactly the
same on the mirror drive (c0t98d0). There is an easy command to do this:

# prtvtoc /dev/rdsk/c0t96d0s2 | fmthard -s - /dev/rdsk/c0t98d0s2
fmthard: New volume table of contents now in place.

After the slices are all set up on both disks, we are ready to create
the metadbs on both disks.

Remember that above we allocated space from swap for slice 7 to hold the metadbs, so that is
the slice we will specify here. We are telling it to create two (-c 2) metadbs
on each disk.

# metadb -f -a -c 2 c0t96d0s7 c0t98d0s7

Create concat/stripe of slice 0 (c0t96d0s0).

# metainit -f d10 1 1 c0t96d0s0
d10: Concat/Stripe is setup

Create concat/stripe of slice 0 on mirror disk (c0t98d0s0).

# metainit -f d20 1 1 c0t98d0s0
d20: Concat/Stripe is setup

Create d0 and attach submirror d10.

# metainit d0 -m d10
d0: Mirror is setup

For ONLY the root slice, you can use the metaroot command to update the vfstab.
For other slices, you have to update the vfstab by hand.

# metaroot d0

Reboot the machine. You have to reboot after running the metaroot command before
attaching the second submirror.

# reboot

Attach the second submirror (d20) to the volume (d0), which causes a mirror resync.

# metattach d0 d20
d0: submirror d20 is attached

To check the progress of the attachment:

# metastat | grep progress
Resync in progress: 12 % done

After this, the root slice is mirrored, and the same procedure can be used for all
other slices.
To double-check your configuration, type metastat; the output should look similar to this:

# metastat
d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 797202 blocks (389 MB)

d10: Submirror of d0
State: Okay
Size: 797202 blocks (389 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t96d0s0 0 No Okay Yes

d20: Submirror of d0
State: Okay
Size: 797202 blocks (389 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t98d0s0 0 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
c0t98d0 Yes id1,ssd@n200000203714ef36
c0t96d0 Yes id1,ssd@n200000203723015c

——————————————————————————————

PART IV – CONFIGURING OBP AND TESTING

Now that we have the root slice mirrored, we can set up the OBP to boot from the mirror
root slice if the main one fails.

First, get the device path for the mirror slice.
Write this down; we will need it for the OBP configuration.

# ls -al /dev/dsk/c0t98d0s0
lrwxrwxrwx 1 root root 74 May 11 16:05 /dev/dsk/c0t98d0s0 -> ../../devices/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0:a

Reboot and go into the OBP.

# reboot

Now that we’re in the OBP, we set up an alias called "fcalbackup" that points to the root mirror
slice path we wrote down.

{0} ok nvalias fcalbackup /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0:a

Show the boot devices

{0} ok printenv boot-device
boot-device = disk diskbrd diskisp disksoc net

Now we set the boot device to boot from fcal, then fcalbackup if that fails, then net.

{0} ok setenv boot-device fcal fcalbackup net
boot-device = fcal fcalbackup net

I also set the diag-device because my diags are always set to max.

{0} ok printenv diag-device
diag-device = fcal

{0} ok setenv diag-device fcal fcalbackup
diag-device = fcal fcalbackup

NOTE: If you mess up when creating an nvalias and have to delete it, you must
reboot (or reset) before the change shows up in the devalias output.
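For example, a sketch of cleaning up a mistyped alias (nvunalias removes the stored alias; the devalias output reflects the change after the reset):

{0} ok nvunalias fcalbackup
{0} ok reset-all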

If you want to test to see if it works, just do "boot fcalbackup" to boot from the mirror.

In the first part of the guide I explained how to mirror just
your root slice; this part shows how to mirror the rest of the
slices.

So last time I left off with just mirroring the root slice (c0t96d0s0)
and now we will start with slice 1 (c0t96d0s1).

NOTE: In d21, the 2 is the submirror number and the 1 is the slice number;
likewise for d11, where the first 1 is the submirror number and the second 1 is the slice number.

Create concat/stripe of slice 1.

# metainit -f d11 1 1 c0t96d0s1

Create concat/stripe of slice 1 on the mirror disk.

# metainit d21 1 1 c0t98d0s1

Create d1 and attach d11.

# metainit d1 -m d11

Edit your /etc/vfstab manually for slices other than root.
Sample:

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t96d0s4 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/dsk/c0t96d0s6 /dev/rdsk/c0t96d0s6 /usr ufs 1 no -
/dev/dsk/c0t96d0s3 /dev/rdsk/c0t96d0s3 /var ufs 1 no -
/dev/dsk/c0t97d0s7 /dev/rdsk/c0t97d0s7 /export/home ufs 2 yes -
/dev/dsk/c0t96d0s5 /dev/rdsk/c0t96d0s5 /opt ufs 2 yes -
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr/openwin ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

I changed the c0t96d0s1 entry to point to /dev/md/dsk/d1 and /dev/md/rdsk/d1, and that’s it.

This tells SVM it’s OK to boot up even if only half of the metadbs are available:

# echo 'set md_mirrored_root_flag=1' >> /etc/system

# metainit -f d13 1 1 c0t96d0s3
d13: Concat/Stripe is setup
# metainit d23 1 1 c0t98d0s3
d23: Concat/Stripe is setup
# metainit d3 -m d13
d3: Mirror is setup
# metainit -f d14 1 1 c0t96d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c0t98d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup
# metainit -f d15 1 1 c0t96d0s5
d15: Concat/Stripe is setup
# metainit d25 1 1 c0t98d0s5
d25: Concat/Stripe is setup
# metainit d5 -m d15
d5: Mirror is setup
# metainit -f d16 1 1 c0t96d0s6
d16: Concat/Stripe is setup
# metainit d26 1 1 c0t98d0s6
d26: Concat/Stripe is setup
# metainit d6 -m d16
d6: Mirror is setup

Edit all other entries in /etc/vfstab like we did above.

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d4 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d6 /dev/md/rdsk/d6 /usr ufs 1 no -
/dev/md/dsk/d3 /dev/md/rdsk/d3 /var ufs 1 no -
/dev/dsk/c0t97d0s7 /dev/rdsk/c0t97d0s7 /export/home ufs 2 yes -
/dev/md/dsk/d5 /dev/md/rdsk/d5 /opt ufs 2 yes -
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr/openwin ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

# sync; sync; reboot

After it reboots, attach the other submirrors to d1, d3, d4, d5, and d6.

# metattach d1 d21
d1: submirror d21 is attached
# metattach d3 d23
d3: submirror d23 is attached
# metattach d4 d24
d4: submirror d24 is attached
# metattach d5 d25
d5: submirror d25 is attached
# metattach d6 d26
d6: submirror d26 is attached

# metastat | grep progress
Resync in progress: 0 % done
Resync in progress: 1 % done
Resync in progress: 16 % done
Resync in progress: 24 % done
Resync in progress: 18 % done

Solaris Volume Manager Cheatsheet

Solaris Volume Manager

To verify that you have DiskSuite 3.0 – 4.2:

# pkginfo -l SUNWmd
To verify that you have DiskSuite 4.2.1 or SVM:

# pkginfo -l SUNWmdu
Where is it?
SDS 4.0 -> 2.5.1 Server Pack, INTRANET EXTENSION CD-ROM
Not customer-downloadable.

SDS 4.1 -> 2.6 Server Pack, INTRANET EXTENSION CD-ROM
Not customer-downloadable.

SDS 4.2 -> 2.7 Server Pack, EASY ACCESS SERVER 2.0/3.0 CD-ROM
NOTE: This CD is in its own little cardboard box…
Not customer-downloadable.

SDS 4.2.1 -> Solaris 8, "Solaris Software" CD 2 of 2
under the EA (easy access) directory
See also: Storage Download Center

SVM -> Solaris 9 or later, part of the End User install cluster

Packages To Install
SDS 4.0, 4.1, and 4.2:

SUNWmd (required) Base Product
SUNWmdg (optional) GUI package
SUNWmdn (optional) SNMP log daemon package

SDS 4.2.1

SUNWmdu Solstice DiskSuite Commands
SUNWmdr Solstice DiskSuite Drivers
SUNWmdx Solstice DiskSuite Drivers(64-bit)
SUNWmdg (optional) Solstice DiskSuite Tool
SUNWmdnr (optional) Solstice DiskSuite Log Daemon Configuration Files
SUNWmdnu (optional) Solstice DiskSuite Log Daemon

The following packages are EarlyAccess software only and aren’t
(yet) supported. These give the user the ability to try out the new
Logical Volume Manager GUI inside SMC (Solaris Management Console).

SUNWlvma (really optional) Solaris Volume Management API’s
SUNWlvmg (really optional) Solaris Volume Management Application
SUNWlvmr (really optional) Solaris Volume Management (root)

NOTE: Later versions of Solaris 8 have a few additional packages for
DiskSuite, namely, 3 packages starting with SUNWlvm*. These do
not have to be loaded. They are beta-test packages for the new
Logical Volume Manager, which will premiere with Solaris 9.

SVM – Solaris 9 only

SUNWmdr Solaris Volume Manager, (Root)
SUNWmdu Solaris Volume Manager, (Usr)
SUNWmdx Solaris Volume Manager Drivers, (64-bit)
SUNWlvma Solaris Volume Management API’s [GUI]
SUNWlvmg Solaris Volume Management Application [GUI]
SUNWlvmr Solaris Volume Management (root) [GUI]

To run the Solaris Volume Manager GUI, you must load the SMC 2.0
packages.

Output To Get From Customers
1.) Ask, if appropriate, if the root disk is under SDS/SVM control.
2.) # metadb
3.) # metastat