SVM CheatSheet

# umount filesystem (unmount any non-svm open filesystems on failed disk)

# metadb -d c1t0d0s7 (if replicas on this disk, remove them)

# metadb | grep c1t0d0 (verify there are no existing replicas left on the disk)

# cfgadm -c unconfigure c1::dsk/c1t0d0 (might not complete command if busy, remove failed disk)

Insert a new disk :

# cfgadm -c configure c1::dsk/c1t0d0 (configure new disk)

# prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/firstdisk (get format for new disk)

# fmthard -s /tmp/firstdisk /dev/rdsk/c1t0d0s2 (format disk same as mirror)

# metadevadm -u c1t0d0 (will update the New DevID)

# metadb -a c1t0d0s7 (if necessary, recreate any replicas)

# metareplace -e d0 c1t0d0s0 (repeat for each slice on the disk that belongs to a submirror, naming its mirror)

# metastat -i (clears the Unavailable state of the devices so they return to Okay)
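The cheat sheet assumes you already know which metadevices sit on the failed disk. A minimal, hedged sketch for finding them first (the disk name c1t0d0 follows the example above; adjust for your system):

# metastat -p | grep c1t0d0 (lists the submirrors/concats built on this disk's slices)
# metastat | egrep '^d[0-9]+:|State:' (pairs each metadevice with its state; look for "Needs maintenance")

Then run metareplace -e <mirror> <slice> once for each affected slice, as in the metareplace line above.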

RAID-5 disk replacement (use when the RAID unit shows "State: Needs maintenance" in metastat output):

On the failing disk (if you can still access the disk; if not, start at the cfgadm -c unconfigure step):

# umount filesystem (unmount any open non-svm filesystems on this disk)

# metadb -d c1t0d0s7 (if any replicas are on this disk, remove them)

# metadb | grep c1t0d0 (verify there are no existing replicas left on the disk)

# cfgadm -c unconfigure c1::dsk/c1t0d0 (might not complete command if busy, remove the failed disk)

Insert a new disk :

# cfgadm -c configure c1::dsk/c1t0d0

Run \\\’format\\\’ or \\\’prtvtoc\\\’ to put the desired partition table on the new disk

# metadevadm -u c1t0d0 (will update the New DevID)

# metadb -a c1t0d0s7 (if necessary, recreate any replicas)

# metareplace -e <raid-metadevice> c1t0d0s0 (do this for each RAID-5 metadevice on the disk)

# metastat -i (clears the Unavailable state of the devices so they return to Okay)

SDS – How to mirror the root disk

Use this procedure to mirror the system disk partitions using Solstice DiskSuite:

– first format the second disk exactly like the original root disk: (typically s7 is reserved for metadatabase)

# prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/firstdisk

# fmthard -s /tmp/firstdisk /dev/rdsk/c1t0d0s2

– create at least 3 state database replicas on unused (10mb) slices.

# metadb -a -f -c 3 c0t0d0s7 c1t0d0s7 (-a and -f options create the initial state database replicas. -c 3

puts three state database replicas on each specified slice)

– for each slice, you must create 3 new metadevices: one for the existing slice, one for the slice on the

mirrored disk, and one for the mirror. To do this, make the appropriate entries in the md.tab file.

slice 0, create the following entries in (/etc/lvm/md.tab)

d10 1 1 /dev/dsk/c0t0d0s0

d20 1 1 /dev/dsk/c1t0d0s0

d0 -m d10

slice 1, create the following entries in (/etc/lvm/md.tab)

d11 1 1 /dev/dsk/c0t0d0s1

d21 1 1 /dev/dsk/c1t0d0s1

d1 -m d11

Follow this example, creating groups of 3 entries for each data slice on the root disk.
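For instance, if /var lived on slice 3 (a hypothetical layout, not taken from this procedure), the next md.tab group would follow the same pattern:

d13 1 1 /dev/dsk/c0t0d0s3

d23 1 1 /dev/dsk/c1t0d0s3

d3 -m d13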

– run the metainit command to create all the metadevices you have just defined in the md.tab file.

If you use the -a option, all the metadevices defined in the md.tab will be created.

# metainit -a -f (-f is required because the slices on the root disk are currently mounted)

– make a backup copy of the vfstab file: # cp /etc/vfstab /etc/vfstab.pre_sds

– run the metaroot command for the metadevice you designated for the root mirror. In the example

above, we created d0 to be the mirror device for the root partition, so we would run:

# metaroot d0

– edit the /etc/vfstab file to change each slice to the appropriate metadevice. The 'metaroot' command has already

done this for you for the root slice.

/dev/dsk/c0t0d0s1 - - swap - no -

to

/dev/md/dsk/d1 - - swap - no -

Make sure that you change the slice to the main mirror, d1, not to the simple submirror, d11.

– reboot the system. Do not proceed without rebooting your system, or data corruption will occur.

– After the system has rebooted, you can verify that root and other slices are under DiskSuite's control:

# df -k

# swap -l

The outputs of these commands should reflect the metadevice names, not the slice names.

– Last, attach the second submirror to the metamirror device.

# metattach d0 d20 (must be done for each partition on the disk, and will start the syncing of data)

– to follow the progress of this syncing for this mirror, enter the command

# metastat d0

Although you can run all the metattach commands one right after another, it is a good idea to run the next

metattach command only after the first syncing has completed. Once you have attached all the submirrors

to the metamirrors, and all the syncing has completed, your root disk is mirrored.
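A hedged sketch of that one-at-a-time approach, reusing the d0/d20 and d1/d21 names from the example above (Bourne shell, run as root):

# metattach d0 d20
# while metastat d0 | grep "Resync in progress" > /dev/null
> do
>     sleep 60
> done
# metattach d1 d21

Repeat the same wait loop after each metattach for the remaining mirrors.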

SVM – How to add a disk to a disk set

Expanding Disk Sets
Procedure: How to Add Disks to a Disk Set

Caution:

Do not add disks larger than 1 Tbyte to disk sets if you expect to run the Solaris software with a 32-bit kernel or if you expect to use a version of the Solaris OS prior to the Solaris 9 4/03 release. See Overview of Multi-Terabyte Support in Solaris Volume Manager for more information about multi-terabyte volume support in Solaris Volume Manager.

Only disks that meet the following conditions can be added to a disk set (a quick shell check is sketched after this list):

* The disk must not be in use in a volume or hot spare pool.

* The disk must not contain a state database replica.

* The disk must not be currently mounted, swapped on, or otherwise opened for use by an application.
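A quick, hedged shell check of those three conditions (the disk name c1t6d0 matches the example below; substitute your own):

# metastat -p | grep c1t6d0 (must print nothing: not part of any volume or hot spare pool)
# metadb | grep c1t6d0 (must print nothing: no state database replica on the disk)
# mount | grep c1t6d0 ; swap -l | grep c1t6d0 (must print nothing: not mounted or swapped on)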

Before You Begin

Check Guidelines for Working With Disk Sets.
Steps

1. To add disks to a disk set, use one of the following methods:

* From the Enhanced Storage tool within the Solaris Management Console, open the Disk Sets node. Select the disk set that you want to modify. Then click the right mouse button and choose Properties. Select the Disks tab. Click Add Disk. Then follow the instructions in the wizard. For more information, see the online help.

* To add disks to a disk set from the command line, use the following form of the metaset command:

# metaset -s diskset-name -a disk-name

-s diskset-name

Specifies the name of a disk set on which the metaset command will work.

-a

Adds disks to the named disk set.

disk-name

Specifies the disks to add to the disk set. Disk names are in the form cxtxdx; the "sx" slice identifiers are not included when adding a disk to a disk set.

See the metaset(1M) man page for more information.

The first host to add a disk to a disk set becomes the owner of the disk set.
Caution:

Do not add a disk with data to a disk set. The process of adding a disk with data to a disk set might repartition the disk, destroying the data.
2. Verify the status of the disk set and disks.

# metaset

Example 19-3: Adding a Disk to a Disk Set

# metaset -s blue -a c1t6d0
# metaset
Set name = blue, Set number = 1

Host Owner
host1 Yes

Drive Dbase
c1t6d0 Yes

In this example, the host name is host1. The shared disk set is blue. Only the disk, c1t6d0, has been added to the disk set blue.

Optionally, you could add multiple disks at once by listing each disk on the command line. For example, you could use the following command to add two disks to the disk set simultaneously:

# metaset -s blue -a c1t6d0 c2t6d0

disk replacement howto

Any disk replacement should only be attempted by qualified service personnel, or
with the assistance of a qualified person. This document is only intended as an
introduction or reminder of the multiple software/hardware considerations that
can affect a safe and successful disk replacement. Full documentation should be
consulted when anything is unclear: http://docs.sun.com
Considerations

When planning to replace a disk, determine the following:

1. Is the underlying hardware SCSI or fibre channel? Is it a Storage Area Network (SAN) environment?

2. Is the disk under logical volume management?

2a. What software or hardware is being used for logical volume management?

2b. Is the volume a stripe (RAID 0), concatenation (RAID 0), mirror (RAID 1), or RAID 5?

3. If the disk is not under volume management, are any partitions on the disk mounted? Disks should not be removed while there is a potential for I/O operations to occur to the disk.

4. Has the disk been allocated for raw usage (i.e., for a database)?

5. Is the disk shared in a cluster environment? (Clusters have considerations for disks which may contain quorum information.)
Cluster Note:
Details for cluster environments are outside of the scope of this document.
Reference Cluster documentation, or get assistance from Sun Support Personnel.

The primary types of logical volume management used with SUN storage:
Software Volume Management

JBOD (Just a Bunch of Disks)

Mostly used with: internal disks, Sun StorEdge[TM] A5000 family, Sun StorEdge
D1000, Sun StorEdge D240, Netra[TM] st D130, Sun StorEdge S1, Sun StorEdge
UniPack, Sun StorEdge MultiPack, and others.

Disks may be mounted individually without any logical volume management, and
would show up in mount or df -k output in the form:

/dev/dsk/c#t#d#s#.

Commands to check:

# mount
# df -k

NOTE: If a disk is being used for raw access by a database, it will not appear in mount or df -k output. Confirm this with a system administrator familiar with the system.
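One hedged way to check for raw access (the slice name below is illustrative, not from this document) is to ask fuser whether any process holds the raw device open:

# fuser /dev/rdsk/c1t0d0s0 (prints the PIDs of processes using the raw slice; no PIDs means it is not open)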

Solaris[TM] Volume Manager (SVM), a.k.a. Solstice DiskSuite[TM] (SDS) or Online: Disk Suite[TM] (ODS)

Mostly used with: any JBOD disks

Commands to check:

# df -k Shows volumes mounted as /dev/md/dsk/d#
# metastat Lists volume configuration, shows disk states.
# /usr/opt/SUNWmd/sbin/metastat SDS or ODS may require the full path to the command.

Veritas Volume Manager (VxVM) aka Sun Enterprise Volume Manager[TM] (SEVM)

Mostly used with: any JBOD disks; automatically licensed by Sun if a Sun StorEdge A5000 family array is attached to the system

Commands to check:

# df -k Shows volumes mounted as /dev/vx/dsk/<volume> for rootdg,
or /dev/vx/dsk/<diskgroup>/<volume> for other disk groups.
# vxdisk list Lists disks, shows whether they're under VxVM control, and shows their
current state. The "error" state indicates that the disk is
not under VxVM control.
# vxprint -ht Lists volume configuration.
# vxdiskadm Text-based menus for VxVM administration. Options 4 and 5 are
used to replace failed disks.

Hardware commands

Fibre channel (FC-AL or SAN) hardware may require additional commands to ensure
creation of proper device links to the disk world wide name (WWN). Fibre
channel disks require luxadm commands to remove the WWNs in the device path. In
the Solaris 8 OS and later, adding, removing, or replacing a SCSI disk drive
requires the use of cfgadm (see cfgadm_scsi(1M) for more details). Consult the
documentation or man pages for further information.

Commands used:

# luxadm probe Lists arrays that use the SES driver for array enclosure management
# luxadm display Displays details of an array or its disks
# luxadm remove_device Removes device paths to a fibre channel disk
# cfgadm -x replace_device Starts the process of replacing a SCSI disk drive in the Solaris 8 OS and later
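A hedged example of the cfgadm flow for replacing a SCSI disk (the attachment point c1::dsk/c1t2d0 is illustrative):

# cfgadm -al (list attachment points and their current states)
# cfgadm -x replace_device c1::dsk/c1t2d0 (prompts you to pull the old disk and insert the new one)
# cfgadm -al (confirm the new disk shows up as configured)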

Important documentation – more information on cfgadm:
http://docs.sun.com/db/doc/805-7228/6j6q7uesd?a=view#devconfig2-15
Hardware Volume Management

Raid Manager 6 (RM6)

Used with: Sun StorEdge A1000, Sun RSM Array[TM] 2000, Sun StorEdge A3000, Sun StorEdge A3500FC

Features: Hardware RAID controller, managed from the host by GUI or command line.

Commands to check:

# /usr/lib/osa/bin/rm6 Launches management GUI for the
hardware raid controllers. The
Recovery Guru application in the GUI
is extremely useful for handling
failure events.
# /usr/sbin/osa/lad Lists c#t#d# of logical disks
available in hardware
# /usr/sbin/osa/drivutil -l List individual disks in logical disk
volume.
# /usr/sbin/osa/healthck -a Show array hardware status

Important documentation:
http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/A1000D1000/index.html
Version Matrix:
http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=finfodoc%2F43483&zone_110=43483&wholewords=on

Sun StorEdge[TM] T3 and Sun StorEdge[TM] 6120 arrays

Used with : Sun StorEdge T3 and Sun StorEdge 6120 arrays

Features: Internal management by a hardware RAID controller which can be
accessed like a UNIX system through telnet or a serial service port.

Commands to check:

# telnet Username=root, password set by customer
# fru stat Field replaceable unit status
# fru list Field replaceable unit list
# vol stat Volume status
# vol list Volume list
# proc list Process list (Example: logical volume
construction or recovery)

Important documentation:
http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/T3WG/index.html

Sun StorEdge 3310, Sun StorEdge 3510 FC Array

Used with: Sun StorEdge 3310, Sun StorEdge 3510 FC Array

Features: Internal management by a RAID controller, which can be accessed
through an ethernet management port or serial port. Features a text terminal
menu interface. Can also be managed by a UNIX GUI or UNIX command line interface
from the host.

Commands to use:

For in-band management (through SCSI or FCAL interface)

# sccli Starts command line interface (cli) for array
management.
sccli> show configuration Shows array components status
sccli> show disks display status of disks, note: faulted disks
do not appear in the listing.
sccli> show events display system log
sccli> show FRUs list fru’s; contains part numbers and serial numbers.
sccli> show logical-drives display LUN’s and LUN status; shows if spare
is configured and any failed disks.
sccli> show redundancy-mode displays controller status
sccli> help Displays cli commands available.

Manual: http://www.sun.com/products-n-solutions/hardware/docs/html/817-4951-10/

# ssconsole Launches the UNIX GUI for monitoring and management.

Solaris Software Raid HowTo

This how to guide shows how to mirror two disks, including the root and swap slices. The guide has been tested with Solaris Volume Manager on Solaris 9/10. It should work basically the same for all Solaris versions with Solstice DiskSuite…

This is how to mirror the root (/) partition and swap slice in 11 steps:

– Partition the first disk

# format c0t0d0

Use the partition tool (type "p" for partition, then "p" again to print the table) to set up the slices. We assume the following slice setup afterwards:

#Part Tag Flag Cylinders Size Blocks
0 root wm 870 – 7389 14.65GB (6520/0/0) 30722240
1 swap wu 0 – 869 1.95GB (870/0/0) 4099440
2 backup wm 0 – 7505 16.86GB (7506/0/0) 35368272
3 unassigned wm 7390 – 7398 20.71MB (9/0/0) 42408
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

1. Copy the partition table of the first disk to its future mirror disk

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

2. Create at least two state database replicas on each disk

# metadb -a -f -c 2 c0t0d0s3 c0t1d0s3

Check the state of all replicas with metadb:

# metadb

Notes:

* A state database replica contains configuration and state information about the metadevices. Make sure that at least 50% of the replicas are always available!
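* A hedged sketch for checking replica health after a disk problem (capital letters in the metadb flags column generally indicate an error):

# metadb | grep -c dsk (total number of replicas)
# metadb | grep dsk | grep '[A-Z]' (replicas reporting errors; ideally this prints nothing)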

3. Create the root slice mirror and its first submirror

# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d30 -m d10

Run metaroot to prepare /etc/vfstab and /etc/system (do this only for the root slice!):

# metaroot d30

4. Create the swap slice mirror and its first submirror

# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d31 -m d11

5. Edit /etc/vfstab to mount all mirrors after boot, including mirrored swap

/etc/vfstab before changes:

fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
swap - /tmp tmpfs - yes -

/etc/vfstab after changes:

fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d31 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
swap - /tmp tmpfs - yes -

Notes:

* The entry for the root device (/) has already been altered by the metaroot command we executed before.

6. Reboot the system

# lockfs -fa && init 6

7. Attach the second submirrors to all mirrors

# metattach d30 d20
# metattach d31 d21

Notes:

* This will finally cause the data from the boot disk to be synchronized with the mirror drive.
* You can use metastat to track the mirroring progress.

8. Change the crash dump device to the swap metadevice

# dumpadm -d `swap -l | tail -1 | awk '{print $1}'`

9. Make the mirror disk bootable

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

Notes:

* This will install a boot block to the second disk.

10. Determine the physical device path of the mirror disk

# ls -l /dev/dsk/c0t1d0s0
… /dev/dsk/c0t1d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a

11. Create a device alias for the mirror disk

# eeprom "nvramrc=devalias mirror /pci@1f,4000/scsi@3/disk@1,0"
# eeprom "use-nvramrc?=true"

Add the mirror device alias to the Open Boot parameter boot-device to prepare the case of a problem with the primary boot device.

# eeprom "boot-device=disk mirror cdrom net"

You can also configure the device alias and boot-device list from the Open Boot Prompt (OBP a.k.a. ok prompt):

ok nvalias mirror /pci@1f,4000/scsi@3/disk@1,0
ok use-nvramrc?=true
ok boot-device=disk mirror cdrom net

SVM mirror of your root filesystem in 10 steps

Setting up a mirror of your root filesystem in 10 steps

Read man pages for all the commands below for more info.

You'll need a partition at least as big as your / (root partition) and at least 3 other partitions with at least 5 MB of free space each. Preferably, create at least 3 small partitions a couple of MB in size, on different disks on different controllers (preferably not part of your mirror or RAID setup), to hold the state database replicas. In the example, the root partition is c0t0d0s0 and the mirror partition is c0t2d0s4.

1. Create the state databases:

metadb -f -a /dev/rdsk/c1t3d0s7
metadb -a /dev/rdsk/c2t4d0s7
metadb -a /dev/rdsk/c3t5d0s7

2. Initialize a RAID 0 with 1 stripe on your root partition:

metainit -f d11 1 1 /dev/rdsk/c0t0d0s0

3. The same on the mirror partition:

metainit d12 1 1 /dev/rdsk/c0t2d0s4

4. Initialize the mirror:

metainit d10 -m d11

5. Reflect this change in /etc/vfstab (this command will edit your vfstab):

metaroot d10

6. Reboot your system so that it rereads /etc/vfstab.

7. Connect the mirror partition to your mirror:

metattach d10 d12

Issue the metastat command to see that this works; give the mirror time to synchronize.

8. At the OpenBoot PROM, create an alias for the mirror partition, so you can boot from it in case of emergency:

OK nvalias mirror /pci@1f,0/ide@d/disk@2,0:e

Read the OpenBoot FAQ for information on device trees; the above line is architecture dependent!

9. Add this alias to the list of boot devices:

OK setenv boot-device disk mirror

10. Test your mirror:

OK boot mirror

Note: Volume Manager complains if it needs to mirror on an s0 that starts at cylinder 0. Therefore, have your first partition start at cylinder 1 if you need it as the second half of a mirror.

How to setup SVM using RAID 1 mirroring one disk to another

CONTENTS:

INTRODUCTION
PART I: DISK LAYOUT
PART II: METADB SETUP – FREEING SWAP SPACE FOR A NEW SLICE
PART III: VTOC COPY AND METADB INSTALLATION
PART IV: CONFIGURING OBP AND TESTING

——————————————————————————————

INTRODUCTION

This file is an attempt to explain how to set up RAID 1 on a Solaris 10 machine using
Solaris Volume Manager. This was done on a Sun E4500 attached by fibre to a Sun StorEdge A5100
array running the Solaris 10 3/05 release, but it should work on any Solaris system. I tried
to make this as easy to read as possible because all of the documentation I found was
hard to comprehend at first.

——————————————————————————————

PART I: DISK LAYOUT

The disks I will be using are c0t096d0 c0t097d0 for the Solaris OS and c0t098d0 c0t099d0
for the mirrors.

Here’s a layout of how I have c0t096d0:

0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 257 – 292 63.12MB (36/0/0) 129276

Here’s a layout of how I have c0t097d0:

0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 home wm 0 – 4923 8.43GB (4924/0/0) 17682084

c0t096d0 uses all of the slices, and c0t097d0 is filled completely by slice 7
(/export/home). This was set up automatically by the Solaris 10 install.

——————————————————————————————

PART II: METADB SETUP – FREEING SWAP SPACE FOR A NEW SLICE

If you already have a free slice, you don't need to do this; skip down to PART III.

The first step in setting up a mirror is the metadbs. These are databases placed on
slices that hold the information for the mirrors. It's good to have these on their own
slice, and it's good to have at least two of them on each disk. After installing Solaris,
I realized I didn't have any free slices to use for the metadbs, so I had to shrink my
swap to allocate space for a new slice and then add the swap back. This is how I did it:

First, boot into single-user mode.

{5} ok boot -s

Once it’s booted, list our swap status:

# swap -l

swapfile dev swaplo blocks free
/dev/dsk/c0t96d0s4 118,76 16 1052144 1052144

This tells us it is on slice 4 (c0t96d0s4); we want to delete it:

# swap -d /dev/dsk/c0t96d0s4
/dev/dsk/c0t96d0s4 was dump device —
invoking dumpadm(1M) -d swap to select new dump device
dumpadm: no swap devices are available

List the swaps again to make sure it’s gone:

# swap -l
No swap devices configured

Enter the format utility (this is where you manage the disks):

# format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c0t96d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203723015c,0
1. c0t97d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370ede66,0
2. c0t98d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0
3. c0t99d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370bbfd9,0
4. c0t100d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203716830d,0
5. c0t101d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037146891,0
6. c0t102d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020370cf8dc,0
7. c0t112d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037139837,0
8. c0t113d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203713bb84,0
9. c0t114d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020371413ba,0
10. c0t115d0
/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w2100002037141b20,0

Specify disk (enter its number): 0
selecting c0t96d0
[disk formatted]
Warning: Current Disk has mounted partitions.

partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 292 513.75MB (293/0/0) 1052163
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 0 0 (0/0/0) 0

The swap partition spans cylinders 0-292, so I want to shrink it.
Select slice 4 to modify it.

partition> 4
Part Tag Flag Cylinders Size Blocks
4 swap wu 0 – 292 513.75MB (293/0/0) 1052163

Enter partition id tag[swap]:
Enter partition permission flags[wu]:
Enter new starting cyl[0]:
Enter partition size[1052163b, 293c, 292e, 513.75mb, 0.50gb]: 450mb
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 0 0 (0/0/0) 0

This made it go to cylinders 0-256 instead of 0-292, so now I have 36 cylinders free for the
unassigned slice we are about to create as number 7.

partition> 7
Part Tag Flag Cylinders Size Blocks
7 unassigned wm 0 512.00MB (292/0/0) 1048572

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[257]:
Enter partition size[1048572b, 292c, 548e, 512.00mb, 0.50gb]: 292e
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 293 – 514 389.26MB (222/0/0) 797202
1 usr wm 515 – 970 799.56MB (456/0/0) 1637496
2 backup wm 0 – 4923 8.43GB (4924/0/0) 17682084
3 var wm 971 – 1156 326.14MB (186/0/0) 667926
4 swap wu 0 – 256 450.63MB (257/0/0) 922887
5 unassigned wm 1157 – 2720 2.68GB (1564/0/0) 5616324
6 usr wm 2721 – 4923 3.77GB (2203/0/0) 7910973
7 unassigned wm 257 – 292 63.12MB (36/0/0) 129276

partition> label
Ready to label disk, continue? y

partition> quit

# reboot

After it reboots, check the slices again with format to make sure they are the same.

——————————————————————————————

PART III – VTOC COPY AND METADB INSTALLATION

Now that we have the slices set up how we want them, we need to make them exactly the
same on the mirror drive (c0t98d0). There is an easy command to do this:

# prtvtoc /dev/rdsk/c0t96d0s2 | fmthard -s - /dev/rdsk/c0t98d0s2
fmthard: New volume table of contents now in place.

After the slices are all set up in place on both disks, we are ready to create
the metadbs on both disks.

Remember that above we allocated space from swap to use slice 7 for the metadbs, so this is
where we specify it. Here we are telling metadb to create two (-c 2) replicas
on each disk.

# metadb -f -a -c 2 c0t96d0s7 c0t98d0s7

Create concat/stripe of slice 0 (c0t96d0s0).

# metainit -f d10 1 1 c0t96d0s0
d10: Concat/Stripe is setup

Create concat/stripe of slice 0 on mirror disk (c0t98d0s0).

# metainit -f d20 1 1 c0t98d0s0
d20: Concat/Stripe is setup

Create d0 and attach submirror d10.

# metainit d0 -m d10
d0: Mirror is setup

For ONLY the root slice, you can use the metaroot command to update the vfstab.
For other slices, you have to update the vfstab by hand.

# metaroot d0

Reboot the machine. You have to reboot after running the metaroot command before
attaching the second submirror.

# reboot

Attach the second submirror (d20) to the volume (d0), which causes a mirror resync.

# metattach d0 d20
d0: submirror d20 is attached

To check the progress of the attachment:

# metastat | grep progress
Resync in progress: 12 % done

After this, the root slice is set up to mirror, and the same can be done for all
other slices.
To double-check your configuration, run metastat; it should look similar to this:

# metastat
d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 797202 blocks (389 MB)

d10: Submirror of d0
State: Okay
Size: 797202 blocks (389 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t96d0s0 0 No Okay Yes

d20: Submirror of d0
State: Okay
Size: 797202 blocks (389 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t98d0s0 0 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
c0t98d0 Yes id1,ssd@n200000203714ef36
c0t96d0 Yes id1,ssd@n200000203723015c

——————————————————————————————

PART IV – CONFIGURING OBP AND TESTING

After we have the root slice mirrored now, we can setup the OBP to boot from the mirror
root slice if the main one fails.

First, get the device path for the mirror slice and write it down; we will need it for
the OBP configuration.

# ls -al /dev/dsk/c0t98d0s0
lrwxrwxrwx 1 root root 74 May 11 16:05 /dev/dsk/c0t98d0s0 -> ../../devices/sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0:a

Reboot and go into the OBP.

# reboot

Now that we're in the OBP, we set up an alias called "fcalbackup" to point to the root mirror
slice path that we wrote down.

{0} ok nvalias fcalbackup /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w210000203714ef36,0:a

Show the boot devices

{0} ok printenv boot-device
boot-device = disk diskbrd diskisp disksoc net

Now we set the boot device to boot from fcal, then fcalbackup if fcal fails, then net.

{0} ok setenv boot-device fcal fcalbackup net
boot-device = fcal fcalbackup net

I also set the diag-device because my diags are always set to max.

{0} ok printenv diag-device
diag-device = fcal

{0} ok setenv diag-device fcal fcalbackup
diag-device = fcal fcalbackup

NOTE: If you make a mistake when you create an nvalias and have to delete it, you must
reboot before the change shows up when you issue the devalias command.
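A hedged example of cleaning up a mistyped alias from the ok prompt (the alias name follows this guide; nvunalias and reset-all are standard OBP commands):

{0} ok nvunalias fcalbackup
{0} ok reset-all

After the reset, devalias should no longer list the removed alias.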

If you want to test to see if it works, just do "boot fcalbackup" to boot from the mirror.

The first part of this guide explained how to mirror just the root slice;
this part shows how to mirror the rest of the slices.

So last time I left off with just mirroring the root slice (c0t96d0s0)
and now we will start with slice 1 (c0t96d0s1).

NOTE: In d21, the 2 is the submirror number and the 1 is the slice number;
likewise in d11, the first 1 is the submirror number and the second 1 is the slice number.

Create concat/stripe of slice 1.

# metainit -f d11 1 1 c0t96d0s1

Create concat/stripe of slice 1 on the mirror disk.

# metainit d21 1 1 c0t98d0s1

Create d1 and attach d11.

# metainit d1 -m d11

Edit your /etc/vfstab manually for slices other than root.
Sample:

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t96d0s4 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/dsk/c0t96d0s6 /dev/rdsk/c0t96d0s6 /usr ufs 1 no -
/dev/dsk/c0t96d0s3 /dev/rdsk/c0t96d0s3 /var ufs 1 no -
/dev/dsk/c0t97d0s7 /dev/rdsk/c0t97d0s7 /export/home ufs 2 yes -
/dev/dsk/c0t96d0s5 /dev/rdsk/c0t96d0s5 /opt ufs 2 yes -
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr/openwin ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

I changed the c0t96d0s1 entry to point to /dev/md/dsk/d1 and /dev/md/rdsk/d1, and that's it.

This tells SVM it's OK to boot up even if only half of the metadbs are available:

# echo 'set md_mirrored_root_flag=1' >> /etc/system

# metainit -f d13 1 1 c0t96d0s3
d13: Concat/Stripe is setup
# metainit d23 1 1 c0t98d0s3
d23: Concat/Stripe is setup
# metainit d3 -m d13
d3: Mirror is setup
# metainit -f d14 1 1 c0t96d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c0t98d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup
# metainit -f d15 1 1 c0t96d0s5
d15: Concat/Stripe is setup
# metainit d25 1 1 c0t98d0s5
d25: Concat/Stripe is setup
# metainit d5 -m d15
d5: Mirror is setup
# metainit -f d16 1 1 c0t96d0s6
d16: Concat/Stripe is setup
# metainit d26 1 1 c0t98d0s6
d26: Concat/Stripe is setup
# metainit d6 -m d16
d6: Mirror is setup

Edit all other entries in /etc/vfstab like we did above.

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d4 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d6 /dev/md/rdsk/d6 /usr ufs 1 no -
/dev/md/dsk/d3 /dev/md/rdsk/d3 /var ufs 1 no -
/dev/dsk/c0t97d0s7 /dev/rdsk/c0t97d0s7 /export/home ufs 2 yes -
/dev/md/dsk/d5 /dev/md/rdsk/d5 /opt ufs 2 yes -
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr/openwin ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

# sync; sync; reboot

After it reboots, attach the other submirrors to d1, d3, d4, d5, and d6.

# metattach d1 d21
d1: submirror d21 is attached
# metattach d3 d23
d3: submirror d23 is attached
# metattach d4 d24
d4: submirror d24 is attached
# metattach d5 d25
d5: submirror d25 is attached
# metattach d6 d26
d6: submirror d26 is attached

# metastat | grep progress
Resync in progress: 0 % done
Resync in progress: 1 % done
Resync in progress: 16 % done
Resync in progress: 24 % done
Resync in progress: 18 % done

Solaris Volume Manager Cheatsheet

Solaris Volume Manager

To verify that you have DiskSuite 3.0 – 4.2:

# pkginfo -l SUNWmd
To verify that you have DiskSuite 4.2.1 or SVM:

# pkginfo -l SUNWmdu
Where is it?
SDS 4.0 -> 2.5.1 Server Pack, INTRANET EXTENSION CD-ROM
Not customer-downloadable.

SDS 4.1 -> 2.6 Server Pack, INTRANET EXTENSION CD-ROM
Not customer-downloadable.

SDS 4.2 -> 2.7 Server Pack, EASY ACCESS SERVER 2.0/3.0 CD-ROM
NOTE: This CD is in its own little cardboard box…
Not customer-downloadable.

SDS 4.2.1 -> Solaris 8, "Solaris Software" CD 2 of 2,
under the EA (easy access) directory
See also: Storage Download Center

SVM -> Solaris 9 or later, part of the End User install cluster

Packages To Install

SDS 4.0, 4.1, and 4.2:

SUNWmd (required) Base Product
SUNWmdg (optional) GUI package
SUNWmdn (optional) SNMP log daemon package

SDS 4.2.1

SUNWmdu Solstice DiskSuite Commands
SUNWmdr Solstice DiskSuite Drivers
SUNWmdx Solstice DiskSuite Drivers(64-bit)
SUNWmdg (optional) Solstice DiskSuite Tool
SUNWmdnr (optional) Solstice DiskSuite Log Daemon Configuration Files
SUNWmdnu (optional) Solstice DiskSuite Log Daemon

The following packages are EarlyAccess software only and aren’t
(yet) supported. These give the user the ability to try out the new
Logical Volume Manager GUI inside SMC (Solaris Management Console).

SUNWlvma (really optional) Solaris Volume Management API’s
SUNWlvmg (really optional) Solaris Volume Management Application
SUNWlvmr (really optional) Solaris Volume Management (root)

NOTE: Later versions of Solaris 8 have a few additional packages for
DiskSuite, namely, 3 packages starting with SUNWlvm*. These do
not have to be loaded. They are beta-test packages for the new
Logical Volume Manager which will premier with Solaris 9.

SVM – Solaris 9 only

SUNWmdr Solaris Volume Manager, (Root)
SUNWmdu Solaris Volume Manager, (Usr)
SUNWmdx Solaris Volume Manager Drivers, (64-bit)
SUNWlvma Solaris Volume Management API’s [GUI]
SUNWlvmg Solaris Volume Management Application [GUI]
SUNWlvmr Solaris Volume Management (root) [GUI]

To run the Solaris Volume Manager GUI, you must load the SMC 2.0 packages.

Output To Get From Customers
1.) Ask, if appropriate, if the root disk is under SDS/SVM control.
2.) # metadb
3.) # metastat