Solaris pocket admin – special hardware configs

"Special" Hardware Config


Sun V440 Built-in RAID Controller


raidctl # display current RAID config
raidctl -c … # create a mirror pair

There are some postings about issues with creating more than one mirror pair…
It can probably only do RAID 1+0.
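
A minimal sketch of creating the mirror, assuming the internal disks enumerate as c1t0d0 and c1t1d0 (typical on a V440; the target names are placeholders, substitute your own):

raidctl -c c1t0d0 c1t1d0 # mirror c1t0d0 onto c1t1d0 (contents of c1t1d0 are destroyed)
raidctl # verify the new volume shows up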



Sun T3 Disk Array (T3b)



Commands for Sun T3+ (aka T3B) array.

Monitoring tasks:

vol list # list configured volumes
fru stat # display status of components
sys list # list general system config, cache info, etc.

refresh -s # check battery refresh (recharge) status

lpc version # list loop card firmware version
port list # show host port configuration
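
A couple more monitoring commands that should exist on T3+ firmware (from memory; run help on the array for the full list):

vol stat # show per-disk status within each volume
fru list # list FRU vendor/serial/revision info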

--------------------

System setup commands:

set ip # set the array IP address
set gateway 10.215.2.2
set netmask 255.255.255.0

set hostname t3arrayname

passwd # change password (default login is root with a blank password)

set timezone US/Pacific # or use tzset with a GMT offset:
tzset -0800
tzset # redisplay current setting

date # show system date
date 04060915 # set date and time to Apr 6, 9:15 am (same format as Solaris)

sys # general array info
reset # reboot the array (re-reads ip settings, etc.)

ver # show controller firmware level

Array configuration commands:

vol unmount v0 # unmount the preconfigured RAID 5 volume
vol remove v0 # then remove it

Target layout:
disks 1-6, stripe + mirror (RAID 1 in T3+ across 2n disks, n>1, automatically becomes stripe + mirror)
disks 7-8, mirror
disk 9, hot spare

vol add v0 data u1d1-6 raid 1 standby u1d9 # controller 1, disks 1 to 6
vol add v1 data u1d7-8 raid 1 standby u1d9
vol init v0 data; vol init v1 data # chain commands to parallelize the init tasks
vol mount v0; vol mount v1

Standard commands that also work on the T3B:
cd
pwd
ls -l

Files of interest:
/etc/ # config directory
syslog # message log


Sun StorEdge Component Manager is software that can be installed on a host to manage the T3/T3+ array.
But I didn't install it; I configured the array via the telnet/serial login CLI.




A1000 Disk Array


Raid Manager (RM6) is used to control the A1000 (hardware RAID array) and D1000 (JBOD) boxen.
These are pretty old by now; they were popular during the dot-bomb days circa Y2K.
As old as the D1000 is, it will take drives up to 144 GB in size.

D1000 system handbook
Sun login required now :(



RM6 commands

Packages are SUNWosa*; they install with binary links in /etc/raid/bin/.
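
To check that the packages actually landed (standard Solaris):

pkginfo | grep -i osa # list installed SUNWosa* packages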

/etc/raid/bin/rm6 : main GUI for config and status checks, etc.

raidutil -c c2t5d0 -i : get info about the RAID device, such as firmware version, etc.

nvutil -vf : verify NVSRAM is set correctly for the A1000.

raidutil -c {c2t5d0} -B : display battery age
raidutil -c {c2t5d0} -R : reset battery-replacement date
See the Recovery Guru info on replacing the battery. The array needs to be powered off for this to happen.
After changing the battery, the -R command above resets the remembered date on the controller
so that it knows it can use the battery for 2 years from the date of reset.
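
The whole sequence, as a hedged sketch (c2t5d0 is just the example controller name used above):

raidutil -c c2t5d0 -B # check how old the controller thinks the battery is
# power off the array and replace the battery (see Recovery Guru), then power on
raidutil -c c2t5d0 -R # reset the battery date so the 2-year clock starts today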


Other Frequently Used RM6 commands


drivutil
fwutil
healthck
lad
logutil
nvutil
parityck
raidutil
rdacutil
rm6
storutil

You'll need to formally fail a disk before you replace it in case of
failure. Use raidutil for that (per the CLI reference below, drivutil can also fail/revive a drive).
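
A hedged sketch of a routine status sweep (lad takes no arguments; the healthck -a flag is from memory, so verify it against the man page):

/etc/raid/bin/lad # list attached array controllers and LUNs
/etc/raid/bin/healthck -a # health check on all RAID modules
/etc/raid/bin/raidutil -c c2t5d0 -i # controller/firmware info for one array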


RM6 details from user guide


(from a Sun PDF doc, p. 170, CLI reference)

Basic Information


rm6 Gives an overview of the software's graphical user interface (GUI), command-line
programs, background process programs and driver modules, and customizable
elements.

rdac Describes the software’s support for RDAC (Redundant Disk Array Controller),
including details on any applicable drivers and daemons.
rmevent The RAID Event File Format. This is the file format used by the applications to
dispatch an event to the rmscript notification script. It also is the format for
Message Log’s log file (the default is rmlog.log).

raidcode.txt A text file containing information about the various RAID events and error codes.

Command-Line Utilities

drivutil The drive/LUN utility. This program manages drives/LUNs. It allows you to
obtain drive/LUN information, revive a LUN, fail/revive a drive, and obtain LUN
reconstruction progress.

fwutil The controller firmware download utility. This program downloads appware,
bootware, or an NVSRAM file to a specified controller.

healthck The health check utility. This program performs a health check on the indicated
RAID module and displays a report to standard output.

lad The list array devices utility. This program identifies the RAID controllers and
logical units that are connected to the system.

logutil The log format utility. This program formats the error log file and displays a
formatted version to the standard output.

nvutil The NVSRAM display/modification utility. This program views and changes RAID
controller non-volatile RAM settings, allowing for some customization of controller
behavior. It verifies and fixes any NVSRAM settings that are not compatible with
the storage management software.

parityck The parity check/repair utility. This program checks and, if necessary, repairs the
parity information stored on the array.

raidutil The RAID configuration utility. This program is the command-line counterpart to
the graphical Configuration application. It allows you to create and delete RAID
logical units and hot spares from a command line or script. It also allows certain
battery management functions to be performed on one controller at a time.

rdacutil The redundant disk array controller management utility. This program permits
certain redundant controller operations, such as LUN load balancing and controller
failover and restoration, to be performed from a command line or script.

storutil The host store utility. This program performs certain operations on a region of the
controller called host store. You can use this utility to set an independent controller
configuration, change RAID module names, and clear information in the host store
region.


Background Process Programs and Driver Modules

arraymon The array monitor background process. The array monitor watches for the
occurrence of exception conditions in the array and provides administrator
notification when they occur.

rdaemon
(UNIX only)
The redundant I/O path error resolution daemon. The rdaemon receives and
reacts to redundant controller exception events and participates in the
application-transparent recovery of those events through error analysis and,
if necessary, controller failover.

rdriver
(Solaris only)
The redundant I/O path routing driver. The rdriver module works in
cooperation with rdaemon in handling the transparent recovery of I/O path
failures. It routes I/Os down the proper path and communicates with the rdaemon
about errors and their resolution.


Customizable Elements

rmparams The storage management software's parameter file. This ASCII file has a number of
parameter settings, such as the array monitor poll interval, what time to perform
the daily array parity check, and so on. The storage management applications read
this file at startup or at other selected times during their execution. A subset of the
parameters in the rmparams file are changeable under the graphical user interface.
For more information about the rmparams file, see the Sun StorEdge RAID Manager
Installation and Support Guide.

rmscript The notification script. This script is called by the array monitor and other
programs whenever an important event is reported. The file has certain standard
actions, including posting the event to the message log (rmlog.log), sending
email to the superuser/administrator and, in some cases, sending an SNMP trap.
Although you can edit the rmscript file, be sure that you do not disturb any of
the standard actions.

----

On the A1000 (at least the one attached to sonata, then moved to perseus), the
SCSI controller is DIFF; SE doesn't work. Per An, DIFF here is high-voltage
differential (HVD), as opposed to single-ended (SE) or low-voltage differential (LVD).
Thus, the A1000 controller is high-voltage differential.
If connected to an SE bus, the SCSI bus light blinks on the A1000, and no disk/array
will be seen by the host.

Install/upgrading firmware of A1000



IMHO, this is quite a nightmarish exercise. Lots of steps and if-conditions
about what to do, spread across about 3 huge HTML pages.
The Solaris cluster patch will not cover this at all.

Install RM6 (old software, circa 2002; version 6.22.1 was the last one).
Get patches for the OS; most are in the cluster patch now.

patchadd -M . 112126-06
# patchadd -M . 113277-04 113033-03 # these 2 seem to be added by the cluster patch
# 113033-03 is only for SBus HBAs
init S; patchadd 112233-04; touch /reconfigure; reboot
# 112233 seems to have a later version in the latest cluster patch.
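
To confirm a given patch revision actually installed (standard Solaris):

showrev -p | grep 112233 # shows the installed revision, if any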

Run rm6, select the controller on the array, and go to firmware. After all the warnings,
it provides the list of firmware images that came with RM6, ready for download to the array controller.
Upgrade them in sequence to avoid unsupported firmware-jump problems.




It is possible to change a group from RAID 10 to RAID 5 while the disks are online with the file system active.
The extra space gained can be used to create an extra LUN.
But RM6 (on the A1000) does not support LUN expansion, so to create a single LUN
with all the disk space of the RAID 5 group, you still need to remove the LUN and recreate it.
This of course means taking the file system offline.
RM6 warns that the OS communicates with the array expecting to see a LUN 0, that problems can arise
when there is no LUN 0, and that you should recreate it right away.
So far, no problem. Maybe avoid using format and other disk-poking tools
while there is no LUN 0.


RAID storage array

luxadm inquiry /dev/rdsk/c?t*s2 # get disk array firmware rev.




StorEdge 3510

The StorEdge 3510 is a 2U box with 12 disks and lots of FC ports in back.
Popular circa 2005.

The serial console is set at 38400 bps.
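
A minimal way to reach that console from the host, assuming the cable is on the host's serial port A (/dev/term/a):

tip -38400 /dev/term/a # standard Solaris tip; type ~. to disconnect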


IP config:
Software control is via FC port, using the Configuration Service Console:
/opt/SUNWsscs/sscsconsole/sscs (GUI)

2 controllers, primary (top) and secondary (bottom).

Each controller has these ports:
Phy Ch 0 (FC) – PID 40 SID N/A – Host
Phy Ch 1 (FC) – PID N/A SID 42 – Host
Phy Ch 2 (FC) – PID 14 SID 15 – Drive (daisy chain to other drive?)
Phy Ch 3 (FC) – PID 14 SID 15 – Drive (daisy chain to other drive?)
Phy Ch 4 (FC) – PID 44 SID N/A – Host
Phy Ch 5 (FC) – PID N/A SID 46 – Host

Max host connectivity:
– 4 hosts, w/ dual path (one to each controller?)
– 8 hosts, w/ single path (is this really supported?)

An LD/LV (Logical Drive/Logical Volume) is created;
then, inside the LD, partitions are created.
The partitions are presented to the host as LUNs.

"Zoning" is really mapping a given partition/LUN to a specific port/channel,
so that only the host connected to that channel can see the partition/LUN.
Path redundancy can be obtained
(? by connecting to a different controller on a different port/channel).

Presumably, multiple LDs/LVs can be configured on a single StorEdge array.
Think of an LD/LV as a RAID group in an EMC Clariion.
A specific LD/LV has a single RAID level and spans a certain number of disks.

The SE3510 allows global standby/hot-spare disks that can serve multiple LDs/LVs.

Leave *AT LEAST ONE* partition/LUN mapped to the controller host,
or else the host will lose the ability to talk to the array via FC.
The only choice after that is to re-add the mapping through the serial console.

Sample init config:
1. Hook up the host to the SE via FC.
2. On the host, run sscs. Let it probe for the array and take over control as the primary config host.
3. Click "Custom Config" (menu Configuration | Custom Configure).
4. Create a new LD/LV. This will take long to finish, as it needs to zero all the disks.
5. Seems like, by default, a single partition/LUN is created that spans all the space available in the LD/LV. This is usable by the host.
6. Use Custom Config to change the partition/LUN config; this is fast.
7. Bind the partition/LUN to a specific port so that the host can access it.
8. The SE doesn't really have a concept of "empty space for growth" inside the LD/LV,
so leftover space is assigned to a partition, which can be left unmapped
to any host. The confusion remains that you must check that it is not in use,
since it is not marked as free space.
?? Redundant path config?
Somehow, even when binding a partition/LUN to a single port/host, redundant paths/disks
are seen by the host.
Seems like only one controller is being seen/configured at a time??

LD/LV can be grown dynamically (and reconfigured).

Use the Custom Config button to see all the tasks that can be done on an LD/LV,
such as partition/LUN creation, channel/port binding (for the host to see), etc.
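
For reference, the array also has a host-side CLI, sccli (package SUNWsccli), besides the sscs GUI used above. A hedged sketch of read-only status checks, with the device path being only an example (verify the subcommands against the sccli man page):

sccli /dev/rdsk/c2t0d0s2 show configuration # overall array config
sccli /dev/rdsk/c2t0d0s2 show logical-drives # LD state and sizes
sccli /dev/rdsk/c2t0d0s2 show channels # channel/port setup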



