WWN identification in Solaris

The Storage Area Network (SAN) foundation software must be installed for this method to work. If you already have a device attached to the HBA, you can get the WWN using ‘luxadm’ commands. The steps, with an example, are as follows:

1) Use luxadm to display the path:

# luxadm -e port

Found path to 1 HBA ports

/devices/pci@4,4000/SUNW,qlc@4/fp@0,0:devctl CONNECTED

NOTE: If the path does not report CONNECTED, it means that no devices are attached to the HBA. Therefore, the following ‘luxadm dump_map’ command will not report the WWN for the HBA.

2) Using the path shown in the previous command, use ‘luxadm’ to report the WWN:

# luxadm -e dump_map /devices/pci@4,4000/SUNW,qlc@4/fp@0,0:devctl

Pos  AL_PA  ID  Hard_Addr  Port WWN          Node WWN          Type
0    2      7c  0          210000e08b05b4c1  200000e08b05b4c1  0x1f (Unknown Type)
1    d9     8   d9         50020f230000a81b  50020f200000a81b  0x0  (Disk device)
2    1      7d  0          210000e08b052b82  200000e08b052b82  0x1f (Unknown Type, Host Bus Adapter)

Note: The HBA is the device notated as "(Unknown Type, Host Bus Adapter)." The Port and Node WWN are reported. The Port WWN is the number to use for LUN-masking.

In this example, the WWN of the HBA is 210000e08b052b82.
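The two steps above can be scripted. A minimal sketch that greps the HBA row out of dump_map output; the canned sample below reproduces the example output (with the wrapped Type field joined onto one line), and on a real system you would pipe `luxadm -e dump_map <path>` in instead:

```shell
# Canned `luxadm -e dump_map` output, for illustration only.
luxadm_output='0 2 7c 0 210000e08b05b4c1 200000e08b05b4c1 0x1f (Unknown Type)
1 d9 8 d9 50020f230000a81b 50020f200000a81b 0x0 (Disk device)
2 1 7d 0 210000e08b052b82 200000e08b052b82 0x1f (Unknown Type, Host Bus Adapter)'

# The HBA row is the one flagged "Host Bus Adapter"; field 5 is the Port WWN.
hba_wwn=$(printf '%s\n' "$luxadm_output" | awk '/Host Bus Adapter/ {print $5}')
echo "$hba_wwn"
```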

Some quick wget tips

$ wget -r -np -nd http://example.com/packages/

This little gem is probably my most used variation. It will download all files in the /packages/ directory on example.com — without traversing up to parent directories (-np), and without recreating the directory structure on your machine (-nd).

$ wget -r -np -nd --accept=iso http://example.com/centos-5/i386/

Adding the --accept argument with a comma-separated list of file extensions will grab only those files ending in one of the specified extensions.

Another way to grab just the files you want:

$ wget -i filename.txt

Put all the desired urls in filename.txt and run wget against it to download a list of files automatically.

On a bad connection?

$ wget -c http://example.com/really-big-file.iso

The “-c” option tells wget to resume a partially completed download rather than start over. Combine it with “-t 0” for unlimited retries on a truly bad connection.

$ wget -m -k (-H) http://www.example.com/

Mirror a site, converting its links to work locally, so that you can move the site to another server. Use the ‘-H’ option if images are loaded from another site.
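For the -i variant, the list file is just one URL per line. A sketch that builds such a list and then feeds it to wget; example.com and the file names are made-up placeholders:

```shell
# Build a download list for `wget -i` from a base URL plus file names.
base="http://example.com/packages"
list="filename.txt"
: > "$list"                       # start with an empty list file
for f in foo-1.0.tar.gz bar-2.1.tar.gz; do
  echo "$base/$f" >> "$list"
done
cat "$list"
# Then fetch everything (needs network access, so left commented here):
# wget -c -i "$list"
```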

Process Memory Usage


The /usr/proc/bin/pmap command is available in Solaris 2.6 and above. It can help pin down which process is the memory hog. /usr/proc/bin/pmap -x PID prints out details of memory use by a process.

Summary statistics regarding process size can be found in the RSS column of ps -ly or top.
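To find the hog quickly, the RSS column can be sorted directly. A small sketch; these ps keywords are supported by both the Solaris /usr/bin/ps and the Linux ps, though column widths may differ:

```shell
# List the top 5 memory consumers by resident set size (RSS, in KB),
# largest first, with PID and command name for each.
ps -eo rss,pid,comm | sort -rn | head -5
```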

dbx, the debugging utility in the SunPro package, has extensive memory leak detection built in. The source code will need to be compiled with the -g flag by the appropriate SunPro compiler.

ipcs -mb shows memory statistics for shared memory. This may be useful when attempting to size memory to fit expected traffic.

Where does my USB key mount on my sunray

bash-3.00$ utdiskadm -h
usage:
utdiskadm -h
utdiskadm { -l | -s } [-a]
utdiskadm { -c | -e | -r } device_name
utdiskadm -m partition_name [ -p mount_path ]
utdiskadm -u mount_point
Options:
-c # check device for presence of media
-e # eject media from removable media devices
-h # print this help message
-l # list devices on current session
-l -a # list all devices on system (root only)
-m # mount partition_name on default mount point
-p # (with -m) mount partition_name on given mount_path
-r # prepare device for removal
-s # list stale mount points
-s -a # list all stale mount points on system (root only)
-u # unmount mount_point
bash-3.00$
bash-3.00$ utdiskadm -l
Device Partition Mount Path
------  ---------  ----------
disk5 disk5s2 /tmp/SUNWut/mnt/jc222222/udisk20x
bash-3.00$

How to determine UFS fragmentation

To see the amount of fragmentation on a mounted file system, run the fsck
command with the -n option. Following is an example:

# fsck -n /dev/rdsk/c0t0d0s0
** /dev/rdsk/c0t0d0s0 (NO WRITE)
** Currently Mounted on /
** Phase 1 – Check Blocks and Sizes
** Phase 2 – Check Pathnames
** Phase 3 – Check Connectivity
** Phase 4 – Check Reference Counts
** Phase 5 – Check Cyl groups

433904 files, 1745945 used, 216021 free (216021 frags, 0 blocks, 11.0%
fragmentation)

You need to defragment your filesystem in the following circumstances:

- If the fragmentation is greater than 5%.
- If you are seeing issues where df -k indicates free space but you are
getting filesystem-full errors.
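The 5% check can be automated by parsing the fsck summary line. A sketch using the canned summary from the example above; on a live system you would capture the real fsck -n output instead:

```shell
# Canned fsck summary line, for illustration only.
summary='433904 files, 1745945 used, 216021 free (216021 frags, 0 blocks, 11.0% fragmentation)'

# Pull out the fragmentation percentage.
pct=$(printf '%s\n' "$summary" | sed -n 's/.* \([0-9.]*\)% fragmentation.*/\1/p')
echo "fragmentation: ${pct}%"

# Integer compare on the whole-number part; 11 > 5 here, so defrag is advised.
if [ "${pct%.*}" -gt 5 ]; then
  echo "defragmentation recommended"
fi
```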

Follow this procedure to defragment a file system:

1. Unmount the file system.
2. Perform the ufsdump command to back up the data.
3. Use the newfs command against the file system. This destroys all
existing information in the file system.
4. Restore the data.

Setting up syslog for forwarding

The default Solaris syslogd daemon is a primitive (old-style) daemon controlled by /etc/syslog.conf. It can send messages to and receive messages from remote hosts, but the level of control is very primitive (remote logs will be mixed with the logs from the LOGHOST server itself). syslog-ng is probably a step forward and can be used with Solaris.

The default Solaris syslog daemon uses port 514 (UDP) for forwarding messages to a centralized log host. There is no acknowledgement, and messages are lost if the LOGHOST server is down, if its syslogd daemon is down, or if there is a firewall that filters out this port on the route to the LOGHOST. In Solaris this is implemented using the STREAMS log driver.

The simplest remote syslog configuration is an unencrypted one. To activate such remote syslogging on Solaris you need:

* Edit /etc/hosts (/etc/inet/hosts) and add one or more lines defining the IP address of the loghost (the name is arbitrary, and if several remote hosts are defined you can use any names you wish)
* Edit the syslog configuration file (/etc/syslog.conf) on each server you need to collect logs from. To send everything to the remote syslog server, add this line to syslog.conf:

*.* @loghost

After editing file you need to restart syslogd:

# pkill -HUP syslogd

or

#/etc/rc2.d/S74syslog stop ; /etc/rc2.d/S74syslog start

You can use a full DNS name like @logbase.mycompany.com. In this case you do not need to touch the hosts file, but remote logging becomes dependent on DNS availability.

Solaris uses the M4 macro processor to preprocess syslog.conf, so you can use the full power of M4 to make the configuration more flexible. The example in the standard syslogd configuration is a pretty weak one: it checks whether the LOGHOST environment variable is defined (it is only if the IP address of the server is defined as loghost in /etc/hosts). The ifdef(`LOGHOST', truefield, falsefield) macro selects between two variants of log forwarding accordingly. This permits any server to be used as the loghost. Nice, but a pretty useless trick in practice, as the LOGHOST server generally needs to be a specialized, more secure server.
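The stock Solaris syslog.conf illustrates this ifdef() pattern; a representative fragment (check your own /etc/syslog.conf for the exact selectors, and note that in the real file the separator between selector and action must be a TAB):

```
# Log locally when this host is the loghost, otherwise forward.
*.err;kern.debug;daemon.notice;mail.crit	ifdef(`LOGHOST', /var/adm/messages, @loghost)
```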

You can also replace syslogd with syslog-ng. The latter lets you filter and route logs. The ability of syslog-ng to filter messages of various types makes it possible to distribute specific message types to the people responsible for a given application or function. That makes the environment closer to a VM/CMS system ;-). It is also possible to invoke a script on receipt of specific log messages, and if you use Perl the logic can be pretty sophisticated.

Creating a loghost for your servers can be invaluable both for troubleshooting and in the event of a security breach. If you are part of a cluster, make sure your machine logs to another loghost as well as itself. If someone breaks in and erases your log files, and you have only been logging locally, you will have no chance of figuring out where they came from or what they did. Intruders commonly use scripts to erase their presence from the log files. By sending your logs to another more secure machine, you have a better chance of at least tracking the intruder’s activities.

Security precautions to take:

* Make sure the time is correct on your system. Otherwise you will have trouble tracing problems and break-ins.
* System logs are generally kept in the /var partition, mainly /var/adm. Make sure that /var is large enough to hold much more than the basic log file. This is to prevent accidental overflows, which could potentially erase important logging info.
* The default syslog.conf does not do a very good job of logging. Try changing the entry for /var/adm/messages to:

*.info /var/adm/messages

If you start getting too much of a certain message (say from sendmail), you can always bump that particular facility down by doing

*.info;mail.none /var/adm/messages

* To enable more logging of logins, create the "loginlog":

touch /var/adm/loginlog

chmod 600 /var/adm/loginlog

chgrp sys /var/adm/loginlog

* Enable ssh and tcpwrappers that log important events to syslogd.

Additional security precautions for LOGHOST:

* In order to create a loghost, pick one machine and secure it as much as possible. Basically, don’t run anything on this machine besides syslogd. Turn off inetd, sendmail, everything, but make sure you have basic networking up. Possibly don’t even run ssh on this machine. That way, the only way to view the log files on the loghost would be to physically log into the console.
* Make sure the time is always correct on the loghost.
* In order to allow the loghost to receive syslog messages from other machines, you may need to enable remote reception. (Find out first by reading the syslogd man page: the Solaris syslogd accepts remote messages by default and turns reception off with -t, while the Linux sysklogd requires -r to listen.) On a system that needs the flag, edit the syslog startup script (/etc/rc2.d/S74syslog) and find the line:

/usr/sbin/syslogd

and change it to:

/usr/sbin/syslogd -r

* Again, make sure the loghost is as secure as possible: the only thing it should be running is syslogd.

Protocol concerns

* No congestion control with UDP, allowing denial-of-service attacks (Contra argument: "Simple protocol")
* UDP packets are easily spoofed. (Contra: "No more easily than TCP, aside from the sequence number.")
* No authentication – no way for a loghost to reject messages received.
* No confidentiality of message data. (Contra: "Use IPSEC" if you are really concerned about security.)
* Vulnerable to man-in-the-middle attacks — packets may be altered undetectably in transit. (Contra: "Use IPSEC.")
* Vulnerable to spoofing/chaffing — bogus messages can be sent to the logging host. (Contra: "Use IPSEC.")
* Gaps in the message sequence due to lost packets can’t be detected by the receiver.

Implementation woes

* Vulnerable to successful intruders: you need inviolability of the log itself. Contra argument: you first need to make logs available to your personnel, who most often do not read them at all. Also, the loghost server can easily be secured more tightly to avoid common break-ins.
* Timestamp is provided at the receiver, not the originator; forwarding often adds an observable delay, potentially causing misinterpretation.
* Limited information on the originating facility and severity level (priority) of a message.
* Lack of flexibility in the use of facility codes among systems and applications, making it difficult to write filter scripts to detect security events.

Protocol gripes

1. Firewalls & proxies: need UDP port 514 proxy
2. Unreliable: No guarantee that a syslog packet will be received, and no facilities for retransmission (Contra: "Simple protocol, usually not needed on a typical LAN")
3. Better timestamping: It should be a requirement that the timestamps specified in the protocol (both by the client, in the packet, and by the server, when the message is logged) be recorded in the logfile.
4. Problems with log message formatting & structure: "The structure of messages is really a mess."
* No standard way of formatting of message text and separating various arguments of the log message
* Priority/facility are encoded in a not particularly human-readable way, but then sent in text form. "This has the worst features of binary protocols (not human-readable) and text protocols (inefficient use of bandwidth – and just to make this one worse, the first byte of every packet is constant!)"
5. Standardisation: syslog is not standardised, which makes it more difficult than it should be to produce interoperable implementations. (Contra: "Syslog is currently interoperable and actually more or less standardized between all flavours of Unix though there are some individual variations.")

Implementation woes

* Repeat notification: the ‘"…repeated N times"’ line doesn’t show up until a different line comes along, which is a pain if you’re watching the log.
* Problems with multiple syslog.conf lines that name the same file.
* Problems with spraying the LAN when the log server becomes unavailable or its IP address changes.
* No ability to define daemon’s fsync() behavior.
* No standardized logging directory (/var/log, /var/adm, …).
* No forwarding for sulog.

Syslog is still the most widespread and usable tool for event logging, especially because it is a built-in, low-cost protocol that can easily be added to network and embedded devices. Strategies for cautious use include:

* Use physically separate out-of-band networks for all management data including syslog.
* Syslog configuration can use non-IP interfaces between log hosts, e.g. serial links, to keep the destination loghost out of the ARP tables of a compromised access host.
* Firewall filtering, route authentication, VLAN assignments, etc. to limit udp/514 syslog, can isolate syslog regions within a network as well as inside from outside.
* Syslog servers can be provided with CPU and disk capacity far exceeding normal needs.
* Syslog daemon code can be altered for improved filtering and handling of application-specific messages. (This has been widespread in the UNIX and Open Source community, with versions of syslog() and syslogd sometimes distributed with other applications — e.g. INN.)

Basic security enhancements

* Out-of-band management network can be used for logging.
* Non-IP links can limit visibility of syslog forwarding path.
* IP access list for syslogd can permit receiver to drop all packets not claiming to be from a valid source address.
* Check file permissions for all logfiles and syslog.conf.

Tunable Kernel Parameters

The following table lists the general kernel parameters that can be tuned.

Parameter Description

——— ———–

dump_cnt Size of the dump

autoup Used in struct var for dynamic configuration of the
age that a delayed-write buffer must be, in seconds,
before bdflush will write it out (default = 60)

bufhwm Used in struct var for v_bufhwm; it’s the high water
mark for buffer cache memory usage, in Kbytes
(default = 2% of physical memory).

maxusers Maximum number of users
The default is number of Megabytes in physical memory minus 2.

max_nprocs Maximum number of processes (default = 10 + 16 * maxusers).

maxuprc The maximum number of user processes.
(default = max_nprocs – 5)

rstchown POSIX_CHOWN_RESTRICTED is enabled (default = 1).

ngroups_max Maximum number of groups per user
(default = 16, max = 32).
Do not raise the value of NGROUPS_MAX above 16 for systems that
need to be NFS clients (Bugid 6001110).

rlim_fd_cur Maximum number of open file descriptors per process
system wide (default = 64, max = 1024) also see limit(1).

Streams Parameters:

nautopush Number of entries in the autopush free list,
the high water mark for Streams administrative
devices (sads). One autopush entry is needed for each
entry in the /etc/iu.ap file and any others that may
be configured by using the autopush command directly,
or by opening the sad device and using the SAD_SAP
ioctl(). If you run out of autopush entries, the ioctl()
will return -1 and set errno to ENOSR (out of streams
resources).

sadcnt Number of concurrent opens allowed of both
/dev/sad/user and /dev/sad/admin (default 16).

Once all are in use, open() will return -1 and
set errno to ENXIO (No such device or address).

Pseudo Devices: (Needs reboot -r to take effect)

npty Number of 4.X pseudo-ttys configured (default = 48).

The device entries are /dev/pty*

pt_cnt Number of 5.X pseudo-ttys configured (default = 48).
(keep below 3000)

The device entries are /dev/pts/*

Paging Parameters:

physmem Sets the number of pages usable in physical memory.

Only use this for testing; it reduces the size of usable
memory.

minfree Memory threshold which determines when to start swapping
processes, when free memory falls to this level swapping
begins (default:
2.5.1 and later : desfree / 2
2.5 : desfree / 2 at most 100k bytes (sun4d 200k bytes)
2.4 : 4d = 50 pages, all others 25 pages
2.3 : physmem / 64 ).

desfree This is the desired free level, this determines when
paging is abandoned for swapping. When free memory stays
below this level for 30 seconds, swapping kicks in (default:
2.5.1 and later : lotsfree / 2
2.5 : physmem / 64 at most 200k bytes (sun4d 400k bytes)
2.4 : 4d = 100 pages, all others 50 pages,
2.3 : physmem / 32 ).

lotsfree Memory threshold which determines when to start paging.
When free memory falls below this level paging begins
(default:
2.5.1 and later : physmem / 64 or at least 512k bytes worth
of pages
2.5 : physmem / 32 at most 512k bytes (sun4d 1024k bytes)
2.4 : 4d = 256 pages all others 128 pages
2.3 : physmem / 16 ).

fastscan The number of pages scanned per second when free memory
is zero, the scan rate increases as free memory falls
from lotsfree to zero, reaching fastscan (default:
2.5 and later : physmem / 2 with 64Mb of memory being max
2.4 : physmem / 4 with 64Mb of memory being max
2.3 : physmem / 2 ).

slowscan The number of pages scanned per second when free memory
is equal to lotsfree, also see fastscan (default:
2.5 and later : fastscan / 10 but not to exceed 100
2.4 : is fixed at 100,
2.3 : fastscan / 10 ).

handspreadpages The distance between the fronthand and backhand in
the clock algorithm. The larger the number the longer an
idle page can stay in memory (default:
2.5 and later : fastscan
2.4 : physmem / 4
2.3 : physmem / 2 ).

maxpgio The maximum number of page-out I/O operations per second.
This acts as a throttle for the page daemon to prevent
page thrashing ((DISKRPM * 2) /3 ).

t_gpgslo 2.1 through 2.3, Used to set the threshold on when to
swap out processes (default 25 pages ).

File System Parameters:

ufs_ninode Maximum number of inodes.
(default = max_nprocs + 16 + maxusers + 64)

ndquot Number of disk quota structures.
(default = (maxusers * NMOUNT / 4) + max_nprocs)

ncsize Number of dnlc entries.
(default = max_nprocs + 16 + maxusers + 64);
dnlc is the directory-name lookup cache

TCP/IP Parameters:

arptab_size Size of arp table

ipcnt Size of ip_pcb

ipprovcnt Size of provider

loopcnt Size of loop_pcb

ntcp Number of tcp devices, tcp_dev

nudp Number of udp devices, udp_dev
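The tunables above are set in /etc/system and take effect after a reboot (the pseudo-device changes need reboot -- -r, as noted). A minimal sketch; the values shown are arbitrary examples for illustration, not recommendations:

```
* /etc/system fragment -- comment lines start with "*".
* Example values only; size these for your own workload.
set maxusers = 128
set rlim_fd_cur = 256
set pt_cnt = 256
```

Keep a backup copy of /etc/system before editing, and check the effective values on the running system (e.g. with sysdef -i) after reboot.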

Packages and Patches Tips and Tricks Crib Sheet


What is a patch?
A patch is composed of one or more updated packages that replace packages that are currently installed on the system.

Where are patches stored by default?
# /var/sadm/patch

**Most customers do not read the Install.info file**
**Many do not read the SPECIAL_INSTRUCTIONS file**

What is a package?
A package is a collection of files and directories that make up a particular function or application.

Where are packages stored?
# /var/sadm/pkg

Commands to query the Patch/package map (database)
# ls -la /var/sadm/patch | grep
# showrev -p | grep
# patchadd -p | grep

Command to find out where a command is physically located and what package it is part of:
# grep /var/sadm/install/contents

# pkgchk -l -p
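As a sketch of the contents-database lookup, here is a hypothetical whatpkg helper run against a fabricated two-line contents file; on a real system you would point it at /var/sadm/install/contents:

```shell
# Fabricated /var/sadm/install/contents-style data, for illustration only.
contents=$(mktemp)
cat > "$contents" <<'EOF'
/usr/bin/ls f none 0555 root bin 18844 3990 845112 SUNWcsu
/usr/bin/vi f none 0555 root bin 213492 39292 845112 SUNWcsu
EOF

# Given a full path, print the owning package: it is the last field
# of the matching contents entry.
whatpkg() {
  grep "^$1 " "$2" | awk '{print $NF}'
}

whatpkg /usr/bin/ls "$contents"
```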

Commands to list package details:
# pkginfo -l
# more /var/sadm/pkg/SUNWxxx/pkginfo

Installing patches:
Solaris 9 and below:
Best way to apply the cluster for a system with patches is as follows:
Please note: We recommend init 0 and boot -s to apply patches if the README requests single user application

1) Download the Recommended cluster to
# /var/tmp
2) Change permissions
# chmod 777
3) Uncompress the cluster
# zcat
4) Install the patch cluster
# cd /var/tmp; ./install_cluster

If there are any failures, please gather the following:
1) The patch cluster log file
# more /var/sadm/install_data/
2) The log files of all failed patches
# more /var/tmp//log

Installing Patches Solaris 10 with zones:

Best way to apply the cluster on a system with zones is as follows:
Please note: As always, we recommend init 0 and boot -s to apply patches if the README requests single user application

1) Download the Recommended cluster to
# /var/tmp
2) Change permissions
# chmod 777
3) Uncompress the cluster
# zcat
4) Make sure that all entries in /etc/mnttab pertaining to local zones are available
# mount
5) Boot the non-global zones (boot all zones manually, so they are accessible from the global zone prior to patch application)
# zoneadm -z zone boot
6) Install the patch cluster in the global zone
# cd /var/tmp
# ./install_cluster

With Solaris 10, I also recommend applying the zones patches prior to running install_cluster, to eliminate potential issues with zones and patchadd. The zone patches to install first, at these revisions or greater, are:
120900-04, 121133-02, 121428-03, 120235-01, then the cluster, then 122660-07.

It is also recommended to review:
http://docs.sun.com/app/docs/doc/817-1592/6mhahuorv?a=view

Command to find out what Solaris Meta-Cluster is installed:
# more /var/sadm/system/admin/CLUSTER
entire+OEM,
entire,
developer,
end user,
required/core
and now, in Solaris 10, below core: Reduced Networking Core

File to query to find out what packages are bundled with what Cluster(Cluster Table of Contents):
# /var/sadm/system/admin/.clustertoc
Note: the .clustertoc file name begins with a dot, so it is hidden.
Type ls -a to show all hidden files in a directory.

-- special thanks to John Chase for this --

Crontab Tip and Tricks Crib Sheet


The cron daemon (/usr/sbin/cron) enables authorized Solaris users to execute one or more commands or programs at a regularly scheduled date and time in the future. These collections of cron commands are referred to as cron jobs.

Commands to check if crond is running:
# ps -ef | grep cron | grep -v grep
# pgrep -fl cron

Commands to query whether cron is online in Solaris 10:
# svcs cron
# svcs -pv cron
# svcs -l cron

Log to check if cron fails upon boot in Solaris 10
/var/svc/log/system-cron:default.log

Log file to check when cron fails:
# more /var/cron/log

Command to view the contents of your crontab file:
# crontab -l
Note: only the superuser can read or edit the crontab file of another user.

The cron daemon updates its schedules at the following times:

a) at boot, when /usr/sbin/cron daemon is started by /etc/rc2.d/S75cron
b) after a user writes a `crontab -e` entry
c) when a user completes the at or batch commands with control-d
d) if root stops and restarts cron

The proper way to stop and restart cron is:

# /etc/init.d/cron stop
# /etc/init.d/cron start

In Solaris 10 you could use the following command as well:
# svcadm refresh cron
# svcadm restart cron
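For reference when editing with crontab -e: an entry consists of five time fields (minute, hour, day-of-month, month, day-of-week) followed by the command. The script path below is a made-up example:

```
# minute hour day-of-month month day-of-week command
# Run a (hypothetical) cleanup script at 02:30 every Sunday:
30 2 * * 0 /usr/local/bin/cleanup.sh > /dev/null 2>&1
```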

Check that the customer has the latest version of cron on the system, to rule out an issue caused by a bug that has already been addressed: open a sunsolve.sun.com page, enter "cron 5.x" in the Patch Description box in the top right corner, and hit Go.

Cron files and directories:
/etc/init.d/cron startup script for the cron daemon
/usr/sbin/cron cron clock daemon
/usr/sbin/crontab command manipulates crontab files
/etc/cron.d contains access and config files and the logchecker script
/etc/cron.d/logchecker cleans up the cron log
/etc/cron.d/cron.deny defines users not permitted to use cron
/etc/cron.d/at.deny defines users not permitted to use at
/etc/cron.d/at.allow defines users permitted to use at
/etc/cron.d/FIFO cron lock file
/var/spool/cron/crontabs contains the user crontab queue
/var/spool/cron/atjobs contains the user at job queue
/var/cron/log cron log
/var/cron/olog alternate cron log
/etc/default/cron cron config file
Cron documentation:

InfoDocs:

11282 Alternate ways to safely edit crontab (besides crontab -e)
21077 What does cron do about Daylight Saving Time?
3959 crontab administration and usage
24048 Troubleshooting Cron problems
21713 Why scripts that run at command line don’t run in cron jobs
20340 User’s cron jobs fail to run. Error in cron log: "!bad user"
16258 System monitoring in cron (27 Nov 1997)
18442 How to Setup Cron for Automatic Data collection from SAR
18702 How cron behaves with daylight savings
19164 How timezone strings handle daylight saving time
3035 how to set up cron to automate a ftp session

SRDBs:

21868 cron jobs fail with the message ": not found"
18219 cron won’t start- error: cannot create ancillary files
15489 reseting cron without rebooting
4449 cron aborts when it is being started
6259 cron jobs not running
6263 cron clock time drifted backwards
15854 Setting path in /etc/default/cron
4450 Cron fails to execute commands after shutdown issued
19383 "queue max run limit reached" error at batch or cron jobs
4486 cron daemon aborts (dies) after execution of a batch job.
5838 how to include .profile/.login in cron execution
20814 Cron job sources .profile error "sh: test: unknown operator a"
4760 Cron exits with error message: TERM: undefined variable.
4808 AT/batch/cron command complains about TERM when run as csh
10155 crontab -e yields "non-zero exit status from vi"
16082 rtc cron job failing, command not found
16188 Unsynchronized Client Clock Errors
20430 at: no prototype
4436 Shell works from login not crontab