Cap Memory in a Zone

From the “Oracle White Paper— Best Practices for Running Oracle Databases in Oracle Solaris Containers” http://www.oracle.com/technetwork/server-storage/solaris/solaris-oracle-db-wp-168019.pdf
Section CPU and Memory Management, page 6

Resource Capping Daemon
The resource capping daemon (rcapd) can be used to regulate the amount of physical memory consumed by projects with resource caps defined. The rcapd daemon repeatedly samples the memory utilization of projects that are configured with physical memory caps; the sampling interval is specified by the administrator. When the system's physical memory utilization exceeds the threshold for cap enforcement, and other conditions are met, the daemon takes action to reduce the memory consumption of capped projects to levels at or below their caps. Note that these physical memory caps are soft caps: a project may temporarily exceed its cap until rcapd pages it back down.
Virtual memory (swap space) can also be capped. This is a hard cap. In an Oracle Solaris Container which has a swap cap, an attempt by a process to allocate more virtual memory than is allowed will fail.
With Oracle Database it may not be appropriate to set the physical memory and swap limits, since swapping of Oracle Database processes or the System Global Area (SGA) is not desirable.
The third new memory cap is locked memory. This is the amount of physical memory that an Oracle Solaris Container can lock down, or prevent from being paged out. By default an Oracle Solaris Container now has the proc_lock_memory privilege, so it is wise to set this cap for all Oracle Solaris Containers.

You generally should not need to set this limit unless you know that some processes actually use locked memory, or you do not trust the zone's users/administrators. If you do set it, use a value smaller than the physical memory granted to the zone.

Below is an example that uses rcapd:

# zonecfg -z svr01
zonecfg:svr01> add capped-memory
zonecfg:svr01:capped-memory> set physical=4096M
zonecfg:svr01:capped-memory> set swap=6G
zonecfg:svr01:capped-memory> set locked=2G
zonecfg:svr01:capped-memory> end

# add capped-memory: limits the zone's RAM usage
# set physical: caps the zone's physical memory (RAM)
# set swap: must be greater than physical; it caps the zone's total virtual address space, not just swap-file usage
# set locked=2G: the amount of memory the zone is allowed to lock down (prevent from being paged out)
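Note that the physical cap is only enforced while rcapd is running. A quick way to enable it and watch enforcement from the global zone, assuming the zone configured above (the interval values here are just illustrations):

# rcapadm -E                   (enable the resource capping daemon)
# rcapadm -i scan=15           (sample memory usage every 15 seconds)
# rcapstat -z 5                (report per-zone cap activity every 5 seconds)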

Adding zfs datasets to an existing zone

If you add ZFS datasets to an existing zone, you can't just set a mountpoint that points inside the zone.
On the next boot ZFS tries to auto-mount the dataset; when that fails, it takes down volfs and about 30 other services.

-bash-3.2# zoneadm list -v
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 myzone running /export/home/zones/myzone native shared

so doing something like this:

-bash-3.2# zfs create mypool/my_apps
-bash-3.2# zfs create mypool/my_archive
-bash-3.2# zfs create mypool/my_data

-bash-3.2# zfs set mountpoint=/export/home/zones/myzone/root/my_apps mypool/my_apps
-bash-3.2# zfs set mountpoint=/export/home/zones/myzone/root/my_archive mypool/my_archive
-bash-3.2# zfs set mountpoint=/export/home/zones/myzone/root/my_data mypool/my_data

This will work until the next reboot; then the system will go into maintenance mode and you won't even be able to ssh in.

CORRECT WAY 😉

You set each dataset's mountpoint to legacy.

You then use the zonecfg command to add the filesystems to the zone:

-bash-3.2# zfs set mountpoint=legacy mypool/my_apps
-bash-3.2# zfs set mountpoint=legacy mypool/my_archive
-bash-3.2# zfs set mountpoint=legacy mypool/my_data

-bash-3.2# zoneadm -z myzone halt
-bash-3.2# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> set special=mypool/my_apps
zonecfg:myzone:fs> set dir=/my_apps
zonecfg:myzone:fs> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> set special=mypool/my_archive
zonecfg:myzone:fs> set dir=/my_archive
zonecfg:myzone:fs> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> set special=mypool/my_data
zonecfg:myzone:fs> set dir=/my_data
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
-bash-3.2# zoneadm -z myzone boot
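
After the zone boots, you can verify the filesystems are mounted inside it (zlogin is run from the global zone):

-bash-3.2# zlogin myzone df -h /my_apps /my_archive /my_data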

Renaming a Zone

To rename a zone:
[[oldname]] refers to the old name of the zone.
[[newname]] refers to the new name of the zone.

1. Halt the zone.

2. Ensure that no other zones on the system are being installed,
being removed, or being configured during this procedure.

3. Make a backup copy of the file: /etc/zones/index

4. Make a backup copy of the file: /etc/zones/[[oldname]].xml

5. Edit the file /etc/zones/index. Locate the line
corresponding to [[oldname]]; the zone name is the first field.

6. Change [[oldname]] to [[newname]].

7. Move the file /etc/zones/[[oldname]].xml to
/etc/zones/[[newname]].xml

8. Edit /etc/zones/[[newname]].xml. Locate the line which
begins <zone name="[[oldname]]" and change [[oldname]] to
[[newname]]. A worked example of the whole procedure follows.
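
A minimal example session, with hypothetical names oldzone and newzone:

# zoneadm -z oldzone halt
# cp /etc/zones/index /etc/zones/index.bak
# cp /etc/zones/oldzone.xml /etc/zones/oldzone.xml.bak
# vi /etc/zones/index                        (change oldzone to newzone in the first field)
# mv /etc/zones/oldzone.xml /etc/zones/newzone.xml
# vi /etc/zones/newzone.xml                  (change name="oldzone" to name="newzone")
# zoneadm -z newzone boot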

Add a ZFS file system to a Zone

If the filesystem is created in the global zone and added to the local zone via zonecfg, it may be assigned to more than one zone unless the mountpoint is set to legacy.
zfs set mountpoint=legacy pool-name/filesystem-name

To import a ZFS filesystem within a zone:
zonecfg -z zone-name

add fs
set dir=mount-point
set special=pool-name/filesystem-name
set type=zfs
end
verify
commit
exit

Administrative rights for a filesystem can be granted to a local zone:
zonecfg -z zone-name

add dataset
set name=pool-name/filesystem-name
end
commit
exit
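
Unlike an fs resource, a delegated dataset lets the zone administrator manage the filesystem from inside the zone. A sketch, assuming a zone myzone with a delegated dataset mypool/my_data (names borrowed from the earlier examples):

# zlogin myzone
myzone# zfs create mypool/my_data/db
myzone# zfs set compression=on mypool/my_data/db
myzone# zfs list -r mypool/my_data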

Moving or Cloning a Zone

The syntax for moving a zone will be:

# zoneadm -z my-zone move /newpath

where /newpath specifies the new zonepath for the zone. This will
be implemented so that it works both within and across filesystems,
subject to the existing rules for zonepath (e.g. it cannot be on an
NFS mounted filesystem). When crossing filesystem boundaries the
data will be copied and the original directory will be removed.
Internally the copy will be implemented using cpio with the proper
options to preserve all of the data (ACLs, etc.). The zone must be
halted while being moved.
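
A short example, assuming a zone named myzone and a hypothetical new zonepath of /pool2/zones/myzone:

# zoneadm -z myzone halt
# zoneadm -z myzone move /pool2/zones/myzone
# zoneadm -z myzone boot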

The syntax for cloning a zone will be:

# zoneadm -z new-zone clone [-m method] method_params

Cloning a zone is analogous to installing a zone. That is, you
first must configure the new zone using the zonecfg command. Once
you have the new zone in the configured state you can use clone to
set up the zone root instead of installing. This allows all
customizations (configuration, pkgs, etc.) from the source zone to
be directly instantiated in the new zone. The new zone will be left
in the sys-unconfigured state even though the source zone is likely
to be fully configured. The source zone must be halted while the
clone is running.

The zoneadm command will be enhanced to perform additional
verification when cloning. Appropriate warnings and errors will
be printed if the new zone and source zone are configured
inappropriately.

The -m option specifies the method used to clone the source. The
default and initial -m method will be “copy”. This will copy the
data from the source zone (specified as the method_param) zonepath
to the new-zone zonepath (implemented using cpio as with the move
sub-command).
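
For example, assuming an existing zone myzone and a new zone newzone (both names hypothetical; the exported configuration must be edited so that newzone gets its own zonepath and IP address):

# zonecfg -z myzone export -f /tmp/newzone.cfg
# vi /tmp/newzone.cfg                        (change zonepath, IP address, etc.)
# zonecfg -z newzone -f /tmp/newzone.cfg
# zoneadm -z myzone halt
# zoneadm -z newzone clone myzone
# zoneadm -z newzone boot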

Migrate A Non-Global Zone from one host to another

This ONLY WORKS (Big Gotcha) on machines of the same architecture and the same Solaris 10 release.

1. Become superuser, or assume the Primary Administrator role.

2. Halt the zone to be migrated, zone-name in this procedure.

host1# zoneadm -z zone-name halt

3. Detach the zone.

host1# zoneadm -z zone-name detach

The detached zone is now in the configured state.

4. Move the zonepath for zone-name to the new host.

See How to Move the zonepath to a new Host for more information; a sketch of one way to copy the zonepath appears after this procedure.

5. On the new host, configure the zone.

host2# zonecfg -z zone-name

You will see the following system message:

zone-name: No such zone configured
Use ‘create’ to begin configuring a new zone.

6. To create the zone zone-name on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

zonecfg:zone-name> create -a /export/zones/zone-name

7. (Optional) View the configuration.

zonecfg:zone-name> info
zonename: zone-name
zonepath: /export/zones/zone-name
autoboot: false
pool:
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address: 192.168.0.90
    physical: bge0

8. (Optional) Make any required adjustments to the configuration.

For example, the network physical device might be different on the new host, or devices that are part of the configuration might have different names on the new host.

zonecfg:zone-name> select net physical=bge0
zonecfg:zone-name:net> set physical=e1000g0
zonecfg:zone-name:net> end

9. Commit the configuration and exit.

zonecfg:zone-name> commit
zonecfg:zone-name> exit

10. Attach the zone on the new host.

* Attach the zone with a validation check.

host2# zoneadm -z zone-name attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

o Required packages and patches are not present on the new machine.
o The software levels are different between machines.

* Force the attach operation without performing the validation.

host2# zoneadm -z zone-name attach -F
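
To actually move the zonepath between hosts (step 4 above), one approach is to copy it with cpio over ssh. This is just a sketch: the hostnames and the /export/zones path are assumptions, and the -P flag (preserve ACLs) is specific to Solaris cpio.

host1# cd /export/zones
host1# find zone-name -print | cpio -oP | ssh host2 "cd /export/zones && cpio -idmP"

If the zonepath lives on its own ZFS dataset, zfs send piped to zfs receive over ssh is another option.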

Problem Rebooting a Local Zone

# zoneadm -z zonename reboot

does not work

zoneadm list -v shows the local zone as still shutting down, and it never finishes.

try
# zoneadm -z zonename halt

several times.

If that still does not work, run

# ps -ef -z zonename

and kill the remaining processes.

one-liner:

# kill -9 `ps -ef -z zonename | awk '{print $2}'`

(danger, Will Robinson, with the above command if your zone names are longer than 8 chars)
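
A possibly safer alternative, if your Solaris release's pkill supports the -z option (it matches by zone name or zone ID):

# pkill -9 -z zonename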

then try the halt or reboot commands again

If all else fails, you may have to reboot the entire system.