How to build zones that are in a different subnet/vlan than the global, and have them route correctly

AKA: vlan tagging, zones, and independent routing from the zones

================================
The IP details for this example.
================================

global dracko = 10.220.128.125 255.255.255.0 GW 10.220.128.11

zone dracko-zn1 = 10.220.44.20 255.255.255.0 GW 10.220.44.10 VLAN 44

zone dracko-zn2 = 10.220.43.20 255.255.255.0 GW 10.220.43.10 VLAN 43

====================================
1. add the netmasks to /etc/netmasks
====================================

dracko:/: cat /etc/netmasks
10.220.128.0 255.255.255.0
10.220.44.0 255.255.255.0
10.220.43.0 255.255.255.0

===================================
2. DO NOT add to /etc/defaultrouter
===================================

dracko:/: cat /etc/defaultrouter
10.220.128.11

==================
3. Plumb the VLANS
==================

The interface name formula is: adapter name + (VLAN ID * 1000 + adapter instance)

Our main NIC is e1000g0, so:

44 * 1000 + 0 = 44000, giving e1000g44000 for VLAN 44
43 * 1000 + 0 = 43000, giving e1000g43000 for VLAN 43

dracko:/: ifconfig e1000g44000 plumb up
dracko:/: ifconfig e1000g43000 plumb up
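The PPA arithmetic is easy to sanity-check in the shell before plumbing anything; a minimal sketch using this example's NIC and VLANs:

```shell
# Build the VLAN interface name: adapter + (VLAN ID * 1000 + adapter instance)
nic=e1000g
instance=0
for vlan in 44 43; do
  ppa=$(( vlan * 1000 + instance ))
  echo "${nic}${ppa}"
done
# prints e1000g44000 then e1000g43000
```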

===============
4. Now we have:
===============

dracko:/: ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
zone dracko-zn1
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849 mtu 8232 index 1
zone dracko-zn2
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
inet 10.220.128.125 netmask ffffff00 broadcast 10.220.128.255
ether 0:14:4f:7e:56:46
e1000g43000: flags=201000842 mtu 1500 index 4
inet 0.0.0.0 netmask 0
ether 0:14:4f:7e:56:46
e1000g44000: flags=201000842 mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:14:4f:7e:56:46

dracko:/: dladm show-link
e1000g0 type: non-vlan mtu: 1500 device: e1000g0
e1000g44000 type: vlan 44 mtu: 1500 device: e1000g0
e1000g43000 type: vlan 43 mtu: 1500 device: e1000g0
e1000g1 type: non-vlan mtu: 1500 device: e1000g1
e1000g2 type: non-vlan mtu: 1500 device: e1000g2
e1000g3 type: non-vlan mtu: 1500 device: e1000g3

=====================
5. Make it permanent:
=====================

touch /etc/hostname.e1000g44000
touch /etc/hostname.e1000g43000

DO NOT put anything in these files! They exist only so the interfaces are plumbed on reboot.
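Creating the empty files can be done in one loop; the sketch below writes to a temp directory instead of /etc so it is safe to run anywhere (point dir at /etc on the real box):

```shell
# Empty hostname.<interface> files cause the interfaces to be plumbed at boot
dir=$(mktemp -d)          # stand-in for /etc in this demo
for ppa in 43000 44000; do
  : > "$dir/hostname.e1000g$ppa"   # creates the file empty, same effect as touch
done
ls "$dir"
# lists hostname.e1000g43000 and hostname.e1000g44000, both zero bytes
```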

==========================
6. Modify each zone config
==========================

dracko:/: zonecfg -z dracko-zn1
zonecfg:dracko-zn1> remove net
zonecfg:dracko-zn1> add net
zonecfg:dracko-zn1:net> set physical=e1000g44000 <---- the VLAN device
zonecfg:dracko-zn1:net> set address=10.220.44.20
zonecfg:dracko-zn1:net> set defrouter=10.220.44.10 <---- set the default route here, not in the global
zonecfg:dracko-zn1:net> end
zonecfg:dracko-zn1> verify
zonecfg:dracko-zn1> exit

Note: rinse and repeat for all zones
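The rinse-and-repeat can also be scripted: zonecfg accepts a command file with -f. A sketch for the second zone, using the addresses from this example (the file path is arbitrary):

```
# /tmp/zn2-net.cfg -- apply with: zonecfg -z dracko-zn2 -f /tmp/zn2-net.cfg
remove net
add net
set physical=e1000g43000
set address=10.220.43.20
set defrouter=10.220.43.10
end
verify
commit
```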

=================
7. boot the zones
=================

dracko:/: zoneadm -z dracko-zn1 boot
dracko:/: zoneadm -z dracko-zn2 boot

=================================================
8. Now lets look at the ifconfig from the global:
=================================================

dracko:/: ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
zone dracko-zn1
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849 mtu 8232 index 1
zone dracko-zn2
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
inet 10.220.128.125 netmask ffffff00 broadcast 10.220.128.255
ether 0:14:4f:7e:56:46
e1000g43000: flags=201000842 mtu 1500 index 4
inet 0.0.0.0 netmask 0
ether 0:14:4f:7e:56:46
e1000g43000:1: flags=201000843 mtu 1500 index 4
zone dracko-zn2
inet 10.220.43.20 netmask ffffff00 broadcast 10.220.43.255
e1000g44000: flags=201000842 mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:14:4f:7e:56:46
e1000g44000:1: flags=201000843 mtu 1500 index 3
zone dracko-zn1
inet 10.220.44.20 netmask ffffff00 broadcast 10.220.44.255
dracko:/:

===============================
9: Netstat -nr from the global:
===============================

dracko:/: netstat -nr

Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 10.220.128.11 UG 1 12
default 10.220.44.10 UG 1 0 e1000g44000
default 10.220.43.10 UG 1 0 e1000g43000
10.220.128.0 10.220.128.125 U 1 2 e1000g0
224.0.0.0 10.220.128.125 U 1 0 e1000g0
127.0.0.1 127.0.0.1 UH 1 0 lo0
dracko:/:
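Note the three default routes above, one per interface. A quick awk filter pulls out the default gateways; it is shown here against a pasted sample of the table, since we cannot assume a live Solaris box:

```shell
# Extract the gateway column from every "default" row of a netstat -nr table
sample='default 10.220.128.11 UG 1 12
default 10.220.44.10 UG 1 0 e1000g44000
default 10.220.43.10 UG 1 0 e1000g43000
10.220.128.0 10.220.128.125 U 1 2 e1000g0'
echo "$sample" | awk '$1 == "default" { print $2 }'
# prints 10.220.128.11, 10.220.44.10, 10.220.43.10
```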

=================================
10: A network view from the zone:
=================================

dracko:/: zlogin -C dracko-zn1
[Connected to zone 'dracko-zn1' console]

# bash
bash-3.2#
bash-3.2# ifconfig -a
lo0:1: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g44000:1: flags=201000843 mtu 1500 index 3
inet 10.220.44.20 netmask ffffff00 broadcast 10.220.44.255
bash-3.2# netstat -nr

Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 10.220.44.10 UG 1 0 e1000g44000
10.220.44.0 10.220.44.20 U 1 1 e1000g44000:1
224.0.0.0 10.220.44.20 U 1 0 e1000g44000:1
127.0.0.1 127.0.0.1 UH 4 122 lo0:1
bash-3.2#
bash-3.2# ping 10.220.44.10
10.220.44.10 is alive <-- woohoo!

Blackhole routing

The black hole routes ensure that traffic destined
for the bogon networks will not pass the firewall, and will therefore leave
your Internet link and screening router unscathed. Further, because the
packet is simply dropped, the performance impact is quite low.

To add a black hole route, we utilize the route command. Here is an example
of a blackhole route for an RFC1918 network, 10/8.

route add 10.0.0.0 -netmask 255.0.0.0 8.8.8.1 -blackhole

Where 8.8.8.1 is the internal (intranet) IP address of our firewall. The
result of this route addition is that the firewall will silently discard
all packets destined for the RFC1918 network, 10/8. Be careful here,
however! Do not add blackhole routes for networks that you utilize
internally. I recommend the following black hole routes, which should be
added to the end of /etc/init.d/inetinit. Remember to replace 8.8.8.1
with the IP address of the internal interface of your firewall.

route add 1.0.0.0 -netmask 255.0.0.0 8.8.8.1 -blackhole
route add 2.0.0.0 -netmask 255.0.0.0 8.8.8.1 -blackhole
route add 10.0.0.0 -netmask 255.0.0.0 8.8.8.1 -blackhole
route add 172.16.0.0 -netmask 255.240.0.0 8.8.8.1 -blackhole
route add 192.168.0.0 -netmask 255.255.0.0 8.8.8.1 -blackhole
route add 192.0.2.0 -netmask 255.255.255.0 8.8.8.1 -blackhole
route add 169.254.0.0 -netmask 255.255.0.0 8.8.8.1 -blackhole
route add 240.0.0.0 -netmask 240.0.0.0 8.8.8.1 -blackhole
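The list lends itself to a loop. A sketch that echoes the commands rather than running them (drop the echo on the real firewall), with 8.8.8.1 again standing in for your firewall's internal address:

```shell
fw=8.8.8.1   # replace with the internal IP of your firewall
# Echo a blackhole route command for each bogon network/netmask pair
while read net mask; do
  echo "route add $net -netmask $mask $fw -blackhole"
done <<EOF
1.0.0.0 255.0.0.0
2.0.0.0 255.0.0.0
10.0.0.0 255.0.0.0
172.16.0.0 255.240.0.0
192.168.0.0 255.255.0.0
192.0.2.0 255.255.255.0
169.254.0.0 255.255.0.0
240.0.0.0 240.0.0.0
EOF
```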

DNS Troubleshooting Tips

Enabling DNS logging

Use logging (named.conf(4)) to cause the named process to write to a log file that you specify. Add the following to the top of the primary DNS system's /etc/named.conf file and restart the named daemon:

logging {
channel logfile {
file "/var/named/bind-log";
print-time yes;
severity debug 9;
print-category yes;
print-severity yes;
};
category default { default_syslog; logfile; };
category queries { logfile; };
};

Logging starts as soon as the logging statement in the /etc/named.conf file is parsed, so the logging statement should be the first entry in that file.

A logging channel controls the destination of the logged data. Following is a description of each of the example entries:

* /var/named/bind-log – File to hold logged data
* print-time yes – Print time of the event
* severity debug 9 – Debug output of level 9 and below to be logged
* print-category yes – Log category information
* print-severity yes – Log severity information

The category section describes how the channel information is used. Following is a description of each of the example entries:

* category default { default_syslog; logfile; } – Log to syslog and logfile
* category queries { logfile; } – Log queries

The named daemon sends messages to the syslogd daemon by using the daemon facility. Messages that are sent with level notice or higher are written to the /var/adm/messages file by default. The contents of this file often show where configuration errors were made.

—-

Before the Solaris 9 OS, the primary test tool bundled with BIND was the nslookup utility. As of the Solaris 9 OS, the domain information groper (dig) utility was also bundled with the Solaris OS. In the Solaris 10 OS, the nslookup utility is included, but is marked as obsolete with a notification that it might be removed in a future release. The dig utility is now preferred and does the following:

* Sends queries and displays replies for any of the valid resource record types
* Queries the DNS server of your choice
* Debugs almost any domain that is not protected by a firewall

Executing Forward Queries

The syntax used for forward queries is as follows:

dig @DNS_server domain_name system_name

A typical debug query testing forward resolution might look like the following:

# dig @192.168.1.2 one.edu sys11.one.edu

; <<>> DiG 9.2.4 <<>> @192.168.1.2 one.edu sys11.one.edu
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1334
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;one.edu.                       IN      A

;; AUTHORITY SECTION:
one.edu.        86400   IN      SOA     sys12.one.edu. root.sys12.one.edu. 2005010101 3600 1800 6048000 86400

;; Query time: 4 msec
;; SERVER: 192.168.1.2#53(192.168.1.2)
;; WHEN: Wed Jan 12 16:56:12 2005
;; MSG SIZE rcvd: 72

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1440
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;sys11.one.edu.                 IN      A

;; ANSWER SECTION:
sys11.one.edu.  86400   IN      A       192.168.1.1

;; AUTHORITY SECTION:
one.edu.        86400   IN      NS      sys12.one.edu.
one.edu.        86400   IN      NS      sys13.one.edu.

;; ADDITIONAL SECTION:
sys12.one.edu.  86400   IN      A       192.168.1.2
sys13.one.edu.  86400   IN      A       192.168.1.3

;; Query time: 3 msec
;; SERVER: 192.168.1.2#53(192.168.1.2)
;; WHEN: Wed Jan 12 16:56:12 2005
;; MSG SIZE rcvd: 119

The ANSWER SECTION lists the answer retrieved from the DNS server. An answer number (on the flags line) greater than zero usually indicates success.

Executing Reverse Queries

The syntax used for reverse queries is as follows:

dig @DNS_server domain_name -x IP_address

A typical debug query testing reverse resolution might look like the following:

# dig @192.168.1.2 one.edu -x 192.168.1.1

; <<>> DiG 9.2.4 <<>> @192.168.1.2 one.edu -x 192.168.1.1
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1881
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;one.edu.                       IN      A

;; AUTHORITY SECTION:
one.edu.        86400   IN      SOA     sys12.one.edu. root.sys12.one.edu. 2005010101 3600 1800 6048000 86400

;; Query time: 4 msec
;; SERVER: 192.168.1.2#53(192.168.1.2)
;; WHEN: Wed Jan 12 16:55:11 2005
;; MSG SIZE rcvd: 72

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1932
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;1.1.168.192.in-addr.arpa.      IN      PTR

;; ANSWER SECTION:
1.1.168.192.in-addr.arpa. 86400 IN      PTR     sys11.one.edu.

;; AUTHORITY SECTION:
1.168.192.in-addr.arpa. 86400   IN      NS      sys13.one.edu.
1.168.192.in-addr.arpa. 86400   IN      NS      sys12.one.edu.

;; ADDITIONAL SECTION:
sys12.one.edu.  86400   IN      A       192.168.1.2
sys13.one.edu.  86400   IN      A       192.168.1.3

;; Query time: 3 msec
;; SERVER: 192.168.1.2#53(192.168.1.2)
;; WHEN: Wed Jan 12 16:55:11 2005
;; MSG SIZE rcvd: 141

----

The remote name daemon controller command, rndc, is used to dump the currently cached contents of the server.

sys12# rndc dumpdb

All of the options for the rndc utility are listed when it is invoked without any arguments, as follows:

# rndc
Usage: rndc [-c config] [-s server] [-p port] [-k key-file ] [-y key] [-V] command

command is one of the following:

  reload        Reload configuration file and zones.
  reload zone [class [view]]
                Reload a single zone.
  refresh zone [class [view]]
                Schedule immediate maintenance for a zone.
  reconfig      Reload configuration file and new zones only.
  stats         Write server statistics to the statistics file.
  querylog      Toggle query logging.
  dumpdb        Dump cache(s) to the dump file (named_dump.db).
  stop          Save pending updates to master files and stop the server.
  halt          Stop the server without saving pending updates.
  trace         Increment debugging level by one.
  trace level   Change the debugging level.
  notrace       Set debugging level to 0.
  flush         Flushes all of the server's caches.
  flush [view]  Flushes the server's cache for a view.
  status        Display status of the server.
  *restart      Restart the server.

* == not yet implemented
Version: 9.2.4

Clearing the Cache

Clear the server's cached data by restarting the named daemon. For example:

sys12# svcs -a | grep dns
online 5:09:02 svc:/network/dns/client:default
online 5:09:25 svc:/network/dns/server:default
sys12# svcadm disable svc:/network/dns/server:default
sys12# svcs -a | grep dns
disabled 6:54:30 svc:/network/dns/server:default
online 5:09:02 svc:/network/dns/client:default
sys12# svcadm enable svc:/network/dns/server:default
sys12# svcs -a | grep dns
online 5:09:02 svc:/network/dns/client:default
online 6:54:45 svc:/network/dns/server:default

Verify that the cache has been cleared using the rndc command:

sys12# rndc dumpdb
sys12# cat /var/named/named_dump.db
;
; Cache dump of view '_default'
;
$DATE 20050112135516

You can use the rndc utility with the reconfig command to cause the named process to reload its configuration file and implement any changes to the zone files as follows:

sys12# rndc reconfig

-----

S10

Administrators use the remote name daemon control program (rndc) to control the operation of a name server. Name servers have always been controlled by administrators sending signals, such as SIGHUP and SIGINT. The rndc utility provides a finer granularity of control, and it can be used both interactively and non-interactively. As of the Solaris 10 OS, the rndc utility replaces the ndc utility as the name daemon control application. A significant difference between ndc in BIND 8 and rndc in BIND 9 is that rndc uses its own configuration file, rndc.conf.

Securing Control Sessions

The rndc utility supports security using key-based authentication. Remote clients are authorized specifically to control the daemon by establishing, configuring, and using secret keys. Implementing this security requires an rndc-key reference entry in the /etc/named.conf file and the appropriate key information in the rndc.conf file. Without an rndc-key reference in the /etc/named.conf file, the following messages appear in the /var/adm/messages file:

Jan 12 08:22:12 sys12 named[1346]: [ID 873579 daemon.notice] command channel listening on 127.0.0.1#953
Jan 12 08:22:12 sys12 named[1346]: [ID 873579 daemon.notice] couldn't add command channel ::1#953: address not available

You can continue to use the rndc utility, albeit in a non-secure manner.

Use the rndc-confgen utility to generate the proper contents for the rndc.conf and /etc/named.conf files. The rndc.conf file specifies which server controls and algorithm the server should use. You only need an rndc.conf file in place if the named.conf file has an entry for an rndc-key.

sys12# /usr/sbin/rndc-confgen
# Start of rndc.conf
key "rndc-key" {
        algorithm hmac-md5;
        secret "jZOP5nh//i9t7BwHivvNzA==";
};

options {
        default-key "rndc-key";
        default-server 127.0.0.1;
        default-port 953;
};
# End of rndc.conf

# Use with the following in named.conf, adjusting the allow list as needed:
# key "rndc-key" {
#       algorithm hmac-md5;
#       secret "jZOP5nh//i9t7BwHivvNzA==";
# };
#
# controls {
#       inet 127.0.0.1 port 953
#               allow { 127.0.0.1; } keys { "rndc-key"; };
# };
# End of named.conf
sys12#

Copy the rndc-key section into a new file called /etc/rndc.conf.

sys12# cat /etc/rndc.conf
key "rndc-key" {
        algorithm hmac-md5;
        secret "jZOP5nh//i9t7BwHivvNzA==";
};

options {
        default-key "rndc-key";
        default-server 127.0.0.1;
        default-port 953;
};

Add the named.conf section to the /etc/named.conf file. Be sure to remove the comment identifiers (#). The following is an example of a finished /etc/named.conf file:

sys12# cat /etc/named.conf
options {
        directory "/var/named";
};

// added to stop couldn't add command channel ::1#953 messages
// from showing up in /var/adm/messages
// following is output from /usr/sbin/rndc-confgen
key "rndc-key" {
        algorithm hmac-md5;
        secret "jZOP5nh//i9t7BwHivvNzA==";
};

controls {
        inet 127.0.0.1 port 953
                allow { 127.0.0.1; } keys { "rndc-key"; };
};
// end of rndc.key addition
...

Testing

Test the rndc key by stopping and starting the named process, using the rndc utility, and examining the resulting /var/adm/messages file entries:

sys12# svcadm disable svc:/network/dns/server:default
sys12# svcadm enable svc:/network/dns/server:default
sys12# tail -4 /var/adm/messages
Jan 12 08:58:48 sys12 named[1402]: [ID 873579 daemon.notice] starting BIND 9.2.4
Jan 12 08:58:48 sys12 named[1402]: [ID 873579 daemon.notice] command channel listening on 127.0.0.1#953
Jan 12 08:58:48 sys12 named[1402]: [ID 873579 daemon.notice] running

The daemon starting without the "couldn't add command channel" message implies a successful key configuration. The rndc command can now be used securely.

You will see an error message similar to the following if there is a problem with the contents of the rndc.conf file:

sys12# rndc dumpdb
Jan 12 10:13:40 sys12 named[1431]: invalid command from 127.0.0.1#32839: bad auth
rndc: connection to remote host closed
This may indicate that the remote server is using an older version of the command protocol, this host is not authorized to connect, or the key is invalid.
sys12#

Server Status

The rndc utility can be used to query server status and report statistics. Now test to verify that the rndc utility works as expected:

sys12# rndc status
number of zones: 5
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is ON
server is up and running

Flushing the Memory Cache

You can use the rndc utility to flush the memory cache.

sys12# rndc flush
sys12# rndc dumpdb
sys12# cat /var/named/named_dump.db
;
; Cache dump of view '_default'
;
$DATE 20050113141237
sys12#

Changing the Debug Level of the Daemon

Use the rndc utility to change the debug level of the server. Before making any changes, determine the current debug level of the daemon.

sys12# rndc status
number of zones: 5
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is ON
server is up and running

Increment the debug level by one.

sys12# rndc trace
sys12# rndc status
number of zones: 5
debug level: 1
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is ON
server is up and running

Assign the debug level to a specific level.

sys12# rndc trace 8
sys12# rndc status
number of zones: 5
debug level: 8
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is ON
server is up and running
sys12#

If logging is enabled, the debug level is shown along with the logged messages:

sys12# tail -f /var/named/bind-log
Jan 13 07:12:37.548 general: debug 1: received control channel command 'dumpdb'
Jan 13 07:17:02.598 general: debug 1: received control channel command 'status'
Jan 13 07:17:15.249 general: debug 1: received control channel command 'trace'
Jan 13 07:17:17.929 general: debug 1: received control channel command 'status'
Jan 13 07:17:34.838 general: debug 1: received control channel command 'trace 8'
Jan 13 07:17:37.149 general: debug 1: received control channel command 'status'
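When scripting around rndc, the status output parses cleanly; a small awk sketch, run here against a captured sample since rndc itself only exists on the name server:

```shell
# Pull the current debug level out of rndc status output
status='number of zones: 5
debug level: 8
xfers running: 0
query logging is ON
server is up and running'
echo "$status" | awk -F': ' '/^debug level/ { print $2 }'
# prints 8
```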

Plumb an interface for a non-Global zone on boot

If you need an interface configured for a non-global zone to remain plumbed across a system boot:

In the global zone add an empty /etc/hostname.bgex file, where bgex is the interface for the non-global zone. This will plumb the interface when the global zone boots, but not configure it. Then use the standard zonecfg commands to configure the interface when the local zone boots. See Dracko #195 for some help on that.

Force the ethernet adapter’s operational parameters in Solaris

To force nge0 to 100 Mbit/s Full Duplex we need to issue all of the commands below:

# ndd -set /dev/nge0 adv_1000fdx_cap=0

# ndd -set /dev/nge0 adv_1000hdx_cap=0

# ndd -set /dev/nge0 adv_100fdx_cap=1

# ndd -set /dev/nge0 adv_100hdx_cap=0

# ndd -set /dev/nge0 adv_10fdx_cap=0

# ndd -set /dev/nge0 adv_10hdx_cap=0

# ndd -set /dev/nge0 adv_autoneg_cap=0

Note: Any changes made at the command line in this manner will be lost if the system is rebooted, so this method is only suitable for temporary changes. To make them persistent, add the commands to a shell script in /etc/rc2.d.
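The same settings can be driven from a loop in an rc script. The sketch below only echoes the commands so it can be previewed anywhere; in a real /etc/rc2.d script you would remove the echo (the script name itself is your choice, e.g. an S99* file):

```shell
DEV=/dev/nge0
# One ndd -set per parameter; echo lets us preview without a Solaris box
for setting in adv_1000fdx_cap=0 adv_1000hdx_cap=0 adv_100fdx_cap=1 \
               adv_100hdx_cap=0 adv_10fdx_cap=0 adv_10hdx_cap=0 \
               adv_autoneg_cap=0; do
  echo ndd -set "$DEV" "$setting"
done
```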

How do I show my gateway on Solaris

Sorry, it's not in ifconfig

bash-3.00$ netstat -rnv

IRE Table: IPv4
Destination Mask Gateway Device Mxfrg Rtt Ref Flg Out In/Fwd
-------------------- --------------- -------------------- ------ ----- ----- --- --- ----- ------
192.85.120.128 255.255.255.192 192.85.120.146 ce0 1500* 0 1 U 57656 0
224.0.0.0 240.0.0.0 192.85.120.146 ce0 1500* 0 1 U 0 0
default 0.0.0.0 192.85.120.129 1500* 0 1 UG 443349 0
127.0.0.1 255.255.255.255 127.0.0.1 lo0 8232* 67 245 UH 8572251 0
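To pull out just the gateway, filter the default row. With -rnv the gateway is the third column (with plain netstat -rn it would be the second); shown here against a pasted sample line:

```shell
# Gateway is field 3 in netstat -rnv output
sample='default 0.0.0.0 192.85.120.129 1500* 0 1 UG 443349 0'
echo "$sample" | awk '$1 == "default" { print $3 }'
# prints 192.85.120.129
```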

Solaris nic card numbering

When you install Solaris on a server that has onboard NICs and NICs in the PCI slots, OBP scans the PCI bus first. If the device types are the same (like ce), the PCI card will be assigned ce0 and the onboard NICs will get ce2 or ce3.

To prevent this on a system that has onboard NICs and PCI NICs, pull the PCI NICs before the install, or use the .asr commands in OBP to offline the board; then, after the install, do a boot -r.