
Practicing Solaris Cluster using VirtualBox

Example configuration to run a training and development cluster environment on a single system

Combining technologies to work

Thorsten Frueauf, 10/16/2009

This white paper describes how to configure a training and development environment for Solaris 10 and Solaris Cluster 3.2 on a physical system running OpenSolaris, using technologies like VirtualBox, software quorum, Solaris Container Clusters (Zone Clusters), Crossbow, IPsec and COMSTAR iSCSI.

Table of Contents
1     Introduction
2     Host Configuration
2.1   BIOS Configuration
2.2   OpenSolaris Configuration
2.2.1 Network Configuration
2.2.2 Filesystem Configuration
2.2.3 COMSTAR / iSCSI Target Configuration
2.3   Install VirtualBox
2.4   Install rdesktop
2.5   Download Solaris 10 05/09 (Update 7) ISO image
2.6   Download Solaris Cluster 3.2 01/09 archive
3     VirtualBox Configuration
3.1   VirtualBox Guest Configuration
3.1.1 Virtual Disk Configuration
3.1.2 Virtual Machine Configuration
3.2   VirtualBox Guest Solaris Configuration
3.2.1 First Guest Installation (S10-U7-SC-32U2-1)
3.2.2 Second Guest Installation (S10-U7-SC-32U2-2)
3.3   Getting Crash dumps from Solaris guests
3.3.1 Booting Solaris with kernel debugger enabled
3.3.2 How to break into the kernel debugger
3.3.3 Forcing a crash dump
3.3.4 Crash dump analysis with Solaris CAT
4     Solaris Cluster Configuration
4.1   Solaris Cluster Installation
4.1.1 First node cluster installation (s10-sc32-1)
4.1.2 First node cluster configuration (s10-sc32-1)
4.1.3 Second node cluster installation (s10-sc32-2)
4.1.4 Second node cluster configuration (s10-sc32-2)
4.2   iSCSI Initiator Configuration
4.3   ZFS zpool Configuration for Data
4.4   Software Quorum Configuration
4.5   IPsec Configuration for the cluster interconnect
4.6   Zone Cluster Configuration
4.6.1 First Zone Cluster Configuration (zc1)
4.6.2 Second Zone Cluster Configuration (zc2)
4.7   Resource Group and HA ZFS Configuration (zc1)
4.8   HA MySQL Configuration (zc1)
4.9   HA Tomcat Configuration (zc1)
4.10  Scalable Apache Configuration (zc2)
A     References

1 Introduction
For developers it is often convenient to have all the tools necessary for their work in one place, ideally on a laptop for maximum mobility. For system administrators it is often essential to have a test system on which to try things out and learn about new features. Of course the system needs to be low cost and transportable to wherever it is needed. HA clusters, however, are often perceived as complex to set up and resource hungry in terms of hardware requirements.

This white paper explains how to set up a single x86 based system (such as a laptop) with OpenSolaris as a training and development environment for Solaris 10 / Solaris Cluster 3.2, using VirtualBox to run a two node cluster. The configuration can then be used to practice various technologies. OpenSolaris technologies like Crossbow (to create virtual network adapters), COMSTAR (to export iSCSI targets from the host, which the Solaris Cluster nodes use as iSCSI initiators for shared storage and the quorum device), ZFS (to export a ZFS volume as an iSCSI target and as a failover file system within the cluster) and IPsec (to secure the cluster private interconnect traffic) are used on the host system and within the VirtualBox guests to configure Solaris 10 / Solaris Cluster 3.2. Solaris Cluster technologies like software quorum and zone clusters are used to set up HA MySQL and HA Tomcat as failover services running in one virtual cluster. A second virtual cluster is used to show how to set up Apache as a scalable service.

The instructions can be used as a step-by-step guide on any x86 based system that is capable of running OpenSolaris. To find out whether your system works, simply boot the OpenSolaris live CD and confirm with the Device Driver Utility (DDU) that all required components are supported. The hardware compatibility list can be found at http://www.sun.com/bigadmin/hcl/.

2 Host Configuration
The example host system used throughout this white paper is a Toshiba Tecra M10 laptop with the following hardware specifications:

- 4 GB main memory
- Intel Core 2 Duo P8400 @ 2.26 GHz
- 160 GB SATA hard disk
- 1 physical network NIC (1000 Mbit), e1000g0
- 1 wireless network NIC (54 Mbit), iwh0

The system should have at least 3 GB of main memory in order to host the two VirtualBox Solaris guest systems.

2.1 BIOS Configuration


The Toshiba Tecra M10 has been updated to BIOS version 2.0. By default, the option to use the CPU virtualization capabilities is disabled. This option needs to be enabled in order to run 64-bit guests with VirtualBox:

BIOS screen SYSTEM SETUP (1/3) -> OTHERS -> set Virtualization Technology to Enabled.

2.2 OpenSolaris Configuration


In this example OpenSolaris 2009.06 build 111 has been installed on the laptop. For generic information on how to install OpenSolaris 2009.06, see the official guide at http://dlc.sun.com/osol/docs/content/2009.06/getstart/index.html. The following configuration choices will be used as an example:

Hostname: vorlon
User: scdemo

2.2.1 Network Configuration


By default OpenSolaris enables the Network Auto-Magic (NWAM) service. Since NWAM is currently designed to use only one active NIC at a time (and actively unconfigures all other existing NICs), the following steps are required to disable NWAM and set up a static networking configuration. The diagram shows an overview of the target network setup:

[Diagram: target network setup. The host vorlon (OpenSolaris 2009.06) owns the physical NIC e1000g0 and provides NAT for the internal virtual network via vnic11. Etherstub1 acts as the public network switch and connects vnic11 (host), vnic12 (guest s10-sc32-1, e1000g0) and vnic13 (guest s10-sc32-2, e1000g0). Etherstub2 acts as the private interconnect switch and connects vnic21 (guest s10-sc32-1, e1000g1) and vnic22 (guest s10-sc32-2, e1000g1); clprivnet0 runs on top of e1000g1 within each guest. Both guests run Solaris 10 05/09 (Update 7).]

The following IP addresses will be used:

IP Address     Hostname        Alias               Comment
10.0.2.100     vorlon-int      vnic11
10.0.2.121     s10-sc32-1      e1000g0 / vnic12
10.0.2.122     s10-sc32-2      e1000g0 / vnic13
10.0.2.130     s10-sc32-lh1
10.0.2.131     s10-sc32-lh2
10.0.2.140     zc1-z1
10.0.2.141     zc1-z2
10.0.2.142     zc2-z1
10.0.2.143     zc2-z2

Disable the NWAM service:

vorlon# svcadm disable nwam

Create the virtual network:

vorlon# dladm create-etherstub etherstub1
vorlon# dladm create-vnic -l etherstub1 vnic11
vorlon# dladm create-vnic -l etherstub1 vnic12
vorlon# dladm create-vnic -l etherstub1 vnic13
vorlon# dladm create-etherstub etherstub2
vorlon# dladm create-vnic -l etherstub2 vnic21
vorlon# dladm create-vnic -l etherstub2 vnic22
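If you want to verify that the etherstubs and vnics have been created as intended (an optional check), they can be listed with:

vorlon# dladm show-etherstub
vorlon# dladm show-vnic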

Add the IP addresses and aliases to /etc/inet/hosts:

vorlon# vi /etc/inet/hosts
::1        vorlon vorlon.local localhost loghost
127.0.0.1  vorlon vorlon.local localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

Add the default netmasks for the used subnets to /etc/inet/netmasks:

vorlon# vi /etc/inet/netmasks
10.0.1.0 255.255.255.0
10.0.2.0 255.255.255.0

Configure the internal host IP address used to access the VirtualBox guest network:

vorlon# vi /etc/hostname.vnic11
vorlon-int

Always plumb the vnics used by the VirtualBox guests when booting:

vorlon# touch /etc/hostname.vnic12 /etc/hostname.vnic13 /etc/hostname.vnic21 /etc/hostname.vnic22

If you want the VirtualBox guests to be able to reach the external network connected to either e1000g0 or iwh0, set up ipfilter to perform Network Address Translation (NAT) for the internal virtual network:
vorlon# vi /etc/ipf/ipf.conf
pass in all
pass out all

vorlon# vi /etc/ipf/ipnat.conf
map e1000g0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map e1000g0 10.0.2.0/24 -> 0/32
map iwh0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map iwh0 10.0.2.0/24 -> 0/32

If you want to make e.g. the Tomcat URL configured later in section 4.9 accessible from outside of the host's external network, add the following line to /etc/ipf/ipnat.conf:

rdr e1000g0 0.0.0.0/0 port 8080 -> 10.0.2.130 port 8080 tcp

Configure the public network on e1000g0 depending on your individual setup. The following example assumes a static IP configuration:

vorlon# vi /etc/hostname.e1000g0
10.0.1.42

vorlon# vi /etc/defaultrouter
10.0.1.1

vorlon# vi /etc/resolv.conf
nameserver 10.0.1.1

vorlon# vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

Enable the static networking configuration:

vorlon# svcadm enable svc:/network/physical:default

Enable the service for ipfilter:

vorlon# svcadm enable svc:/network/ipfilter:default

Enable IPv4 forwarding:

vorlon# routeadm -u -e ipv4-forwarding
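After the ipfilter service has been enabled, the active NAT and filter rules can be verified (an optional check) with:

vorlon# ipnat -l
vorlon# ipfstat -io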

2.2.2 Filesystem Configuration


Create some additional file systems for:

- crash dumps created for the host system (/var/crash)
- downloads of various files (/data)
- VirtualBox images (/VirtualBox-Images)

vorlon# zfs create -o mountpoint=/var/crash -o compression=on rpool/crash
vorlon# mkdir /var/crash/vorlon
vorlon# zfs create -o mountpoint=/data rpool/data
vorlon# zfs create -o mountpoint=/VirtualBox-Images rpool/vbox-images
vorlon# chown scdemo:staff /data /VirtualBox-Images

2.2.3 COMSTAR / iSCSI Target Configuration


If you want to practice with Solaris Cluster 3.2, it is necessary to provide some shared storage to the cluster nodes running as VirtualBox guests. Shared storage will be used for:

- an HA ZFS failover zpool for application data
- a quorum device using the software quorum feature

The easiest way to achieve shared storage between VirtualBox guests is to configure one or more iSCSI targets on the host system, and to configure the Solaris instances running inside the VirtualBox guests as iSCSI initiators. Section 3.1.1 provides a diagram of the storage configuration used in this example.

First install the required packages for COMSTAR / iSCSI:

vorlon# pkg install SUNWiscsi SUNWiscsit SUNWstmf
vorlon# init 6

Configure a ZFS volume, which will then get exported as an iSCSI target. Note that this example just uses a volume of 2 GB size; feel free to increase it based on your needs and available disk space:

vorlon# zfs create -V 2gb rpool/iscsi-t1
vorlon# svcadm disable svc:/network/iscsi_initiator:default
vorlon# svcadm enable stmf
vorlon# svcadm enable target
vorlon# itadm create-target
Target iqn.1986-03.com.sun:02:51720f58-cf97-eca4-c86e-9591ed87861c successfully created
vorlon# sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi-t1
Created the following LU:

              GUID                    DATA SIZE        SOURCE
--------------------------------   -------------      ----------------
600144f0000827bf93574ac359b20001    2147418112         /dev/zvol/rdsk/rpool/iscsi-t1

vorlon# stmfadm add-view 600144f0000827bf93574ac359b20001

In a similar way, additional iSCSI targets can be configured if required.
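The configured targets and logical units can be reviewed at any time (an optional check) with:

vorlon# itadm list-target -v
vorlon# stmfadm list-lu -v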

2.3 Install VirtualBox


Download VirtualBox from http://www.virtualbox.org/wiki/Downloads and select the archive for Solaris and OpenSolaris hosts on x86/amd64. Consult the VirtualBox User Guide for the complete installation instructions. In this white paper version 3.0.8 has been used.

vorlon# pkgadd -G -d VirtualBoxKern-3.0.8-SunOS-r53138.pkg
vorlon# pkgadd -G -d VirtualBox-3.0.8-SunOS-r53138.pkg

2.4 Install rdesktop


VirtualBox offers to start a guest using the VRDP protocol in order to access the guest console. rdesktop is a VRDP client that allows you to access the VRDP server which VirtualBox starts for the guest.

vorlon# pkg install SUNWrdesktop

2.5 Download Solaris 10 05/09 (Update 7) ISO image


You can download the ISO image from https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=Sol10-U7-SP-x86-FULL-DVD-GF@CDS-CDS_SMI. The following example assumes it to be available as /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso.

Note that until CR 6888193 is fixed, do not try the specific configuration described in this white paper with Solaris 10 Update 8 or newer, since it will not work.

2.6 Download Solaris Cluster 3.2 01/09 archive


You can download the zip archive from http://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/VerifyItem-Start/suncluster_3_2u2-ga-solaris-x86.zip. The following example assumes it to be available as /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip.

For HA MySQL you will need patch 126033-07 or newer; it contains necessary changes to run that agent in zone clusters. If you have a SunSolve account, download it from http://sunsolve.sun.com/pdownload.do?target=126033-09&method=h and make it available as /data/SolarisCluster/126033-09.zip.

For HA Tomcat you will need patch 126072-02 or newer; it contains necessary changes to run that agent in zone clusters. If you have a SunSolve account, download it from http://sunsolve.sun.com/pdownload.do?target=126072-02&method=h and make it available as /data/SolarisCluster/126072-02.zip.

3 VirtualBox Configuration
3.1 VirtualBox Guest Configuration
The following diagram describes the desired disk configuration:

[Diagram: desired disk configuration. Each guest boots from its own dynamically expanding VDI image on the host (S10-U7-SC-32U2-1.vdi for s10-sc32-1, S10-U7-SC-32U2-2.vdi for s10-sc32-2), which appears inside the guest as c0d0 and holds the guest rpool. The host vorlon exports the ZFS volume rpool/iscsi-t1 as an iSCSI target; both guests access it as iSCSI initiators, where it appears as c3t2d0 and maps to DID device d1. This shared device hosts the zpool "services" and serves as the quorum device (d1 = quorum device).]


3.1.1 Virtual Disk Configuration


Create the boot disks for the two guests, size 30 GB (= 30720 MB, dynamically expanding image):

- s10-sc32-1 will use S10-U7-SC-32U2-1.vdi
- s10-sc32-2 will use S10-U7-SC-32U2-2.vdi

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createhd --filename /VirtualBox-Images/S10-U7-SC-32U2-1.vdi --size 30720 --format VDI --variant Standard --remember
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 641be421-a838-4ac2-9ace-083aa1775f99

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createhd --filename /VirtualBox-Images/S10-U7-SC-32U2-2.vdi --size 30720 --format VDI --variant Standard --remember
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 34a938d9-9e65-4253-887a-2948d126deef
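If you want to confirm that both disk images have been created and registered (an optional check), they can be listed with:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage list hdds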

3.1.2 Virtual Machine Configuration


Determine the MAC addresses used by the vnics configured in section 2.2.1:

scdemo@vorlon$ dladm show-vnic
LINK     OVER         SPEED  MACADDRESS        MACADDRTYPE  VID
vnic11   etherstub1   0      2:8:20:fa:bf:c    random       0
vnic12   etherstub1   0      2:8:20:d5:47:9d   random       0
vnic13   etherstub1   0      2:8:20:e2:99:94   random       0
vnic21   etherstub2   0      2:8:20:3a:34:a3   random       0
vnic22   etherstub2   0      2:8:20:d3:bf:1a   random       0

The following shows which vnic is used by which VirtualBox guest:

VirtualBox Guest Name   VNIC used   MAC address
S10-U7-SC-32U2-1        vnic12      020820D5479D
                        vnic21      0208203A34A3
S10-U7-SC-32U2-2        vnic13      020820E29994
                        vnic22      020820D3BF1A

It is critical that the MAC address configured for the VirtualBox guest exactly matches the MAC address configured for the corresponding vnic, otherwise network communication will not work.

Configure the virtual machines:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-1 --ostype Solaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-1' is created and registered.
UUID: 44b912d0-5e3d-4063-9db4-47b3f5575701
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-1/S10-U7-SC-32U2-1.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 --memory 1280 --hda /VirtualBox-Images/S10-U7-SC-32U2-1.vdi --boot1 disk --boot2 dvd --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso --nic1 bridged --nictype1 82540EM --cableconnected1 on --bridgeadapter1 vnic12 --macaddress1 020820D5479D --nic2 bridged --nictype2 82540EM --cableconnected2 on --bridgeadapter2 vnic21 --macaddress2 0208203A34A3 --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3390
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-2 --ostype OpenSolaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-2' is created and registered.
UUID: ce23d951-832b-4d50-9707-495c7ce0d30b
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-2/S10-U7-SC-32U2-2.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 --memory 1280 --hda /VirtualBox-Images/S10-U7-SC-32U2-2.vdi --boot1 disk --boot2 dvd --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86-dvd.iso --nic1 bridged --nictype1 82540EM --cableconnected1 on --bridgeadapter1 vnic13 --macaddress1 020820E29994 --nic2 bridged --nictype2 82540EM --cableconnected2 on --bridgeadapter2 vnic22 --macaddress2 020820D3BF1A --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3391
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
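In case you want to review the resulting virtual machine settings before the first boot (an optional check), they can be displayed with:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage showvminfo S10-U7-SC-32U2-1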
3.2 VirtualBox Guest Solaris Configuration


Both VirtualBox guest systems need to be installed with Solaris 10 05/09 (Update 7). For generic information on how to install Solaris 10 05/09 (Update 7), see the official guides at http://docs.sun.com/app/docs/coll/1236.10?l=en. In section 3.1.2 the corresponding ISO image has been configured for the guests.

3.2.1 First Guest Installation (S10-U7-SC-32U2-1)


Start the virtual machine while on a desktop session on the host:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-1

This will start the console for S10-U7-SC-32U2-1 within the VirtualBox GUI. Perform the following steps (rough guidance on non-default selections):

Select Installer: -> 3
Keyboard Layout: US-English
Language: English
Networked: Yes
Network Interface: e1000g0
Use DHCP: No
Hostname: s10-sc32-1
IP Address: 10.0.2.121
Part of Subnet: Yes
Netmask: 255.255.255.0
Enable IPv6: No
Configure Kerberos: No
Nameservice: None
NFSv4 Domain Config: Use the NFSv4 domain derived by the system
Timezone: <correct timezone>
Time: <correct time>
Root Password: <password>
Remote services enabled: No
Standard Installation
Geographic Region: North America (or the region of your choice)
Default locale: en_US_ISO8859-15 (or the locale of your choice)
Additional Products: None
Filesystem: ZFS
Solaris software to install: Entire Distribution
Disk device: c0d0
Select for swap: 1024, rest leave default values

The next step is to configure the static networking for s10-sc32-1. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-1 # vi /etc/inet/hosts
::1        localhost loghost
127.0.0.1  localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1 s10-sc32-1.local
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-1 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-1 # vi /etc/hostname.e1000g0
s10-sc32-1

s10-sc32-1 # vi /etc/defaultrouter
vorlon-int

In case you have the host system connected to external networking, configure a nameservice such as DNS:

s10-sc32-1 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-1 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

In case you want the guest system to not run the graphical login, in order to conserve some main memory, log out from the GNOME session and log in through the text console as user root:

s10-sc32-1 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-1 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setup from no to yes:
PermitRootLogin yes

s10-sc32-1 # svcadm restart ssh

Since the host system is running two VirtualBox guests at the same time, it is possible that under load the guest Solaris 10 system will send a lot of the following messages to syslog:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms
This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-1 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-1 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                   /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
+*.err;kern.warning;auth.notice                  /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit       /var/adm/messages
 *.alert;kern.err;daemon.err                     operator
 *.alert                                         root

Note that this will cause daemon.notice messages to no longer be sent to the console or /var/adm/messages.

Shut down the guest:

s10-sc32-1 # init 5

Remove the Solaris 10 ISO image from the virtual DVD drive:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

3.2.2 Second Guest Installation (S10-U7-SC-32U2-2)


Start the virtual machine while on a desktop session on the host:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-2

This will start the console for S10-U7-SC-32U2-2 within the VirtualBox GUI. Perform the following steps (rough guidance on non-default selections):

Select Installer: -> 3
Keyboard Layout: US-English
Language: English
Networked: Yes
Network Interface: e1000g0
Use DHCP: No
Hostname: s10-sc32-2
IP Address: 10.0.2.122
Part of Subnet: Yes
Netmask: 255.255.255.0
Enable IPv6: No
Configure Kerberos: No
Nameservice: None
NFSv4 Domain Config: Use the NFSv4 domain derived by the system
Timezone: <correct timezone>
Time: <correct time>
Root Password: <password>
Remote services enabled: No
Standard Installation
Geographic Region: North America (or the region of your choice)
Default locale: en_US_ISO8859-15 (or the locale of your choice)
Additional Products: None
Filesystem: ZFS
Solaris software to install: Entire Distribution
Disk device: c0d0
Select for swap: 1024, rest leave default values

The next step is to configure the static networking for s10-sc32-2. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-2 # vi /etc/inet/hosts
::1        localhost loghost
127.0.0.1  localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2 s10-sc32-2.local
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-2 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-2 # vi /etc/hostname.e1000g0
s10-sc32-2

s10-sc32-2 # vi /etc/defaultrouter
vorlon-int

In case you have the host system connected to external networking, configure a nameservice such as DNS:

s10-sc32-2 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-2 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

In case you want the guest system to not run the graphical login, in order to conserve some main memory, log out from the GNOME session and log in through the text console as user root:

s10-sc32-2 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-2 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setup from no to yes:
PermitRootLogin yes

s10-sc32-2 # svcadm restart ssh

Since the host system is running two VirtualBox guests at the same time, it is possible that under load the guest Solaris 10 system will send a lot of the following messages to syslog:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms

This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-2 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-2 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                   /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
+*.err;kern.warning;auth.notice                  /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit       /var/adm/messages
 *.alert;kern.err;daemon.err                     operator
 *.alert                                         root

Note that this will cause daemon.notice messages to no longer be sent to the console or /var/adm/messages.

Shut down the guest:

s10-sc32-2 # init 5

Remove the Solaris 10 ISO image from the virtual DVD drive:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

3.3 Getting Crash dumps from Solaris guests


Sometimes it is necessary for debugging purposes to create a crash dump of a Solaris guest, either because it is hung and there is no other way to interact with it, or because a specific state of the system is of interest for further analysis.

3.3.1 Booting Solaris with kernel debugger enabled


The first step is to boot the Solaris guest with the kernel debugger enabled. The following procedure can be used for a one-time kernel debugger boot:

o when the grub menu comes up, hit 'e'
o go to the kernel$ line and hit 'e' to edit it
o hit backspace/delete to remove ",console=graphics"
o add -k to the line
o the line should now look like: kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -k
o hit return to enter the changes and go back
o hit 'b' to boot

If you want to always boot with the kernel debugger enabled, the above change needs to be made to the corresponding entry in the /rpool/boot/grub/menu.lst file. For example, add the following:

# vi /rpool/boot/grub/menu.lst
title Solaris 10 5/09 s10x_u7wos_08 X86 debug
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -k
module /platform/i86pc/boot_archive

3.3.2 How to break into the kernel debugger


On a physical x86 system, the default key combination to break into the kernel debugger is F1-A. This does not work when Solaris is running as a VirtualBox guest. You can either change the default abort sequence using the kbd(1) command, or use the following in order to send F1-A to a VirtualBox guest:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage controlvm <solarisVMname> keyboardputscancode 3b 1e 9e bb
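The scancode sequence 3b 1e 9e bb corresponds to pressing F1 (3b), pressing 'a' (1e), releasing 'a' (9e) and releasing F1 (bb), i.e. it simulates typing F1-A on the guest keyboard.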

3.3.3 Forcing a crash dump


Once you have entered the kernel debugger prompt, the following will cause a crash dump to be written to the dump device:

> $<systemdump

See dumpadm(1M) for details on how to configure a dump device and savecore directory. After the system has rebooted, either the svc:/system/dumpadm:default service will automatically save the crash dump into the configured savecore directory, or you need to manually run savecore(1M) if the dumpadm service is disabled.

If you want to save a crash dump of the live running Solaris system without breaking into the kernel debugger or requiring a reboot, run within that system:

# savecore -L

If you want to force a crash dump before rebooting the system, run within that system:

# reboot -d
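The currently configured dump device and savecore directory can be displayed at any time (an optional check) by running dumpadm(1M) without arguments:

# dumpadm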

3.3.4 Crash dump analysis with Solaris CAT


While it is possible to perform analysis of crash dumps using mdb(1), the Solaris Crash Analysis Tool (CAT) comes with additional commands and macros, which are useful to get a quick overview of the crash cause. Solaris CAT is available through http://blogs.sun.com/solariscat/, which contains the download link to the most current version. After installation of the corresponding SUNWscat package you can read the documentation at file:///opt/SUNWscat/docs/index.html.
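For a first quick look at a saved crash dump, mdb(1) can also be used directly; this example assumes the dump was saved as unix.0 and vmcore.0 in the savecore directory of the node that crashed:

# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status
> $q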

4 Solaris Cluster Configuration


The following diagram shows the desired Solaris Cluster configuration:

[Diagram: desired Solaris Cluster configuration. A two-node physical cluster (s10-sc32-1, s10-sc32-2) hosts two zone clusters. Zone cluster zc1 (zones zc1-z1 and zc1-z2) runs the failover resource group service-rg containing service-lh-rs, service-hasp-rs, mysql-rs and tomcat-rs, backed by the zpool "services". Zone cluster zc2 (zones zc2-z1 and zc2-z2) runs the resource groups apache-rg (apache-rs) and shared-ip-rg (shared-ip-rs) for the scalable Apache service.]

4.1 Solaris Cluster Installation


Start both nodes. In case you don't want the console window open all the time, start the VirtualBox guests using the VRDP protocol. The following ports were configured for the guests:

S10-U7-SC-32U2-1   3390
S10-U7-SC-32U2-2   3391

scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-1 --type vrdp
scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-2 --type vrdp

The console can be reached via the rdesktop application.

Console for s10-sc32-1:

scdemo@vorlon$ rdesktop localhost:3390

Console for s10-sc32-2:

scdemo@vorlon$ rdesktop localhost:3391

4.1.1 First node cluster installation (s10-sc32-1)


Copy the Solaris Cluster archive to the cluster node, unpack the archive and start the installer. In this case X11 forwarding through ssh is used:

scdemo@vorlon$ scp /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip root@s10-sc32-1:/var/tmp
scdemo@vorlon$ ssh -g -X s10-sc32-1 -l root
s10-sc32-1 # cd /var/tmp
s10-sc32-1 # mkdir SC
s10-sc32-1 # cd SC
s10-sc32-1 # unzip ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-1 # rm ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-1 # cd Solaris_x86
s10-sc32-1 # ./installer

Follow the instructions on the screen to install the Sun Cluster framework software and data services on the node. Select the following for installation:

- Sun Cluster 3.2 01/09
- Sun Cluster Agents 3.2 01/09
- All Shared Components

Choose Configure Later when prompted whether to configure Sun Cluster framework software. After installation is finished, you can view any available installation log. Add /usr/cluster/bin to $PATH and /usr/cluster/man to $MANPATH within $HOME/.profile for user root.

4.1.2 First node cluster configuration (s10-sc32-1)


Allow RPC communication for external systems:

s10-sc32-1 # svccfg -s svc:/network/rpc/bind setprop config/local_only = false
s10-sc32-1 # svcadm refresh svc:/network/rpc/bind

Enable remote access to the webconsole:

s10-sc32-1 # svccfg -s system/webconsole setprop options/tcp_listen = true
s10-sc32-1 # svcadm refresh system/webconsole

Install the first cluster node:

- the cluster name is set to s10-sc32-demo
- the lofi option is used for global devices
- the nodes s10-sc32-1 and s10-sc32-2 are part of the cluster
- the default IP subnet of 172.16.0.0 is used for the cluster interconnect. If you share the interconnect of multiple clusters on the same public IP subnet, you need to make sure to configure a unique IP subnet for each cluster.
- e1000g1 is the network interface used for the cluster interconnect, which is attached to the switch etherstub2
- global fencing is disabled

s10-sc32-1 # /usr/cluster/bin/scinstall \
-i \
-C s10-sc32-demo \
-F \
-G lofi \
-T node=s10-sc32-1,node=s10-sc32-2,authtype=sys \
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
-A trtype=dlpi,name=e1000g1 \
-B type=switch,name=etherstub2 \
-m endpoint=:e1000g1,endpoint=etherstub2 \
-e global_fencing=nofencing

Disable MPxIO for iSCSI:

s10-sc32-1 # vi /kernel/drv/iscsi.conf
=> change the mpxio-disable setup from no to yes:
mpxio-disable="yes";
Reboot the node:

s10-sc32-1 # init 6

4.1.3 Second node cluster installation (s10-sc32-2)


Copy the Solaris Cluster archive to the cluster node, unpack the archive and start the installer. In this case X11 forwarding through ssh is used:

scdemo@vorlon$ scp /data/SolarisCluster/3.2U2/x86-ga/suncluster_3_2u2-ga-solaris-x86.zip root@s10-sc32-2:/var/tmp
scdemo@vorlon$ ssh -g -X s10-sc32-2 -l root
s10-sc32-2 # cd /var/tmp
s10-sc32-2 # mkdir SC
s10-sc32-2 # cd SC
s10-sc32-2 # unzip ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-2 # rm ../suncluster_3_2u2-ga-solaris-x86.zip
s10-sc32-2 # cd Solaris_x86
s10-sc32-2 # ./installer

Follow the instructions on the screen to install the Sun Cluster framework software and data services on the node. Select the following for installation:

- Sun Cluster 3.2 01/09
- Sun Cluster Agents 3.2 01/09
- All Shared Components

Choose Configure Later when prompted whether to configure Sun Cluster framework software. After installation is finished, you can view any available installation log.

Add /usr/cluster/bin to $PATH and /usr/cluster/man to $MANPATH within $HOME/.profile for the main user and user root.

4.1.4 Second node cluster configuration (s10-sc32-2)


Allow RPC communication for external systems:

s10-sc32-2 # svccfg -s svc:/network/rpc/bind setprop config/local_only = false
s10-sc32-2 # svcadm refresh svc:/network/rpc/bind

Enable remote access to the webconsole:

s10-sc32-2 # svccfg -s system/webconsole setprop options/tcp_listen = true
s10-sc32-2 # svcadm refresh system/webconsole

Add the second node to the cluster:
- the cluster name to join is s10-sc32-demo
- the sponsoring node is s10-sc32-1
- the lofi option is used for global devices
- e1000g1 is the network interface used for the cluster interconnect, which is attached to the switch etherstub2

s10-sc32-2 # /usr/cluster/bin/scinstall \
-i \
-C s10-sc32-demo \
-N s10-sc32-1 \
-G lofi \
-A trtype=dlpi,name=e1000g1 \
-m endpoint=:e1000g1,endpoint=etherstub2

Disable MPxIO for iSCSI:

s10-sc32-2 # vi /kernel/drv/iscsi.conf
=> change the mpxio-disable setup from no to yes:
mpxio-disable="yes";

Reboot the node:

s10-sc32-2 # init 6
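After the second node has rebooted and joined the cluster, the cluster membership can be verified from either node (an optional check):

s10-sc32-1 # /usr/cluster/bin/clnode status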

4.2 iSCSI Initiator Configuration


Configure the iSCSI initiator on both nodes for using the iSCSI target configured in section 2.2.3:

both-nodes# iscsiadm modify discovery -s enable
both-nodes# iscsiadm add static-config iqn.1986-03.com.sun:02:51720f58-cf97-eca4-c86e-9591ed87861c,10.0.2.100
both-nodes# devfsadm -i iscsi
both-nodes# cldev refresh
both-nodes# cldev populate

s10-sc32-1 # cldev list -v
DID Device          Full Device Path
----------          ----------------
d1                  s10-sc32-2:/dev/rdsk/c3t2d0
d1                  s10-sc32-1:/dev/rdsk/c3t2d0
d2                  s10-sc32-1:/dev/rdsk/c0d0
d3                  s10-sc32-1:/dev/rdsk/c1t0d0
d4                  s10-sc32-2:/dev/rdsk/c1t0d0
d5                  s10-sc32-2:/dev/rdsk/c0d0

4.3 ZFS zpool Configuration for Data


If you want to use the storage device that serves as quorum device as part of a ZFS zpool, it is important to create the zpool first, before configuring the device as a quorum device. When ZFS adds a device to a zpool, it writes an EFI label to it, which would overwrite existing quorum device information.
In this example we use the iSCSI target from section 2.2.3 both as part of the zpool and as quorum device. Create the zpool first:

s10-sc32-1 # zpool create services /dev/rdsk/c3t2d0
s10-sc32-1 # zpool export services

4.4 Software Quorum Configuration


The software quorum feature is used automatically if fencing for the device has been disabled. In this example we configure the iSCSI target from section 2.2.3 as a software quorum device, since COMSTAR on OpenSolaris 2009.06 does not yet support SCSI-3 persistent group reservations for iSCSI targets:

s10-sc32-1 # cldevice set -p default_fencing=nofencing d1
s10-sc32-1 # clquorum add d1
s10-sc32-1 # clquorum reset
s10-sc32-1 # claccess deny-all
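The resulting quorum configuration and vote counts can be checked afterwards (an optional verification step):

s10-sc32-1 # clquorum status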

As an alternative, you can use a quorum server as the quorum device. The procedure is explained at http://docs.sun.com/app/docs/doc/820-4677/cihecfab?l=en&a=view. For the laptop configuration it would be possible to run the quorum server on the host vorlon.

4.5 IPsec Configuration for the cluster interconnect


This step is optional and uses a new feature of Solaris Cluster 3.2 01/09. It is now possible to configure IPsec on the cluster interconnect in order to protect the private TCP/IP traffic by encrypting the IP packets. Note that the cluster heartbeat packets are sent at the DLPI level, below IP, which means they are not encrypted. The following steps configure IPsec using the Internet Key Exchange (IKE) method.

Prepare /etc/inet/ipsecinit.conf on both nodes:

both-nodes# cd /etc/inet
both-nodes# cp ipsecinit.sample ipsecinit.conf

s10-sc32-1 # ifconfig e1000g1
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 2:8:20:3a:34:a3
s10-sc32-1 # ifconfig clprivnet0
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
        ether 0:0:0:0:0:1
s10-sc32-2 # ifconfig e1000g1
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
        ether 2:8:20:d3:bf:1a
s10-sc32-2 # ifconfig clprivnet0
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
        ether 0:0:0:0:0:2

s10-sc32-1 # vi ipsecinit.conf
{laddr 172.16.0.129 raddr 172.16.0.130} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {auth_algs any encr_algs any sa shared}

s10-sc32-2 # vi ipsecinit.conf
{laddr 172.16.0.130 raddr 172.16.0.129} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.2 raddr 172.16.4.1} ipsec {auth_algs any encr_algs any sa shared}

Prepare /etc/inet/ike/config on both nodes:

both-nodes# cd /etc/inet/ike
both-nodes# cp config.sample config

s10-sc32-1 # vi config
{
    label "clusternode1-priv-physical1-clusternode2-priv-physical1"
    local_addr 172.16.0.129
    remote_addr 172.16.0.130
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}
{
    label "clusternode1-priv-privnet0-clusternode2-priv-privnet0"
    local_addr 172.16.4.1
    remote_addr 172.16.4.2
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}

s10-sc32-2 # vi config
{
    label "clusternode2-priv-physical1-clusternode1-priv-physical1"
    local_addr 172.16.0.130
    remote_addr 172.16.0.129
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}
{
    label "clusternode2-priv-privnet0-clusternode1-priv-privnet0"
    local_addr 172.16.4.2
    remote_addr 172.16.4.1
    p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
    p2_pfs 5
    p2_idletime_secs 30
}

both-nodes# /usr/lib/inet/in.iked -c -f /etc/inet/ike/config
in.iked: Configuration file /etc/inet/ike/config syntactically checks out.

Set up the entries for pre-shared keys in /etc/inet/secret/ike.preshared on both nodes:

both-nodes# cd /etc/inet/secret
s10-sc32-1 # pktool genkey keystore=file outkey=ikekey keytype=3des keylen=192 print=y
Key Value ="329b7f792c5854dfd654674adf9220c45851dc61291c893b"

s10-sc32-1 # vi ike.preshared
{
    localidtype IP
    localid 172.16.0.129
    remoteidtype IP
    remoteid 172.16.0.130
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
    localidtype IP
    localid 172.16.4.1
    remoteidtype IP
    remoteid 172.16.4.2
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

s10-sc32-2 # vi ike.preshared
{
    localidtype IP
    localid 172.16.0.130
    remoteidtype IP
    remoteid 172.16.0.129
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
    localidtype IP
    localid 172.16.4.2
    remoteidtype IP
    remoteid 172.16.4.1
    key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

both-nodes# svcadm enable svc:/network/ipsec/ike:default
both-nodes# svcadm restart svc:/network/ipsec/policy:default
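Whether the IPsec policy entries are active can be checked on each node (an optional verification step):

both-nodes# ipsecconf -l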

4.6 Zone Cluster Configuration


Create the ZFS file system for the zone root paths on each cluster node:

s10-sc32-1 # zfs create -o mountpoint=/zones rpool/zones
s10-sc32-2 # zfs create -o mountpoint=/zones rpool/zones

4.6.1 First Zone Cluster Configuration (zc1)


Create the configuration file for the first zone cluster, named zc1:

s10-sc32-1 # vi /var/tmp/zc1.txt
create
set zonepath=/zones/zc1
set brand=cluster
set enable_priv_net=true
set ip-type=shared
set autoboot=true
add node
set physical-host=s10-sc32-1
set hostname=zc1-z1
add net
set address=10.0.2.140
set physical=e1000g0
end
end
add node
set physical-host=s10-sc32-2
set hostname=zc1-z2
add net
set address=10.0.2.141
set physical=e1000g0
end
end
add net
set address=10.0.2.130
end
add dataset
set name=services
end
add sysid
set system_locale=C
set terminal=vt220
set security_policy=NONE
set name_service=NONE
set nfs4_domain=dynamic
set timezone=MET
set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc1:

s10-sc32-1 # clzc configure -f /var/tmp/zc1.txt zc1
s10-sc32-1 # clzc verify zc1
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"

Install the zone cluster zc1:

s10-sc32-1 # clzc install zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"

Note that this step can take a while. It populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc1:

s10-sc32-1 # clzc boot zc1
=> on s10-sc32-1: zlogin -C zc1
=> on s10-sc32-2: zlogin -C zc1

Perform the following steps in both zones, zc1-z1 and zc1-z2:

Enable SSH login for user root:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setup from no to yes:
PermitRootLogin yes

both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
#
# logical hosts
10.0.2.130 s10-sc32-lh1
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default
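The overall state of the zone cluster can be checked from the global zone (an optional verification step):

s10-sc32-1 # /usr/cluster/bin/clzc status zc1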

4.6.2 Second Zone Cluster Configuration (zc2)


Create the configuration file for the second zone cluster, named zc2:

s10-sc32-1 # vi /var/tmp/zc2.txt
create
set zonepath=/zones/zc2
set brand=cluster
set enable_priv_net=true
set ip-type=shared
set autoboot=true
add node
set physical-host=s10-sc32-1
set hostname=zc2-z1
add net
set address=10.0.2.142
set physical=e1000g0
end
end
add node
set physical-host=s10-sc32-2
set hostname=zc2-z2
add net
set address=10.0.2.143
set physical=e1000g0
end
end
add net
set address=10.0.2.131
end
add sysid
set system_locale=C
set terminal=vt220
set security_policy=NONE
set name_service=NONE
set nfs4_domain=dynamic
set timezone=MET
set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc2:

s10-sc32-1 # clzc configure -f /var/tmp/zc2.txt zc2
s10-sc32-1 # clzc verify zc2
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc2"

Install the zone cluster zc2:

s10-sc32-1 # clzc install zc2
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc2"

Note that this step can take a while. It populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc2:

s10-sc32-1 # clzc boot zc2
=> on s10-sc32-1: zlogin -C zc2
=> on s10-sc32-2: zlogin -C zc2

Perform the following steps in both zones, zc2-z1 and zc2-z2:

Enable SSH login for user root:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setup from no to yes:
PermitRootLogin yes

both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2
#
# logical hosts
10.0.2.131 s10-sc32-lh2
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default

4.7 Resource Group and HA ZFS Configuration (zc1)


Register the SUNW.gds and SUNW.HAStoragePlus resource types and create the resource group service-rg, the resource service-hasp-rs for the zpool and the resource service-lh-rs for the logical host, on one node:

zc1-z1 # clrg create service-rg
zc1-z1 # clrt register SUNW.HAStoragePlus
zc1-z1 # clrt register SUNW.gds
zc1-z1 # clrs create -g service-rg -t HAStoragePlus -p Zpools=services service-hasp-rs
zc1-z1 # clrslh create -g service-rg -h s10-sc32-lh1 service-lh-rs
zc1-z1 # clrg online -eM service-rg
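The state of the new resource group and its resources can be checked afterwards (an optional verification step):

zc1-z1 # clrg status service-rg
zc1-z1 # clrs status -g service-rg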

4.8 HA MySQL Configuration (zc1)


This example uses the MySQL 4.0.31 packages installed by default into /usr/sfw with Solaris 10 05/09 (Update 7).

Install patch 126033 on both nodes (s10-sc32-1 and s10-sc32-2):

vorlon# scp /data/SolarisCluster/126033-09.zip root@s10-sc32-1:/var/tmp
vorlon# scp /data/SolarisCluster/126033-09.zip root@s10-sc32-2:/var/tmp
both-nodes# cd /var/tmp
both-nodes# unzip 126033-09.zip
both-nodes# patchadd 126033-09

Configure the mysql user and group in both zones:

zc1-z1 # groupadd -g 1000 mysql
zc1-z1 # useradd -g 1000 -d /services/mysql -s /bin/ksh mysql
zc1-z2 # groupadd -g 1000 mysql
zc1-z2 # useradd -g 1000 -d /services/mysql -s /bin/ksh mysql

Create a link from /usr/sfw/sbin/mysqld to /usr/sfw/bin/mysqld on both nodes. This is required since the HA MySQL agent expects mysqld within either bin or libexec:

s10-sc32-1 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld
s10-sc32-2 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld

Configure MySQL on the node where the service-rg resource group is online:

zc1-z1 # clrg status service-rg

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
service-rg    zc1-z1       No           Online
              zc1-z2       No           Offline

s10-sc32-1 # zfs create services/mysql
zc1-z1 # mkdir -p /services/mysql/logs
zc1-z1 # mkdir -p /services/mysql/innodb
zc1-z1 # cp /usr/sfw/share/mysql/my-small.cnf /services/mysql/my.cnf
zc1-z1 # vi /services/mysql/my.cnf
--- /usr/sfw/share/mysql/my-small.cnf   Thu Jun 12 14:10:10 2008
+++ /services/mysql/my.cnf      Wed Oct 14 18:14:17 2009
@@ -18,7 +18,7 @@
 [client]
 #password = your_password
 port = 3306
-socket = /tmp/mysql.sock
+socket = /tmp/s10-sc32-lh1.sock
 # Here follows entries for some specific programs
@@ -25,7 +25,7 @@
 # The MySQL server
 [mysqld]
 port = 3306
-socket = /tmp/mysql.sock
+socket = /tmp/s10-sc32-lh1.sock
 skip-locking
 key_buffer = 16K
 max_allowed_packet = 1M
@@ -50,19 +50,19 @@
 #skip-bdb
 # Uncomment the following if you are using InnoDB tables
-#innodb_data_home_dir = /var/mysql/
-#innodb_data_file_path = ibdata1:10M:autoextend
-#innodb_log_group_home_dir = /var/mysql/
-#innodb_log_arch_dir = /var/mysql/
+innodb_data_home_dir = /services/mysql/innodb
+innodb_data_file_path = ibdata1:10M:autoextend
+innodb_log_group_home_dir = /services/mysql/innodb
+innodb_log_arch_dir = /services/mysql/innodb
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
-#innodb_buffer_pool_size = 16M
-#innodb_additional_mem_pool_size = 2M
+innodb_buffer_pool_size = 16M
+innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
-#innodb_log_file_size = 5M
-#innodb_log_buffer_size = 8M
-#innodb_flush_log_at_trx_commit = 1
-#innodb_lock_wait_timeout = 50
+innodb_log_file_size = 5M
+innodb_log_buffer_size = 8M
+innodb_flush_log_at_trx_commit = 1
+innodb_lock_wait_timeout = 50
 [mysqldump]
 quick
@@ -83,3 +83,6 @@
 [mysqlhotcopy]
 interactive-timeout
+
+bind-address=s10-sc32-lh1
+

zc1-z1 # /usr/sfw/bin/mysql_install_db --datadir=/services/mysql
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
091014 18:29:33  /usr/sfw/sbin/mysqld: Shutdown Complete

To start mysqld at boot time you have to copy support-files/mysql.server
to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/sfw/bin/mysqladmin -u root password 'new-password'
/usr/sfw/bin/mysqladmin -u root -h zc1-z1 password 'new-password'
See the manual for more instructions.

You can start the MySQL daemon with:
/usr/sfw/bin/mysqld_safe &

You can test the MySQL daemon with the tests in the 'mysql-test' directory:
cd /usr/sfw/mysql/mysql-test; ./mysql-test-run

Please report any problems with the /usr/sfw/bin/mysqlbug script!

The latest information about MySQL is available on the web at http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com

zc1-z1 # chown -R mysql:mysql /services/mysql

Manually test the MySQL configuration:

zc1-z1 # /usr/sfw/sbin/mysqld --defaults-file=/services/mysql/my.cnf --basedir=/usr/sfw --datadir=/services/mysql --user=mysql --pid-file=/services/mysql/mysqld.pid &
zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> exit;
Bye

Configure the MySQL admin password for the admin user:

zc1-z1 # /usr/sfw/bin/mysqladmin -S /tmp/s10-sc32-lh1.sock password 'mysqladmin'

Allow access to the database for both cluster nodes for user root:

zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot -p'mysqladmin'
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> GRANT ALL ON *.* TO 'root'@'zc1-z1' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL ON *.* TO 'root'@'zc1-z2' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.00 sec)

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z1';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z2';
Query OK, 0 rows affected (0.01 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> exit;
Bye

Create and set up the HA MySQL resource configuration files:

zc1-z1 # mkdir /services/mysql/cluster-config
zc1-z1 # cd /services/mysql/cluster-config
zc1-z1 # cp /opt/SUNWscmys/util/ha_mysql_config .
zc1-z1 # cp /opt/SUNWscmys/util/mysql_config .
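
Optionally, before editing the configuration files, you can double-check the grants that were just added (a minimal sketch reusing the socket and root password from above; the tabular output is omitted):

zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot -p'mysqladmin' \
   -e "SELECT User, Host, Grant_priv FROM mysql.user WHERE User='root';"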

zc1-z1 # vi ha_mysql_config
RS=mysql-rs
RG=service-rg
PORT=3306
LH=service-lh-rs
SCALABLE=
LB_POLICY=
HAS_RS=service-hasp-rs
ZONE=
ZONE_BT=
PROJECT=
BASEDIR=/usr/sfw
DATADIR=/services/mysql
MYSQLUSER=mysql
MYSQLHOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/services/mysql/logs
CHECK=YES
NDB_CHECK=

zc1-z1 # vi mysql_config
MYSQL_BASE=/usr/sfw
MYSQL_USER=root
MYSQL_PASSWD=mysqladmin
MYSQL_HOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/s10-sc32-lh1.sock
MYSQL_NIC_HOSTNAME="zc1-z1 zc1-z2"
MYSQL_DATADIR=/services/mysql
NDB_CHECK=

zc1-z1 # /opt/SUNWscmys/util/mysql_register -f /services/mysql/cluster-config/mysql_config
MySQL version 4 detected on 5.10
Check if the MySQL server is running and accepting connections
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-,
Reload- and Shutdown-privileges to user table for mysql database for host zc1-z1
Add SUPER privilege for fmuser@zc1-z1
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-,
Reload- and Shutdown-privileges to user table for mysql database for host zc1-z2
Add SUPER privilege for fmuser@zc1-z2
Create test-database sc3_test_database
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z1
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z2
Flush all privileges
Mysql configuration for HA is done

zc1-z1 # kill -TERM `cat /services/mysql/mysqld.pid`

zc1-z1 # /opt/SUNWscmys/util/ha_mysql_register -f /services/mysql/cluster-config/ha_mysql_config
sourcing /services/mysql/cluster-config/ha_mysql_config and create a working copy under /opt/SUNWscmys/util/ha_mysql_config.work
Registration of resource mysql-rs succeeded.
remove the working copy /opt/SUNWscmys/util/ha_mysql_config.work

zc1-z1 # clrs enable mysql-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Online     Online
                 zc1-z2       Offline    Offline

zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Offline    Offline
                 zc1-z2       Online     Online
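
Optionally, switch the resource group back so that zc1-z1 is the primary again (this simply reuses the switch command from above with the other node as the target):

zc1-z1 # clrg switch -n zc1-z1 service-rg
zc1-z1 # clrs status mysql-rs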

4.9 HA Tomcat Configuration (zc1)


Install Patch 126072-02 on both nodes (s10-sc32-1 and s10-sc32-2):

vorlon# scp /data/SolarisCluster/126072-02.zip root@s10-sc32-1:/var/tmp
vorlon# scp /data/SolarisCluster/126072-02.zip root@s10-sc32-2:/var/tmp
both-nodes# cd /var/tmp
both-nodes# unzip 126072-02.zip
both-nodes# patchadd 126072-02

An optional check that the patch is installed on both nodes is sketched below, after the resource group status.

Configure Tomcat on the node where the service-rg resource group is online:

zc1-z1 # clrg status service-rg

=== Cluster Resource Groups ===

Group Name      Node Name    Suspended    Status
----------      ---------    ---------    ------
service-rg      zc1-z1       No           Online
                zc1-z2       No           Offline
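
To verify the patch installation on both nodes (a minimal sketch; showrev -p lists the patches installed on a Solaris 10 node):

both-nodes# showrev -p | grep 126072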

s10-sc32-1 # zfs create services/tomcat

zc1-z1 # vi /services/tomcat/env.ksh
#!/bin/ksh
CATALINA_HOME=/usr/apache/tomcat55
CATALINA_BASE=/services/tomcat
JAVA_HOME=/usr/java
export CATALINA_HOME CATALINA_BASE JAVA_HOME

zc1-z1 # chown webservd:webservd /services/tomcat/env.ksh
zc1-z1 # cd /var/apache/tomcat55
zc1-z1 # tar cpf - . | ( cd /services/tomcat ; tar xpf - )
zc1-z1 # cp /services/tomcat/conf/server-minimal.xml /services/tomcat/conf/server.xml
zc1-z1 # cd /services/tomcat
zc1-z1 # mkdir cluster-config
zc1-z1 # chown webservd:webservd cluster-config
zc1-z1 # cd cluster-config
zc1-z1 # cp /opt/SUNWsctomcat/util/sctomcat_config .
zc1-z1 # cp /opt/SUNWsctomcat/bin/pfile .
zc1-z1 # chown webservd:webservd pfile

zc1-z1 # vi pfile
EnvScript=/services/tomcat/env.ksh
User=webservd
Basepath=/usr/apache/tomcat55
Host=s10-sc32-lh1
Port=8080
TestCmd="get /index.jsp"
ReturnString="CATALINA"
Startwait=20

zc1-z1 # vi sctomcat_config
RS=tomcat-rs
RG=service-rg
PORT=8080
LH=service-lh-rs
NETWORK=true
SCALABLE=false
PFILE=/services/tomcat/cluster-config/pfile
HAS_RS=service-hasp-rs
ZONE=
ZONE_BT=
PROJECT=

zc1-z1 # /opt/SUNWsctomcat/util/sctomcat_register -f /services/tomcat/cluster-config/sctomcat_config
sourcing /services/tomcat/cluster-config/sctomcat_config and create a working copy under /opt/SUNWsctomcat/util/sctomcat_config.work
Registration of resource tomcat-rs succeeded.
remove the working copy /opt/SUNWsctomcat/util/sctomcat_config.work

zc1-z1 # clrs enable tomcat-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Online     Online
                 zc1-z2       Offline    Offline

zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Offline    Offline
                 zc1-z2       Online     Online

Start Firefox on vorlon and verify the Tomcat page at http://s10-sc32-lh1:8080/.
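
Alternatively, you can check the same page from the command line inside one of the zone cluster zones (a minimal sketch; it assumes wget from /usr/sfw/bin is available in the zone and looks for the ReturnString configured in the pfile):

zc1-z1 # /usr/sfw/bin/wget -q -O - http://s10-sc32-lh1:8080/index.jsp | grep CATALINA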

4.10 Scalable Apache Configuration (zc2)


Create a failover resource group for the shared address:

zc2-z1 # clrg create shared-ip-rg

zc2-z1 # clrssa create -g shared-ip-rg -h s10-sc32-lh2 shared-ip-rs
zc2-z1 # clrg online -eM shared-ip-rg

Prepare the apache configuration file:

both-zones# cd /etc/apache2/
both-zones# cp httpd.conf-example httpd.conf
both-zones# vi httpd.conf
--- httpd.conf-example  Sat Jan 24 17:01:06 2009
+++ httpd.conf          Tue Oct  6 13:28:10 2009
@@ -60,7 +60,7 @@
 #
 <IfModule !mpm_winnt.c>
 <IfModule !mpm_netware.c>
-#LockFile /var/apache2/logs/accept.lock
+LockFile /var/apache2/logs/accept.lock
 </IfModule>
 </IfModule>

@@ -84,7 +84,7 @@
 # identification number when it starts.
 #
 <IfModule !mpm_netware.c>
-PidFile /var/run/apache2/httpd.pid
+PidFile /var/apache2/logs/httpd.pid
 </IfModule>

 #
@@ -343,7 +343,7 @@
 # You will have to access it by its address anyway, and this will make
 # redirections work in a sensible way.
 #
-ServerName 127.0.0.1
+ServerName 10.0.2.131
 #
 # UseCanonicalName: Determines how Apache constructs self-referencing

The default httpd.conf file uses /var/apache2/htdocs as DocumentRoot.

Configure the scalable resource group for apache:

zc2-z1 # clrt register SUNW.apache
zc2-z1 # clrg create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=shared-ip-rg apache-rg
zc2-z1 # clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin -p Resource_dependencies=shared-ip-rs -p Scalable=True -p Port_list=80/tcp apache-rs
zc2-z1 # clrg online -eM apache-rg

Start Firefox on vorlon and open the demo URL at http://s10-sc32-lh2/scdemo/.

The default is a 1:1 load-balancing weight between the nodes. You can change the weight to e.g. 4:3 with:

zc2-z1 # clrs set -p Load_balancing_weights=4@1,3@2 apache-rs
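
At any point you can confirm that the scalable service is online on both zone cluster nodes at the same time (an optional check using the same status commands as for the failover services above; output omitted):

zc2-z1 # clrg status apache-rg
zc2-z1 # clrs status apache-rs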
