Whitepaper
Practicing Solaris Cluster using VirtualBox: Combining technologies to work
Example configuration to run a training and development cluster environment on a single system
This white paper describes how to configure a training and development environment for Solaris 10 and Solaris Cluster 3.2 on a physical system running OpenSolaris, using technologies like VirtualBox, software quorum, Solaris Container Clusters (Zone Clusters), Crossbow, IPsec and COMSTAR iSCSI.
Table of Contents
1     Introduction ............................................................ 3
2     Host Configuration ...................................................... 4
2.1   BIOS Configuration ...................................................... 4
2.2   OpenSolaris Configuration ............................................... 4
2.2.1 Network Configuration ................................................... 4
2.2.2 Filesystem Configuration ................................................ 7
2.2.3 COMSTAR / iSCSI Target Configuration .................................... 8
2.3   Install VirtualBox ...................................................... 9
2.4   Install rdesktop ........................................................ 9
2.5   Download Solaris 10 05/09 (Update 7) ISO image .......................... 9
2.6   Download Solaris Cluster 3.2 01/09 archive .............................. 9
3     VirtualBox Configuration ............................................... 10
3.1   VirtualBox Guest Configuration ......................................... 10
3.1.1 Virtual Disk Configuration ............................................. 11
3.1.2 Virtual Machine Configuration .......................................... 11
3.2   VirtualBox Guest Solaris Configuration ................................. 13
3.2.1 First Guest Installation (S10-U7-SC-32U2-1) ............................ 13
3.2.2 Second Guest Installation (S10-U7-SC-32U2-2) ........................... 15
3.3   Getting Crash dumps from Solaris guests ................................ 18
3.3.1 Booting Solaris with kernel debugger enabled ........................... 18
3.3.2 How to break into the kernel debugger .................................. 18
3.3.3 Forcing a crash dump ................................................... 19
3.3.4 Crash dump analysis with Solaris CAT ................................... 19
4     Solaris Cluster Configuration .......................................... 20
4.1   Solaris Cluster Installation ........................................... 21
4.1.1 First node cluster installation (s10-sc32-1) ........................... 21
4.1.2 First node cluster configuration (s10-sc32-1) .......................... 22
4.1.3 Second node cluster installation (s10-sc32-2) .......................... 23
4.1.4 Second node cluster configuration (s10-sc32-2) ......................... 23
4.2   iSCSI Initiator Configuration .......................................... 24
4.3   ZFS zpool Configuration for Data ....................................... 24
4.4   Software Quorum Configuration .......................................... 25
4.5   IPsec Configuration for the cluster interconnect ....................... 25
4.6   Zone Cluster Configuration ............................................. 28
4.6.1 First Zone Cluster Configuration (zc1) ................................. 28
4.6.2 Second Zone Cluster Configuration (zc2) ................................ 30
4.7   Resource Group and HA ZFS Configuration (zc1) .......................... 32
4.8   HA MySQL Configuration (zc1) ........................................... 33
4.9   HA Tomcat Configuration (zc1) .......................................... 39
A     References ............................................................. 42
1 Introduction
For developers it is often convenient to have all the tools necessary for their work in one place, ideally on a laptop for maximum mobility. For system administrators it is often critical to have a test system on which to try things out and learn about new features. Of course, such a system needs to be low cost and transportable to wherever it is needed. HA clusters, however, are often perceived as complex to set up and resource hungry in terms of hardware requirements.

This white paper explains how to set up a single x86 based system (such as a laptop) running OpenSolaris as a training and development environment for Solaris 10 / Solaris Cluster 3.2, using VirtualBox to build a two-node cluster. The configuration can then be used to practice various technologies. On the host, OpenSolaris technologies such as Crossbow (to create virtual network adapters), COMSTAR (to export iSCSI targets, which the Solaris Cluster nodes use as shared storage and quorum device through their iSCSI initiators), ZFS (to export a ZFS volume as an iSCSI target and to provide a failover file system within the cluster) and IPsec (to secure the cluster private interconnect traffic) are combined to support the VirtualBox guests running Solaris 10 / Solaris Cluster 3.2. Within the cluster, Solaris Cluster technologies such as software quorum and zone clusters are used to set up HA MySQL and HA Tomcat as failover services running in one virtual cluster. A second virtual cluster is used to show how to set up Apache as a scalable service.

The instructions can be used as a step-by-step guide for any x86 based system capable of running OpenSolaris. To check whether your system qualifies, simply boot the OpenSolaris live CD and confirm with the Device Driver Utility (DDU) that all required components are supported. The hardware compatibility list can be found at http://www.sun.com/bigadmin/hcl/.
2 Host Configuration
The example host system used throughout this white paper is a Toshiba Tecra M10 laptop with the following hardware specifications:

- 4 GB main memory
- Intel Core 2 Duo P8400 @ 2.26 GHz
- 160 GB SATA hard disk
- 1 physical network NIC (1000 Mbit): e1000g0
- 1 wireless network NIC (54 Mbit): iwh0
The system should have a minimum of 3 GB of main memory in order to host the two VirtualBox Solaris guest systems.
The following diagram shows the virtual network setup:

[Figure: Virtual network on host vorlon. The physical interface e1000g0 provides NAT connectivity to the outside. Two etherstubs act as virtual switches: etherstub1 connects vnic11 (internal host address), vnic12 (e1000g0 of guest s10-sc32-1) and vnic13 (e1000g0 of guest s10-sc32-2) for the public network; etherstub2 connects vnic21 (e1000g1 of guest s10-sc32-1) and vnic22 (e1000g1 of guest s10-sc32-2) for the cluster interconnect. Both guests run Solaris 10 05/09 (Update 7) and configure clprivnet0 on top of the interconnect.]
The following IP addresses will be used:

IP Address    Alias          Comment
10.0.2.100    vorlon-int     vnic11
10.0.2.121    s10-sc32-1     e1000g0 / vnic12
10.0.2.122    s10-sc32-2     e1000g0 / vnic13
10.0.2.130    s10-sc32-lh1
10.0.2.131    s10-sc32-lh2
10.0.2.140    zc1-z1
10.0.2.141    zc1-z2
10.0.2.142    zc2-z1
10.0.2.143    zc2-z2
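All guest addresses above live in the 10.0.2.0/24 subnet that is configured in /etc/inet/netmasks below. As an aside, the subnet membership test the IP stack applies (IP AND netmask equals network) can be sketched in plain POSIX shell; this helper is purely illustrative and not part of the original setup:

```shell
# ip_in_subnet IP NETWORK NETMASK
# Succeeds (exit 0) when IP bitwise-ANDed with NETMASK, octet by octet,
# equals NETWORK -- i.e. the address belongs to the given subnet.
ip_in_subnet() {
    ip=$1 net=$2 mask=$3
    oldIFS=$IFS; IFS=.
    set -- $ip;   a1=$1 a2=$2 a3=$3 a4=$4
    set -- $net;  b1=$1 b2=$2 b3=$3 b4=$4
    set -- $mask; c1=$1 c2=$2 c3=$3 c4=$4
    IFS=$oldIFS
    [ $((a1 & c1)) -eq "$b1" ] && [ $((a2 & c2)) -eq "$b2" ] &&
    [ $((a3 & c3)) -eq "$b3" ] && [ $((a4 & c4)) -eq "$b4" ]
}

ip_in_subnet 10.0.2.121 10.0.2.0 255.255.255.0 && echo "internal"   # prints "internal"
ip_in_subnet 10.0.1.42  10.0.2.0 255.255.255.0 || echo "external"   # prints "external"
```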
Disable the NWAM service:

vorlon# svcadm disable nwam

Create the virtual network:

vorlon# dladm create-etherstub etherstub1
vorlon# dladm create-vnic -l etherstub1 vnic11
vorlon# dladm create-vnic -l etherstub1 vnic12
vorlon# dladm create-vnic -l etherstub1 vnic13
vorlon# dladm create-etherstub etherstub2
vorlon# dladm create-vnic -l etherstub2 vnic21
vorlon# dladm create-vnic -l etherstub2 vnic22
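The etherstubs and vnics just created can be listed with dladm; this quick verification step is our addition to the original walkthrough:

```
vorlon# dladm show-etherstub
vorlon# dladm show-vnic
```

The second command also prints the randomly generated MAC address of each vnic, which is needed later when configuring the VirtualBox guests.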
Add the IP addresses and aliases to /etc/inet/hosts:

vorlon# vi /etc/inet/hosts
::1        vorlon vorlon.local localhost loghost
127.0.0.1  vorlon.local localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2
Add the default netmasks for the used subnets to /etc/inet/netmasks:

vorlon# vi /etc/inet/netmasks
10.0.1.0 255.255.255.0
10.0.2.0 255.255.255.0

Configure the internal host IP used to access the network of the VirtualBox guests:

vorlon# vi /etc/hostname.vnic11
vorlon-int

Always plumb the vnics used by the VirtualBox guests when booting:

vorlon# touch /etc/hostname.vnic12 /etc/hostname.vnic13 /etc/hostname.vnic21 /etc/hostname.vnic22

If you want the VirtualBox guests to be able to reach the external network connected to either e1000g0 or iwh0, set up ipfilter to perform Network Address Translation (NAT) for the internal virtual network:
vorlon# vi /etc/ipf/ipf.conf
pass in all
pass out all

vorlon# vi /etc/ipf/ipnat.conf
map e1000g0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map e1000g0 10.0.2.0/24 -> 0/32
map iwh0 10.0.2.0/24 -> 0/32 portmap tcp/udp auto
map iwh0 10.0.2.0/24 -> 0/32

If you want to make e.g. the Tomcat URL configured later in section 4.9 accessible from outside of the host's external network, add the following line to /etc/ipf/ipnat.conf:

rdr e1000g0 0.0.0.0/0 port 8080 -> 10.0.2.130 port 8080 tcp

Configure the public network on e1000g0 depending on your individual setup. The following example assumes a static IP configuration:

vorlon# vi /etc/hostname.e1000g0
10.0.1.42

vorlon# vi /etc/defaultrouter
10.0.1.1

vorlon# vi /etc/resolv.conf
nameserver 10.0.1.1

vorlon# vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

Enable the static networking configuration:

vorlon# svcadm enable svc:/network/physical:default

Enable the service for ipfilter:

vorlon# svcadm enable svc:/network/ipfilter:default

Enable IPv4 forwarding:

vorlon# routeadm -u -e ipv4-forwarding
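Once ipfilter is enabled, you can confirm that the NAT rules were actually loaded; this check is our addition, using the standard IP Filter tools:

```
vorlon# ipnat -l
```

The four map rules and, if configured, the rdr rule should appear in the output.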
Create ZFS file systems for crash dumps (/var/crash), downloads of various files (/data) and VirtualBox images (/VirtualBox-Images):

zfs create -o mountpoint=/var/crash -o compression=on rpool/crash
mkdir /var/crash/vorlon
zfs create -o mountpoint=/data rpool/data
zfs create -o mountpoint=/VirtualBox-Images rpool/vbox-images
chown scdemo:staff /data /VirtualBox-Images
3 VirtualBox Configuration
3.1 VirtualBox Guest Configuration
The following diagram describes the desired disk configuration:
[Figure: Disk configuration. Each VirtualBox guest (s10-sc32-1 and s10-sc32-2) boots from a local virtual disk c0d0 holding its rpool, backed on the laptop vorlon by the files S10-U7-SC32U2-1.vdi and S10-U7-SC32U2-2.vdi. The host exports disk c2t0d0 as an iSCSI target; both guests access it through their iSCSI initiators as shared disk c3t2d0 (DID device d1), which holds the zpool "services".]
The following shows which vnic and MAC address is used by which VirtualBox guest:

VirtualBox Guest Name   VNIC used   MAC address
S10-U7-SC-32U2-1        vnic12      020820D5479D
S10-U7-SC-32U2-1        vnic21      0208203A34A3
S10-U7-SC-32U2-2        vnic13      020820E29994
S10-U7-SC-32U2-2        vnic22      020820D3BF1A
It is critical that the MAC address configured for the VirtualBox guest exactly matches the MAC address configured for the corresponding vnic; otherwise network communication will not work.

Configure the virtual machines:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-1 \
  --ostype Solaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-1' is created and registered.
UUID: 44b912d0-5e3d-4063-9db4-47b3f5575701
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-1/S10-U7-SC-32U2-1.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 \
  --memory 1280 --hda /VirtualBox-Images/S10-U7-SC-32U2-1.vdi \
  --boot1 disk --boot2 dvd \
  --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86dvd.iso \
  --nic1 bridged --nictype1 82540EM --cableconnected1 on \
  --bridgeadapter1 vnic12 --macaddress1 020820D5479D \
  --nic2 bridged --nictype2 82540EM --cableconnected2 on \
  --bridgeadapter2 vnic21 --macaddress2 0208203A34A3 \
  --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3390
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

scdemo@vorlon$ /opt/VirtualBox/VBoxManage createvm --name S10-U7-SC-32U2-2 \
  --ostype OpenSolaris_64 --register
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

Virtual machine 'S10-U7-SC-32U2-2' is created and registered.
UUID: ce23d951-832b-4d50-9707-495c7ce0d30b
Settings file: '/export/home/scdemo/.VirtualBox/Machines/S10-U7-SC-32U2-2/S10-U7-SC-32U2-2.xml'

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 \
  --memory 1280 --hda /VirtualBox-Images/S10-U7-SC-32U2-2.vdi \
  --boot1 disk --boot2 dvd \
  --dvd /data/isos/Solaris10/Update7/x86-ga/sol-10-u7-ga-x86dvd.iso \
  --nic1 bridged --nictype1 82540EM --cableconnected1 on \
  --bridgeadapter1 vnic13 --macaddress1 020820E29994 \
  --nic2 bridged --nictype2 82540EM --cableconnected2 on \
  --bridgeadapter2 vnic22 --macaddress2 020820D3BF1A \
  --audio solaudio --audiocontroller ac97 --vrdp on --vrdpport 3391
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
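Note that dladm prints vnic MAC addresses in colon-separated form (e.g. 2:8:20:d5:47:9d), while VBoxManage --macaddressN expects twelve uppercase hex digits without separators (020820D5479D). A small helper to convert between the two formats; this function is purely illustrative and not part of the original procedure:

```shell
# vbox_mac: convert a colon-separated MAC address, as printed by
# 'dladm show-vnic', into the zero-padded, uppercase, separator-free
# form that 'VBoxManage modifyvm --macaddressN' expects.
vbox_mac() {
    out=""
    for octet in $(echo "$1" | tr ':' ' '); do
        # zero-pad each octet to two uppercase hex digits
        out="${out}$(printf '%02X' "0x${octet}")"
    done
    echo "$out"
}

vbox_mac 2:8:20:d5:47:9d    # prints 020820D5479D
```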
The next step is to configure static networking for s10-sc32-1. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-1 # vi /etc/inet/hosts
::1        localhost loghost
127.0.0.1  localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1 s10-sc32-1.local
10.0.2.122 s10-sc32-2
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-1 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-1 # vi /etc/hostname.e1000g0
s10-sc32-1

s10-sc32-1 # vi /etc/defaultrouter
vorlon-int

In case the host system is connected to an external network, configure a name service such as DNS:

s10-sc32-1 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-1 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

If you do not want the guest system to run the graphical login, in order to conserve some main memory, log out from the GNOME session, log in through the text console as user root and disable it:

s10-sc32-1 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-1 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:
PermitRootLogin yes
s10-sc32-1 # svcadm restart ssh

Since the host system runs two VirtualBox guests at the same time, the guest Solaris 10 system may send a lot of the following messages to syslog when the host gets loaded:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms
This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-1 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-1 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf         Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                   /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
+*.err;kern.warning;auth.notice                  /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit       /var/adm/messages
 
 *.alert;kern.err;daemon.err                     operator
 *.alert                                         root
Note that this will also prevent daemon.notice messages from being sent to the console or /var/adm/messages.

Shutdown the guest:

s10-sc32-1 # init 5

Remove the Solaris ISO image from the virtual DVD drive:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-1 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
Hostname: s10-sc32-2
IP Address: 10.0.2.122
Part of Subnet: Yes
Netmask: 255.255.255.0
Enable IPv6: No
Configure Kerberos: No
Nameservice: None
NFSv4 Domain Config: Use the NFSv4 domain derived by the system
Timezone: <correct timezone>
Time: <correct time>
Root Password: <password>
Remote services enabled: No
Standard Installation
Geographic Region: North America (or the region of your choice)
Default locale: en_US_ISO8859-15 (or the locale of your choice)
Additional Products: None
Filesystem: ZFS
Solaris software to install: Entire Distribution
Disk device: c0d0
Select for swap: 1024, rest leave default values
The next step is to configure static networking for s10-sc32-2. After the reboot, log in as user root and perform the following steps in a terminal window:

s10-sc32-2 # vi /etc/inet/hosts
::1        localhost loghost
127.0.0.1  localhost loghost
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2 s10-sc32-2.local
10.0.2.130 s10-sc32-lh1
10.0.2.131 s10-sc32-lh2
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2

s10-sc32-2 # vi /etc/inet/netmasks
10.0.2.0 255.255.255.0

s10-sc32-2 # vi /etc/hostname.e1000g0
s10-sc32-2

s10-sc32-2 # vi /etc/defaultrouter
vorlon-int

In case the host system is connected to an external network, configure a name service such as DNS:
s10-sc32-2 # vi /etc/resolv.conf
nameserver <nameserver-ip>

s10-sc32-2 # vi /etc/nsswitch.conf
=> add dns to the hosts keyword:
hosts: files dns

If you do not want the guest system to run the graphical login, in order to conserve some main memory, log out from the GNOME session, log in through the text console as user root and disable it:

s10-sc32-2 # svcadm disable svc:/application/graphical-login/gdm:default

In case you want to allow remote ssh access for the root user (assumed later):

s10-sc32-2 # vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:
PermitRootLogin yes
s10-sc32-2 # svcadm restart ssh

Since the host system runs two VirtualBox guests at the same time, the guest Solaris 10 system may send a lot of the following messages to syslog when the host gets loaded:

<date> <nodename> genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 3058 ms exceeds 2147 ms

This message is also sent to the system console and can slow down the whole system considerably. To prevent that, make the following modification to the syslog.conf file:

s10-sc32-2 # cp -p /etc/syslog.conf /etc/syslog.conf.orig
s10-sc32-2 # vi /etc/syslog.conf
--- syslog.conf.orig    Tue Mar 17 18:41:20 2009
+++ syslog.conf         Fri Oct  2 20:35:44 2009
@@ -9,8 +9,8 @@
 # that match m4 reserved words. Also, within ifdef's, arguments
 # containing commas must be quoted.
 #
-*.err;kern.notice;auth.notice                   /dev/sysmsg
-*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
+*.err;kern.warning;auth.notice                  /dev/sysmsg
+*.err;kern.debug;daemon.warning;mail.crit       /var/adm/messages
 
 *.alert;kern.err;daemon.err                     operator
 *.alert                                         root
Note that this will also prevent daemon.notice messages from being sent to the console or /var/adm/messages.

Shutdown the guest:
s10-sc32-2 # init 5

Remove the Solaris ISO image from the virtual DVD drive:

scdemo@vorlon$ /opt/VirtualBox/VBoxManage modifyvm S10-U7-SC-32U2-2 --dvd none
VirtualBox Command Line Management Interface Version 3.0.8
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
4 Solaris Cluster Configuration
[Figure: Intended cluster configuration on nodes s10-sc32-1 and s10-sc32-2. Zone cluster zc1 (zones zc1-z1 and zc1-z2) hosts the failover services using the zpool "services". Zone cluster zc2 (zones zc2-z1 and zc2-z2) hosts the scalable Apache service with resource groups apache-rg (resource apache-rs) and shared-ip-rg (resource shared-ip-rs).]
scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-1 --type vrdp
scdemo@vorlon$ /opt/VirtualBox/VBoxManage startvm S10-U7-SC-32U2-2 --type vrdp

The consoles can be reached via the rdesktop application.

Console for s10-sc32-1:

scdemo@vorlon$ rdesktop localhost:3390

Console for s10-sc32-2:

scdemo@vorlon$ rdesktop localhost:3391
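Whether both guests actually came up headless can be confirmed from the host; this check is our addition:

```
scdemo@vorlon$ /opt/VirtualBox/VBoxManage list runningvms
```

Both S10-U7-SC-32U2-1 and S10-U7-SC-32U2-2 should be listed.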
Choose Configure Later when prompted whether to configure Sun Cluster framework software. After installation is finished, you can view any available installation log. Add /usr/cluster/bin to $PATH and /usr/cluster/man to $MANPATH within $HOME/.profile for user root.
The scinstall options specify that:

- the cluster name to join is s10-sc32-demo
- the sponsoring node is s10-sc32-1
- the lofi option is used for global devices
- e1000g1 is the network interface used for the cluster interconnect, which is attached to the switch etherstub2
s10-sc32-2 # /usr/cluster/bin/scinstall \
  -i \
  -C s10-sc32-demo \
  -N s10-sc32-1 \
  -G lofi \
  -A trtype=dlpi,name=e1000g1 \
  -m endpoint=:e1000g1,endpoint=etherstub2

Disable MPxIO for iSCSI:

s10-sc32-2 # vi /kernel/drv/iscsi.conf
=> change the mpxio-disable setting from no to yes:
mpxio-disable="yes";

Reboot the node:

s10-sc32-2 # init 6
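After the second node has rebooted and joined the cluster, membership can be checked from either node; this verification step is our addition, using the standard Solaris Cluster 3.2 command set:

```
s10-sc32-1 # /usr/cluster/bin/clnode status
```

Both nodes should be reported as Online.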
In this example we use the iSCSI target from section 2.2.3 both as part of the zpool and as the quorum device. Create the zpool first:

s10-sc32-1 # zpool create services /dev/rdsk/c3t2d0
s10-sc32-1 # zpool export services
As an alternative, you can use a quorum server as the quorum device. The procedure is explained at http://docs.sun.com/app/docs/doc/820-4677/cihecfab?l=en&a=view. For the laptop configuration it would be possible to run the quorum server on the host vorlon.
        ether 0:0:0:0:0:1

s10-sc32-2 # ifconfig e1000g1
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
        ether 2:8:20:d3:bf:1a
s10-sc32-2 # ifconfig clprivnet0
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
        ether 0:0:0:0:0:2

s10-sc32-1 # vi ipsecinit.conf
{laddr 172.16.0.129 raddr 172.16.0.130} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {auth_algs any encr_algs any sa shared}

s10-sc32-2 # vi ipsecinit.conf
{laddr 172.16.0.130 raddr 172.16.0.129} ipsec {auth_algs any encr_algs any sa shared}
{laddr 172.16.4.2 raddr 172.16.4.1} ipsec {auth_algs any encr_algs any sa shared}

Prepare /etc/inet/ike/config on both nodes:

both-nodes# cd /etc/inet/ike
both-nodes# cp config.sample config

s10-sc32-1 # vi config
{
  label "clusternode1-priv-physical1-clusternode2-priv-physical1"
  local_addr 172.16.0.129
  remote_addr 172.16.0.130
  p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
  p2_pfs 5
  p2_idletime_secs 30
}
{
  label "clusternode1-priv-privnet0-clusternode2-priv-privnet0"
  local_addr 172.16.4.1
  remote_addr 172.16.4.2
  p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
  p2_pfs 5
  p2_idletime_secs 30
}

s10-sc32-2 # vi config
{
  label "clusternode2-priv-physical1-clusternode1-priv-physical1"
  local_addr 172.16.0.130
  remote_addr 172.16.0.129
  p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
  p2_pfs 5
  p2_idletime_secs 30
}
{
  label "clusternode2-priv-privnet0-clusternode1-priv-privnet0"
  local_addr 172.16.4.2
  remote_addr 172.16.4.1
  p1_xform { auth_method preshared oakley_group 5 auth_alg md5 encr_alg 3des }
  p2_pfs 5
  p2_idletime_secs 30
}

Verify the configuration syntax:

both-nodes# /usr/lib/inet/in.iked -c -f /etc/inet/ike/config
in.iked: Configuration file /etc/inet/ike/config syntactically checks out.

Setup entries for pre-shared keys in /etc/inet/secret/ike.preshared on both nodes:

both-nodes# cd /etc/inet/secret
s10-sc32-1 # pktool genkey keystore=file outkey=ikekey keytype=3des keylen=192 print=y
        Key Value ="329b7f792c5854dfd654674adf9220c45851dc61291c893b"

s10-sc32-1 # vi ike.preshared
{
  localidtype IP
  localid 172.16.0.129
  remoteidtype IP
  remoteid 172.16.0.130
  key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
  localidtype IP
  localid 172.16.4.1
  remoteidtype IP
  remoteid 172.16.4.2
  key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

s10-sc32-2 # vi ike.preshared
{
  localidtype IP
  localid 172.16.0.130
  remoteidtype IP
  remoteid 172.16.0.129
  key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}
{
  localidtype IP
  localid 172.16.4.2
  remoteidtype IP
  remoteid 172.16.4.1
  key 329b7f792c5854dfd654674adf9220c45851dc61291c893b
}

Enable IKE and restart the IPsec policy service on both nodes:

both-nodes# svcadm enable svc:/network/ipsec/ike:default
both-nodes# svcadm restart svc:/network/ipsec/policy:default
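To confirm that the interconnect traffic is actually covered by the policy, the active IPsec configuration and the IKE service state can be inspected on each node; this check is our addition, using the standard Solaris tools:

```
both-nodes# ipsecconf -l
both-nodes# svcs svc:/network/ipsec/ike:default
```

The policy entries for the interconnect addresses should be listed and the ike service should be online.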
  set terminal=vt220
  set security_policy=NONE
  set name_service=NONE
  set nfs4_domain=dynamic
  set timezone=MET
  set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc1:

s10-sc32-1 # clzc configure -f /var/tmp/zc1.txt zc1
s10-sc32-1 # clzc verify zc1
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"

Install the zone cluster zc1:

s10-sc32-1 # clzc install zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"

Note that this step can take a while; it populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc1:

s10-sc32-1 # clzc boot zc1
=> on s10-sc32-1: zlogin -C zc1
=> on s10-sc32-2: zlogin -C zc1

Perform the following steps in both zones, zc1-z1 and zc1-z2.

Enable SSH login for user root:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:
PermitRootLogin yes
both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.140 zc1-z1
10.0.2.141 zc1-z2
#
# logical hosts
10.0.2.130 s10-sc32-lh1
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default
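At this point the state of the zone cluster can be checked from the global zone; this verification step is our addition, using the same clzc command as above:

```
s10-sc32-1 # clzc status zc1
```

Both zones, zc1-z1 and zc1-z2, should be reported as Online.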
add node
  set physical-host=s10-sc32-2
  set hostname=zc2-z2
  add net
    set address=10.0.2.143
    set physical=e1000g0
  end
end
add net
  set address=10.0.2.131
end
add sysid
  set system_locale=C
  set terminal=vt220
  set security_policy=NONE
  set name_service=NONE
  set nfs4_domain=dynamic
  set timezone=MET
  set root_password=<crypted password string>
end
commit
exit

Configure the zone cluster zc2:

s10-sc32-1 # clzc configure -f /var/tmp/zc2.txt zc2
s10-sc32-1 # clzc verify zc2
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc2"

Install the zone cluster zc2:

s10-sc32-1 # clzc install zc2
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc2"

Note that this step can take a while; it populates the zone root path with package content. Output is sent to the console of each node (global zone), where you can monitor the progress.

Boot the zone cluster zc2:

s10-sc32-1 # clzc boot zc2
=> on s10-sc32-1: zlogin -C zc2
=> on s10-sc32-2: zlogin -C zc2

Perform the following steps in both zones, zc2-z1 and zc2-z2.

Enable SSH login for user root:

both-zones# vi /etc/ssh/sshd_config
=> change the PermitRootLogin setting from no to yes:
PermitRootLogin yes
both-zones# svcadm restart ssh

Add the cluster IP addresses to /etc/inet/hosts:

both-zones# vi /etc/hosts
#
10.0.2.142 zc2-z1
10.0.2.143 zc2-z2
#
# logical hosts
10.0.2.131 s10-sc32-lh2
#
# Base cluster nodes
10.0.2.121 s10-sc32-1
10.0.2.122 s10-sc32-2
#
# Internal network for VirtualBox
10.0.2.100 vorlon-int

Disable unneeded services within both zones in order to conserve some main memory:

both-zones# svcadm disable svc:/application/graphical-login/cde-login:default
both-zones# svcadm disable webconsole
both-zones# svcadm disable svc:/network/rpc/cde-calendar-manager:default
both-zones# svcadm disable svc:/network/rpc/cde-ttdbserver:tcp
both-zones# svcadm disable svc:/application/cde-printinfo:default
both-zones# svcadm disable svc:/application/font/fc-cache:default
both-zones# svcadm disable svc:/application/management/wbem:default
both-zones# svcadm disable svc:/application/font/stfsloader:default
both-zones# svcadm disable svc:/application/opengl/ogl-select:default
both-zones# svcadm disable svc:/application/x11/xfs:default
both-zones# svcadm disable svc:/application/print/ppd-cache-update:default
both-zones# svcadm disable svc:/network/smtp:sendmail
both-zones# svcadm disable svc:/application/stosreg:default
both-zones# svcadm disable svc:/application/management/seaport:default
both-zones# svcadm disable svc:/application/management/sma:default
both-zones# svcadm disable svc:/application/management/snmpdx:default
both-zones# svcadm disable svc:/application/management/dmi:default
zc1-z1 # clrt register SUNW.gds
zc1-z1 # clrs create -g service-rg -t HAStoragePlus -p Zpools=services service-hasp-rs
zc1-z1 # clrslh create -g service-rg -h s10-sc32-lh1 service-lh-rs
zc1-z1 # clrg online -eM service-rg
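To verify that the resource group actually fails over, you can switch it to the second zone and back; this test is our addition, using the same clrg command set:

```
zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrg status service-rg
zc1-z1 # clrg switch -n zc1-z1 service-rg
```

After the first switch, the status output should show the group Online on zc1-z2 and the zpool "services" imported there.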
Create a link from /usr/sfw/sbin/mysqld to /usr/sfw/bin/mysqld in both zones. This is required since the HA MySQL agent expects mysqld within either bin or libexec:

s10-sc32-1 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld
s10-sc32-2 # ln -s /usr/sfw/sbin/mysqld /usr/sfw/bin/mysqld

Configure MySQL on the node where the service-rg resource group is online:

zc1-z1 # clrg status service-rg

=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
service-rg   zc1-z1      No          Online
             zc1-z2      No          Offline
s10-sc32-1 # zfs create services/mysql
zc1-z1 # mkdir -p /services/mysql/logs
zc1-z1 # mkdir -p /services/mysql/innodb
zc1-z1 # cp /usr/sfw/share/mysql/my-small.cnf /services/mysql/my.cnf
zc1-z1 # vi /services/mysql/my.cnf
--- /usr/sfw/share/mysql/my-small.cnf   Thu Jun 12 14:10:10 2008
+++ /services/mysql/my.cnf   Wed Oct 14 18:14:17 2009
@@ -18,7 +18,7 @@
 [client]
 #password      = your_password
 port           = 3306
-socket         = /tmp/mysql.sock
+socket         = /tmp/s10-sc32-lh1.sock

 # Here follows entries for some specific programs
@@ -25,7 +25,7 @@
 # The MySQL server
 [mysqld]
 port           = 3306
-socket         = /tmp/mysql.sock
+socket         = /tmp/s10-sc32-lh1.sock
 skip-locking
 key_buffer = 16K
 max_allowed_packet = 1M
@@ -50,19 +50,19 @@
 #skip-bdb

 # Uncomment the following if you are using InnoDB tables
-#innodb_data_home_dir = /var/mysql/
-#innodb_data_file_path = ibdata1:10M:autoextend
-#innodb_log_group_home_dir = /var/mysql/
-#innodb_log_arch_dir = /var/mysql/
+innodb_data_home_dir = /services/mysql/innodb
+innodb_data_file_path = ibdata1:10M:autoextend
+innodb_log_group_home_dir = /services/mysql/innodb
+innodb_log_arch_dir = /services/mysql/innodb
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
-#innodb_buffer_pool_size = 16M
-#innodb_additional_mem_pool_size = 2M
+innodb_buffer_pool_size = 16M
+innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
-#innodb_log_file_size = 5M
-#innodb_log_buffer_size = 8M
-#innodb_flush_log_at_trx_commit = 1
-#innodb_lock_wait_timeout = 50
+innodb_log_file_size = 5M
+innodb_log_buffer_size = 8M
+innodb_flush_log_at_trx_commit = 1
+innodb_lock_wait_timeout = 50

 [mysqldump]
 quick
@@ -83,3 +83,6 @@
 [mysqlhotcopy]
 interactive-timeout
+
+bind-address=s10-sc32-lh1
+

zc1-z1 # /usr/sfw/bin/mysql_install_db --datadir=/services/mysql
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
091014 18:29:33  /usr/sfw/sbin/mysqld: Shutdown Complete

To start mysqld at boot time you have to copy support-files/mysql.server
to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/sfw/bin/mysqladmin -u root password 'new-password'
/usr/sfw/bin/mysqladmin -u root -h zc1-z1 password 'new-password'
See the manual for more instructions.

You can start the MySQL daemon with:
/usr/sfw/bin/mysqld_safe &

You can test the MySQL daemon with the tests in the 'mysql-test' directory:
cd /usr/sfw/mysql/mysql-test ; ./mysql-test-run

Please report any problems with the /usr/sfw/bin/mysqlbug script!

The latest information about MySQL is available on the web at http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com

zc1-z1 # chown -R mysql:mysql /services/mysql

Manually test the MySQL configuration:

zc1-z1 # /usr/sfw/sbin/mysqld --defaults-file=/services/mysql/my.cnf \
--basedir=/usr/sfw --datadir=/services/mysql --user=mysql \
--pid-file=/services/mysql/mysqld.pid &
zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> exit;
Bye
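Because mysqld is started in the background, the socket may not exist yet at the moment the first client connects. A small polling helper avoids that race; a sketch, simulated below with a plain file instead of /tmp/s10-sc32-lh1.sock:

```shell
# Wait up to $2 seconds for the UNIX socket (or any file) $1 to
# appear before connecting a client to it.
wait_for_sock() {
  i=0
  while [ ! -e "$1" ] && [ "$i" -lt "$2" ]; do
    sleep 1
    i=$((i + 1))
  done
  [ -e "$1" ]
}

# Simulate a daemon that creates its socket about one second late:
SOCK=/tmp/demo.sock.$$
( sleep 1; : > "$SOCK" ) &
wait_for_sock "$SOCK" 5 && echo "socket is up"
```

In real use the call would be wait_for_sock /tmp/s10-sc32-lh1.sock 30 before invoking /usr/sfw/bin/mysql.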
Configure the MySQL admin password for the admin user:

zc1-z1 # /usr/sfw/bin/mysqladmin -S /tmp/s10-sc32-lh1.sock password 'mysqladmin'

Allow access to the database from both cluster nodes for user root:

zc1-z1 # /usr/sfw/bin/mysql -S /tmp/s10-sc32-lh1.sock -uroot -p'mysqladmin'
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.0.31

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> GRANT ALL ON *.* TO 'root'@'zc1-z1' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL ON *.* TO 'root'@'zc1-z2' IDENTIFIED BY 'mysqladmin';
Query OK, 0 rows affected (0.00 sec)

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z1';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zc1-z2';
Query OK, 0 rows affected (0.01 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> exit;
Bye

Create and set up the HA MySQL resource configuration files:

zc1-z1 # mkdir /services/mysql/cluster-config
zc1-z1 # cd /services/mysql/cluster-config
zc1-z1 # cp /opt/SUNWscmys/util/ha_mysql_config .
zc1-z1 # cp /opt/SUNWscmys/util/mysql_config .
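The per-node GRANT/UPDATE statements above follow a fixed pattern, so they can be generated from the zone-cluster node list instead of being typed once per node. A sketch that only prints the SQL; in real use the output would be piped into the mysql client shown above:

```shell
# Emit the GRANT/UPDATE statements for every zone-cluster node name
# given on the command line.  The password is the example value
# from the text; substitute your own.
grants_for() {
  for node in "$@"; do
    printf "GRANT ALL ON *.* TO 'root'@'%s' IDENTIFIED BY 'mysqladmin';\n" "$node"
    printf "UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='%s';\n" "$node"
  done
}

SQL=$(grants_for zc1-z1 zc1-z2)
echo "$SQL"
```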
LB_POLICY=
HAS_RS=service-hasp-rs
ZONE=
ZONE_BT=
PROJECT=
BASEDIR=/usr/sfw
DATADIR=/services/mysql
MYSQLUSER=mysql
MYSQLHOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/services/mysql/logs
CHECK=YES
NDB_CHECK=

zc1-z1 # vi mysql_config
MYSQL_BASE=/usr/sfw
MYSQL_USER=root
MYSQL_PASSWD=mysqladmin
MYSQL_HOST=s10-sc32-lh1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/s10-sc32-lh1.sock
MYSQL_NIC_HOSTNAME="zc1-z1 zc1-z2"
MYSQL_DATADIR=/services/mysql
NDB_CHECK=

zc1-z1 # /opt/SUNWscmys/util/mysql_register -f /services/mysql/cluster-config/mysql_config
MySQL version 4 detected on 5.10
Check if the MySQL server is running and accepting connections
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-,
Reload- and Shutdown-privileges to user table for mysql database for host zc1-z1
Add SUPER privilege for fmuser@zc1-z1
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-,
Reload- and Shutdown-privileges to user table for mysql database for host zc1-z2
Add SUPER privilege for fmuser@zc1-z2
Create test-database sc3_test_database
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z1
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host zc1-z2
Flush all privileges
Mysql configuration for HA is done

zc1-z1 # kill -TERM `cat /services/mysql/mysqld.pid`
zc1-z1 # /opt/SUNWscmys/util/ha_mysql_register -f /services/mysql/cluster-config/ha_mysql_config
sourcing /services/mysql/cluster-config/ha_mysql_config and create a working copy
under /opt/SUNWscmys/util/ha_mysql_config.work
Registration of resource mysql-rs succeeded.
remove the working copy /opt/SUNWscmys/util/ha_mysql_config.work
zc1-z1 # clrs enable mysql-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Online     Online
                 zc1-z2       Offline    Offline
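An empty mandatory variable in the copied mysql_config makes mysql_register fail half-way through, so a quick pre-check of the file is cheap insurance. A sketch using the variable names from the listing above; CFG is a sample file standing in for the real copy:

```shell
# Verify that mandatory variables in a mysql_config copy are
# non-empty before running mysql_register.
CFG=/tmp/mysql_config.$$
cat > "$CFG" <<'EOF'
MYSQL_BASE=/usr/sfw
MYSQL_USER=root
MYSQL_HOST=s10-sc32-lh1
MYSQL_SOCK=/tmp/s10-sc32-lh1.sock
NDB_CHECK=
EOF

missing=""
for var in MYSQL_BASE MYSQL_USER MYSQL_HOST MYSQL_SOCK; do
  val=$(sed -n "s/^$var=//p" "$CFG")
  [ -n "$val" ] || missing="$missing $var"
done
echo "missing:$missing"
```

If the script prints anything after "missing:", fix those variables before registering the resource.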
zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status mysql-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
mysql-rs         zc1-z1       Offline    Offline
                 zc1-z2       Online     Online
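The switchover check can itself be scripted by parsing clrs status output for the node that reports Online. A sketch, run here against a canned sample of the output above rather than a live cluster:

```shell
# Extract the node on which a resource is Online from
# "clrs status"-style output.  STATUS is a captured sample;
# on a node it would be:  STATUS=$(clrs status mysql-rs)
STATUS='mysql-rs    zc1-z1    Offline   Offline
            zc1-z2    Online    Online'

online_node=$(echo "$STATUS" | \
  awk '$(NF-1) == "Online" && $NF == "Online" { print $(NF-2) }')
echo "mysql-rs is online on: $online_node"
```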
s10-sc32-1 # zfs create services/tomcat
zc1-z1 # vi /services/tomcat/env.ksh
#!/bin/ksh
CATALINA_HOME=/usr/apache/tomcat55
CATALINA_BASE=/services/tomcat
JAVA_HOME=/usr/java
export CATALINA_HOME CATALINA_BASE JAVA_HOME

zc1-z1 # chown webservd:webservd /services/tomcat/env.ksh
zc1-z1 # cd /var/apache/tomcat55
zc1-z1 # tar cpf - . | ( cd /services/tomcat ; tar xpf - )
zc1-z1 # cp /services/tomcat/conf/server-minimal.xml /services/tomcat/conf/server.xml
zc1-z1 # cd /services/tomcat
zc1-z1 # mkdir cluster-config
zc1-z1 # chown webservd:webservd cluster-config
zc1-z1 # cd cluster-config
zc1-z1 # cp /opt/SUNWsctomcat/util/sctomcat_config .
zc1-z1 # cp /opt/SUNWsctomcat/bin/pfile .
zc1-z1 # chown webservd:webservd pfile
zc1-z1 # vi pfile
EnvScript=/services/tomcat/env.ksh
User=webservd
Basepath=/usr/apache/tomcat55
Host=s10-sc32-lh1
Port=8080
TestCmd="get /index.jsp"
ReturnString="CATALINA"
Startwait=20

zc1-z1 # vi sctomcat_config
RS=tomcat-rs
RG=service-rg
PORT=8080
LH=service-lh-rs
NETWORK=true
SCALABLE=false
PFILE=/services/tomcat/cluster-config/pfile
HAS_RS=service-hasp-rs
ZONE=
ZONE_BT=
PROJECT=

zc1-z1 # /opt/SUNWsctomcat/util/sctomcat_register -f /services/tomcat/cluster-config/sctomcat_config
sourcing /services/tomcat/cluster-config/sctomcat_config and create a working copy
under /opt/SUNWsctomcat/util/sctomcat_config.work
Registration of resource tomcat-rs succeeded.
remove the working copy /opt/SUNWsctomcat/util/sctomcat_config.work
zc1-z1 # clrs enable tomcat-rs

Verify that the service-rg resource group works on both nodes:

zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Online     Online
                 zc1-z2       Offline    Offline
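The TestCmd/ReturnString pair in the pfile describes the fault-monitor probe: fetch /index.jsp from the logical host and look for CATALINA in the reply. The core of that check can be sketched as follows; RESPONSE is a canned reply standing in for a live Tomcat answer:

```shell
# Minimal re-creation of the probe logic the pfile configures:
# check the response body for ReturnString.
ReturnString="CATALINA"
RESPONSE='<html><title>Apache Tomcat (CATALINA) demo</title></html>'

if echo "$RESPONSE" | grep -q "$ReturnString"; then
  probe_result=ok
else
  probe_result=failed
fi
echo "probe: $probe_result"
```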
zc1-z1 # clrg switch -n zc1-z2 service-rg
zc1-z1 # clrs status tomcat-rs

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tomcat-rs        zc1-z1       Offline    Offline
                 zc1-z2       Online     Online
zc2-z1 # clrssa create -g shared-ip-rg -h s10-sc32-lh2 shared-ip-rs
zc2-z1 # clrg online -eM shared-ip-rg

Prepare the Apache configuration file:

both-zones# cd /etc/apache2/
both-zones# cp httpd.conf-example httpd.conf
both-zones# vi httpd.conf
--- httpd.conf-example  Sat Jan 24 17:01:06 2009
+++ httpd.conf          Tue Oct  6 13:28:10 2009
@@ -60,7 +60,7 @@
 #
 <IfModule !mpm_winnt.c>
 <IfModule !mpm_netware.c>
-#LockFile /var/apache2/logs/accept.lock
+LockFile /var/apache2/logs/accept.lock
 </IfModule>
 </IfModule>

@@ -84,7 +84,7 @@
 # identification number when it starts.
 #
 <IfModule !mpm_netware.c>
-PidFile /var/run/apache2/httpd.pid
+PidFile /var/apache2/logs/httpd.pid
 </IfModule>

 #
@@ -343,7 +343,7 @@
 # You will have to access it by its address anyway, and this will make
 # redirections work in a sensible way.
 #
-ServerName 127.0.0.1
+ServerName 10.0.2.131

 #
 # UseCanonicalName: Determines how Apache constructs self-referencing

The default httpd.conf file uses /var/apache/2.2/htdocs as DocumentRoot.

Configure the scalable resource group for Apache:

zc2-z1 # clrt register SUNW.apache
zc2-z1 # clrg create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=shared-ip-rg apache-rg
zc2-z1 # clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin -p Resource_dependencies=shared-ip-rs -p Scalable=True -p Port_list=80/tcp apache-rs
zc2-z1 # clrg online -eM apache-rg

Start firefox on vorlon and open the demo URL at http://s10-sc32-lh2/scdemo/. The default is a 1:1 load-balancing weight for the nodes. You can change the weight to e.g. 4:3 with:

zc2-z1 # clrs set -p Load_balancing_weights=4@1,3@2 apache-rs
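A weight string of 4@1,3@2 means node 1 receives 4 of every 7 new connections and node 2 the remaining 3. The resulting shares can be computed directly from the weight string; a sketch, not part of the cluster tooling:

```shell
# Compute per-node traffic shares from a Load_balancing_weights
# string of weight@nodeid pairs, e.g. "4@1,3@2".
WEIGHTS="4@1,3@2"
SHARES=$(echo "$WEIGHTS" | tr ',' '\n' | awk -F'@' '
  { w[$2] = $1; total += $1 }
  END { for (n in w) printf "node %s: %.1f%%\n", n, 100 * w[n] / total }')
echo "$SHARES"
```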
A References
1. VirtualBox Download Page: http://www.virtualbox.org/wiki/Downloads
2. Solaris Cluster documentation: http://docs.sun.com/app/docs/prod/sun.cluster32#hic
3. Solaris Cluster Blog: http://blogs.sun.com/SC
4. Solaris OS Hardware Compatibility Lists: http://www.sun.com/bigadmin/hcl/
5. Toshiba OpenSolaris Laptops: http://www.opensolaris.com/toshibanotebook/index.html
6. Blueprint: Zone Clusters - How to deploy virtual clusters and why: https://www.sun.com/offers/details/820-7351.xml
7. Blueprint: Deploying Oracle Real Application Clusters (RAC) on Solaris Zone Clusters: https://www.sun.com/offers/details/820-7661.xml
8. Blueprint: High Availability MySQL Database Replication with Solaris Zone Cluster: https://www.sun.com/offers/details/820-7582.xml