HADR Always On Setup For SAP ASE Database Servers
Setting up HADR (Always-ON) for an already existing pair of SAP ASE servers
Since the HADR (High Availability Disaster Recovery) option of SAP ASE was released, I was eager to set it up for
the ASE servers in our production environment. The existing servers used either log shipping, SAN-based
replication, or Sybase Warm Standby replication as a DR solution.
All of them have been tried and tested in multiple environments for many years. However, each had its shortcomings, and
none offered a fully automated high availability solution.
HADR or Always ON promised to be a game changer and possibly a lasting HA solution for any large DBMS system.
Our organization was looking for a DR (Disaster Recovery) solution that would guarantee a quick recovery time objective
(RTO) and a reliable recovery point objective (RPO). We run a highly critical application that requires a fail-safe
disaster recovery system.
I started studying the HADR installation manual sometime in August 2018. To be honest, the implementation of HADR was
not as straightforward as it initially appeared. I had to struggle through multiple components and many small bugs that
turned up at every step of the way. I worked many hours with SAP engineers to identify and fix the new bugs that
surfaced during the extensive testing I did along with our application teams.
Here, I am sharing the steps I followed to install HADR for an existing set of ASE (non-BS) servers in our
organization. I hope some of you will find this useful for your own setup.
Disclaimer: The examples here use fictional server and component names, as I did not want to expose our
organization's server names to the general public. Such names are displayed in italics; feel free to replace them with
your own. This blog does not explain the HADR architecture or its components. It documents a
comprehensive list of steps to install HADR in an existing ASE server setup.
Assumptions: There is already a pair of ASE Version 16.0 SP03 PL05 or above running on a Red Hat Linux OS or SUSE
Linux OS.
Architecture:
Ask your UNIX SA to check and install the following libraries on each host:
For Red Hat Linux:
rpm -q openmotif
rpm -q libXp
rpm -q libXt
rpm -q libXtst #(both i686 and x86_64)
rpm -q libXi
rpm -q libXmu
rpm -q libXext #(both i686 and x86_64)
rpm -q libSM
rpm -q libICE
rpm -q libX11
rpm -q libXtst-devel
rpm -q libXi-devel
rpm -q openmotif-devel
rpm -q libXmu-devel
rpm -q libXt-devel
rpm -q libXext-devel
rpm -q libXp-devel
rpm -q libX11-devel
rpm -q libSM-devel
rpm -q libICE-devel
rpm -q gtk2 #(both i686 and x86_64)
rpm -q libgcc #(both i686 and x86_64)
The HADR system requires two hosts. Installing the Fault Manager requires a separate third host; I chose
one of the application servers as the third host. FM does not take much space and consumes hardly any CPU or memory.
For Linux, the Fault Manager requires GLIBC version 2.7 or later, so have that installed on the third host where the Fault
Manager is going to be installed.
Add fully qualified hostnames to each host's /etc/hosts file. For example:
In nysybprim_host
In njsybsec_host
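Both hosts should carry the same pair of entries; for example (the IP addresses here are fictional placeholders, and the domain follows this example's naming):

```
# /etc/hosts on nysybprim_host -- mirror the same two entries on njsybsec_host
10.10.1.11   nysybprim.mycomp.com   nysybprim
10.10.1.12   njsybsec.mycomp.com    njsybsec
```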
The databases in both primary and DR servers that will be part of HADR should be identical in size and shape.
Ensure that all the configuration parameters are identical for both the servers.
IMP: Assign the "replication_role" to the "sa" login on both servers.
Run sp_configure "replication agent memory size", 5120000 on both the primary and DR servers.
The SAPHOSTAGENT software that comes bundled with the installer does not work properly.
* You will need to download SAP Host Agent Version 7.21 or later for your OS from the SAP download center.
SAPCAR is required to extract the saphostagent.sar file; it can also be downloaded from the SAP download center. The
downloaded SAPCAR file could be named something like SAPCAR_1320-80000941.EXE. After the download, rename it to just
SAPCAR.
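The rename itself is a plain mv plus chmod; the snippet below simulates it in a scratch directory so it can be dry-run safely (the file name is from this example, and the scratch directory stands in for your real download directory):

```shell
#!/bin/sh
# Rename the downloaded SAPCAR_1320-80000941.EXE to SAPCAR and make it
# executable. A scratch directory stands in for the real download location.
DL=$(mktemp -d)
touch "$DL/SAPCAR_1320-80000941.EXE"   # stand-in for the real download
mv "$DL/SAPCAR_1320-80000941.EXE" "$DL/SAPCAR"
chmod +x "$DL/SAPCAR"
ls -l "$DL/SAPCAR"
```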
Install the SAP Host Agent 7.21 (To be done by Unix SA)
Login as ‘root’
create temporary directory /tmp/hostagent
Copy the Host Agent Software to the temporary directory
cp /sybsoftware/downloads/SAPHOSTAGENT721/SAPHOSTAGENT40_40-20009394.SAR /tmp/hostagent
Extract the SAP Host Agent software
When asked for a password, give one such as "Syb@se123" or anything else you may want. Remember the
password, as it will be needed later.
Check the status of SAP Host Agent
/usr/sap/hostctrl/exe/saphostexec -status
rm -r /tmp/hostagent
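The host-agent steps above can be consolidated into one hedged script. Every path and file name below comes from this example, the SAP binaries are assumed to be present on a real host, and each privileged call is guarded so the sketch dry-runs harmlessly elsewhere (run it as root on the actual host):

```shell
#!/bin/sh
# Consolidated sketch of the SAP Host Agent install steps above.
SAR=/sybsoftware/downloads/SAPHOSTAGENT721/SAPHOSTAGENT40_40-20009394.SAR
WORK=$(mktemp -d /tmp/hostagent.XXXXXX)   # stand-in for /tmp/hostagent
[ -f "$SAR" ] && cp "$SAR" "$WORK"
cd "$WORK" || exit 1
# Extract and install -- both guarded so the sketch is safe to dry-run.
[ -x ./SAPCAR ]      && ./SAPCAR -xvf "$(basename "$SAR")"
[ -x ./saphostexec ] && ./saphostexec -install
# Verify, then clean up the temporary directory.
[ -x /usr/sap/hostctrl/exe/saphostexec ] && /usr/sap/hostctrl/exe/saphostexec -status
cd / && rm -r "$WORK"
echo "host-agent sketch finished"
```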
Copy the server entries of the primary and DR servers into each other's interfaces files.
To install HADR using a resource file, a sample resource file is available at the following location:
$SYBASE/$SYBASE_ASE/init/sample_resource_files/setup_hadr.rs
# Set installation_mode
# Valid values: true, false
# If set to true, installation_mode will be set to "BS"
# If set to false, installation_mode will be set to "nonBS"
setup_bs=false
# cluster ID database
participating_database_1=C99
materialize_participating_database_1=true
# user database
participating_database_2=<dbname>
materialize_participating_database_2=true
#### <copy/paste the above 3 lines for any number of databases you may want to add to the HADR system> ###
#########################################################
# Entries for Site "site1" on host host1 with primary role
#########################################################
site1.ase_host_name=nysybprim
site1.ase_user_data_directory=/sybsoftware/sybase/NYSYBPR
site1.ase_server_name=NYSYBPRIM
site1.ase_server_port=4100
site1.backup_server_name=NYSYBPRIM_back
site1.backup_server_port=4200
<$SYBASE>/DM/RMA-15_5/instances/AgentContainer/config/boo
site1.rma_tds_port=4909
site1.rma_rmi_port=7001
# Make sure next two ports (+1 and +2) are also available
site1.srs_port=5005
#########################################################
# Entries for Site "site2" on host host2 with companion role
#########################################################
# Site name
site2.site_name=NJSEC
# Site role
site2.site_role=companion
*********************************************************
Copy this same file to $SYBASE/$SYBASE_ASE/bin on the DR server and modify only the portions marked in red;
everything else remains the same.
For secondary server (njsybsec)
###############################################################
# This will be the resource file entries for the “secondary server” =~ site2
###############################################################
# ID that identifies this cluster
cluster_id=C99
# Which site is being configured
setup_site=site2
is_secondary_site_setup=true
synchronization_mode=sync
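With both resource files in place, each side is configured by running the setuphadr utility against its own file (the RMA has to be started on each host first). The wrapper below is a hedged sketch; the binary location follows the standard $SYBASE/$SYBASE_ASE/bin layout used in this example:

```shell
#!/bin/sh
# Run setuphadr with a site's resource file, if the binary is present.
run_setuphadr() {
    rs_file=$1
    bin="${SYBASE:-/sybase}/${SYBASE_ASE:-ASE-16_0}/bin/setuphadr"
    if [ -x "$bin" ]; then
        "$bin" "$rs_file"
    else
        echo "setuphadr not found at $bin (set SYBASE/SYBASE_ASE first)" >&2
        return 1
    fi
}
# Once on the primary host, then again on the DR host with its own file:
#   run_setuphadr "$SYBASE/$SYBASE_ASE/bin/setup_hadr.rs"
```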
The setup on the primary will not take much time, but if for some reason the installation is unsuccessful, you can remove the
entire HADR setup using the "sap teardown" command and re-run the setup after fixing the problem.
SAP Teardown steps:
Rename the directories under the following paths (they get recreated upon reinstall):
$SYBASE/DM/RMA-16_0/instances/AgentContainer/configdb
$SYBASE/DM/RMA-16_0/instances/AgentContainer/logs
Then re-run the setup.
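The directory renames can be scripted; a hedged sketch, with the RMA paths following this example's layout:

```shell
#!/bin/sh
# Rename RMA's configdb and logs directories so a re-run of setuphadr
# recreates them fresh. $1 is the $SYBASE directory.
archive_rma_state() {
    ts=$(date +%Y%m%d%H%M%S)
    for d in configdb logs; do
        p="$1/DM/RMA-16_0/instances/AgentContainer/$d"
        [ -d "$p" ] && mv "$p" "$p.$ts"   # keep the old state, timestamped
    done
    return 0
}
# e.g. archive_rma_state "$SYBASE"
```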
If the HADR installation is successful, proceed to the next step, i.e., the ASE Cockpit installation.
Note: ASE Cockpit works with the Flash plugin of the internet browser. As Flash will no longer be supported by any browser
by the end of December 2020, ASE Cockpit will also be deprecated. ASE Cockpit is not mandatory to run HADR, so you can
disregard this section in case the following does not work properly in your environment. Keep a lookout for a replacement of ASE
Cockpit that SAP could roll out in the future.
After installing the HADR software as instructed above, you can configure the ASE Cockpit to manage HADR.
Shut down the Cockpit server if it is already started, or start it and shut it down once; on the next start it
will pick up all the HADR-relevant parameters in its XML plugins.
Similarly, you will need to generate the encrypted password for hadr_admin_user and use it in the XML file.
Edit the $SYBASE/COCKPIT-4/plugins/NYSYBPRIM/agent-plugin.xml file and enter the ASE plugin profile information.
Change the items shown in blue here.
<dependencies />
<properties>
<set-property property="ase.heartbeat.timer" value="60" />
<set-property property="ase.heartbeat.update.time" valu
<set-property property="ase.home" value="/sybsoftware/s
<set-property property="ase.interfaces.pathspec" value=
<set-property property="ase.login.timeout" value="30" />
<set-property property="ase.maintain.connection" value=
<set-property property="ase.password" value="Encrypted
<set-property property="ase.port" value="4100" />
<set-property property="ase.query.timeout" value="60" />
<set-property property="ase.server.log" value="/sybsoft
<set-property property="ase.server.name" value="NYSYBPR
<set-property property="ase.start.command" value="/sybs
<set-property property="ase.user" value="sa" />
<set-property property="com.sybase.home" value="/sybsof
<set-property property="rma.home" value="/sybsoftware/s
<set-property property="rma.log.dir" value="/sybsoftwar
<set-property property="rma.password" value=" Encrypted
<set-property property="rma.port" value="4909" />
<set-property property="rma.start.command" value="/sybs
<set-property property="rma.user" value="hadr_admin_use
<set-property property="rs.home" value="/sybsoftware/sy
<set-property property="rs.interfaces.pathspec" value="
<set-property property="rs.password" value=" Encrypted
<set-property property="rs.port" value="5005" />
e.g: $SYBASE/COCKPIT-4/bin/cockpit.sh
Connect to ASE Cockpit through a browser that has flash plugin support. You should see the NYSYBPRIM server in the
ASE Cockpit drop down menu for connection.
STEP 5 – Configuring ASE Cockpit for HADR
After successfully installing ASE Cockpit as mentioned above, you can login to the ASE Cockpit web-based GUI using
the URL which will be given in the COCKPIT log.
It should be https://<hostname>:4283
You should log in as "sa" with the password that you gave when configuring the Cockpit.
Once in the portal, you will see three tabs: MONITOR, EXPLORE and ALERT. Go to EXPLORE and select ASE
Servers in the left panel.
On the right panel, when you select the servername, you will get a drop down list with various options. Select
“Properties” option.
In the next screen, select "Agent" in the left panel, fill in the agent information, i.e., user name (uafadmin) and
password (uafadmin), and click on Authenticate.
Again in the left panel, select “HADR” and fill in RMA details like Port Number (4909), User Name (hadr_admin_user)
and Password and then click on authenticate
After the authentications are successful, you can now see the HADR details in the MONITOR tab as shown below.
After you extract the ASE binary, there should be a separate directory listed as FaultManager
For e.g.: /sybsoftware/ASE16/ebf28996/FaultManager
It should have the following files/sub-directories …
archives
sample_response.txt
setup.bin
The setup is a command-line, menu-driven tool. All we need to do is run setup.bin and fill in the details. (For ease of
use, gather all the important values beforehand and keep them handy.)
sybase@nysybprim: /sybsoftware/sybase/FaultManager ==
Preparing to install
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the instal
Configuring the installer for this system's environme
Launching installer...
Graphical installers are not supported by the VM. The
=====================================================
Fault Manager (created with InstallAnywhere)
-----------------------------------------------------
Preparing CONSOLE Mode Installation...
=====================================================
Introduction
----------------
InstallAnywhere will guide you through the installati
=====================================================
End-user License Agreement
-------------------------------------
1) Americas and Asia Pacific
.
35) Any Other Locations
LICENSE AGREEMENT
General (applies to all countries, except those for w
=====================================================
Choose Install Folder
---------------------------
Where would you like to install?
Default Install Folder: /opt/sap
Install Folder:
/sybsoftware/sybase/FaultManager
Product Features:
Fault Manager
/sybsoftware/sybase/FaultManager
=====================================================
Installing...
-------------
[===============|================|==================|
[------------------|------------------|--------------
=====================================================
Configure Fault Manager
--------------------------------
=====================================================
SAP ASE on Companion
------------------------------
Site name (Default: ): NJSEC
SAP ASE host name (Default: ): njsybsec.mycomp.com
SAP ASE Name (Default: SYBWREUAT): NJSYBSEC
SAP ASE port (Default: ): 4100
SAP ASE installed directory (Default: /opt/sap): /syb
SAP ASE installed user (Default: sybase): sybase
=====================================================
Virtual IP for SAP ASE
----------------------------
Do want to enable virtual IP support for the SAP ASE?
=====================================================
ASE Cockpit
----------------
=====================================================
Replication Management Agent on Primary
-----------------------------------------------------
RMA TDS port (Default: 7001): 4909
=====================================================
Replication Management Agent on Companion
-----------------------------------------------------
or
/sybsoftware/sybase/NYSYBPRIM/ASE-16_0/install/<servername>.log
/sybsoftware/sybase/NYSYBPRIM/DM/C99_REP_NYSYBPRIM/C99_REP_NYSYBPRIM.log
/sybsoftware/sybase/NYSYBPRIM/DM/RMA-16_0/instances/AgentContainer/logs/RMA_<mmddyyyy>.log
/sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/log/agent.log
FaultManager errorlog
/sybsoftware/sybase/FaultManager/dev_sybdbfm
Starting & Stopping the various components
ASE Cockpit can be used to restart various components without having to log in to the host. However,
sometimes the ASE Cockpit may not function very well, so we have to fall back on manual intervention.
ASE servers can be started in the usual way from the $SYBASE/$SYBASE_ASE/install directory using the
startserver command.
To shut down the ASE server that is part of the HADR system, follow these steps …
* If you do not follow the above steps, then the Fault Manager will failover the primary server.
Start the RMA:
. /sybsoftware/sybase/NYSYBPRIM/DM/SYBASE.sh
/sybsoftware/sybase/NYSYBPRIM/DM/RMA-16_0/bin/RunContainer.sh AgentContainer &
Stop the RMA (connect to its TDS port and issue):
shutdown
Start the Replication Server:
. /sybsoftware/sybase/NYSYBPRIM/DM/SYBASE.sh
/sybsoftware/sybase/NYSYBPRIM/DM/C99_REP_NYSYBPRIM/RUN_C99_REP_NYSYBPRIM.sh
Start the ASE Cockpit:
cd /sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/bin
./cockpit.sh &
disown
Stop the ASE Cockpit:
/sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/bin/cockpit.sh --stop
Start the Fault Manager:
. /sybsoftware/sybase/FaultManager/SYBASE.sh
/sybsoftware/sybase/FaultManager/sybdbfm_C03 &
Stop the Fault Manager:
/sybsoftware/sybase/FaultManager/FaultManager/bin/sybdbfm stop
HADR: A Glimpse at ASE HADR Instance & beyond…
Now that SAP ASE comes with its own brand-new Always On option, it is worth having a closer look at it.
HADR has three basic components: ASE, RS, and the Fault Manager. I will skip the Fault Manager for the moment (although it
reduces all of the self-management to nil) and concentrate on the ASE|RS components. ASE setup with HADR enabled requires
super-user (sudo) credentials, for those who wish to install the whole bundle at once. It is also essential to remember that
HADR is not designed to run on a single host.
Setup Phase:
The setup phase is pretty straightforward. The ASE 16 SP02 installation comes with a sample resource file for HADR in addition to the
sample resource files it has traditionally had for other ASE components
($SYBASE/$SYBASE_ASE/init/sample_resource_files/setup_hadr.rs). You may want to make two copies of it, one for the
primary side and another for the replicate, to make life easier. The sample resource file is commented throughout; you
just need to make sure that the resource file for each side is modified correctly, following the guidelines in the template
script. Otherwise things may go wrong…
With the resource files fixed, the HADR setup consists of calling the setuphadr utility twice: once for the primary side and
once for the replicate side. DB synchronization is performed by the script itself (through calls to the RMA component,
which has to be started on each host before calling setuphadr).
All in all it takes 2 simple steps to set things up (again, not counting FM installation).
When done, you may verify the installation by connecting to the RMA through its rma_tds_port, or to the RS through
its srs_port, both defined in the resource file. To understand whether all is well, beyond the success message setuphadr
prints out, you will need to read the documentation first. The RMA command-line options are a brand-new set of sap_… commands.
RS has a lot of new components, some in a suspended|down state even when the system is operating well. There is also an RS
log file (to be found in the $SYBASE/DM/{CID} directory), and an RMA log file (to be found in
$SYBASE/DM/RMA-15_5/instances/AgentContainer/logs).
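A hedged way to script that verification is to connect to the RMA's TDS listener with isql and issue sap_status path; the login name and port below are the ones used in this example's resource file, so adjust them to yours:

```shell
#!/bin/sh
# Ask the RMA for the replication path status. Skips cleanly when isql
# is not installed, since this is only a sketch.
check_rma() {
    passwd=$1   # hadr_admin_user's password
    server=$2   # host:rma_tds_port, e.g. nysybprim:4909
    if ! command -v isql >/dev/null 2>&1; then
        echo "isql not in PATH -- skipping live check" >&2
        return 1
    fi
    printf 'sap_status path\ngo\n' | isql -U hadr_admin_user -P "$passwd" -S "$server"
}
# e.g. check_rma "$RMA_PW" nysybprim:4909
```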
Yet another option is … ASE Cockpit (you will have to start it first from $SYBASE/COCKPIT-4/bin/cockpit.sh; the default
port is 4283 for an https connection, but the connectivity information is printed out, so you don't need to worry about remembering the
right path)…
Monitoring Phase:
ASE 16 SP02+ is now managed through ASE Cockpit, the new kid on the block. A system that has HADR enabled will
look like this:
You will find that under the system status there is another status: site mode status. This one indicates which side of the
HADR system it is (primary|standby) and what is its health status (active|stopped|unknown).
HADR has its own screen in the Monitor section as well. This will look like:
Here too one will see the state of the system (active|stopped|unknown) and in addition what the system components are
with their corresponding states:
The "path" here is replicating from ASESRV1 to ASESRV2 for the asedb, master, and DA1 databases (master and DA1 are
"default" databases to be replicated, DA1 being the name of the cluster you chose in the HADR resource file). One may see
the direction of replication, state, and health across various parameters (component states, primary ASE log state,
replication throughput, replication backlog, and latency).
Once the system has been up long enough to collect metrics, ASE Cockpit will be able to show various handy statistics on the
HADR.
Backlog:
RS Latency:
Primary ASE Log Status:
Replication Throughput:
This is a pretty nice way to visualize your HADR health status, including a neat breakdown of latency into RS components,
once visible only through the manual rs_ticket interface. The STP state is also visible through the primary log status.
In addition to this it is possible to see other HADR performance statistics through the ASE Cockpit “Statistics Chart”
interface across various RS related metrics:
I must confess that for those used to running manual commands to see what's going on with a replicated environment, this is a
massive step forward. It looks neat. There are still things to be polished, but hey, this is just a start! It already provides a decent way to
monitor a replicated environment out of the box.
Operation Phase:
ASE Cockpit allows you to manage the system as well as monitor it. This is done from the EXPLORE tab of the ASE
Cockpit.
For a failover, ASE Cockpit will also display the failover status (log) during the operation:
For those attentive to detail… there is a blue-yellow motif adhered to….
Round Up:
The HADR option looks like a valuable addition to the ASE environment. Although setting up a replication server has never been a
big issue for ASE DBAs (an easily scripted task), having it all done for you is nice. On the other hand, managing and
monitoring the environment has been a thorny thing. With the HADR option this task has been addressed and elevated to a new
level. Everything ASE Cockpit displays may still be done manually (either using the new RMA interface or the
good old RS commands). At the same time, having things in front of one's eyes, easily accessible, has great value.
There are still things to be improved: having the embedded RS work on an auto-expanding partition rather than a fixed-size
one; having the cluster ID database auto-expand rather than stay fixed in size; more technical documentation on how the
various components work and how the new system should be troubleshot in case things go wrong; better ways to
interact with ASE Cockpit; a more elastic licensing model; and support for more than one replicated node in the same
configuration. But as a starting point this feature is definitely worth a try. Big thumbs up to the ASE & RS engineers (I have heard
rumors that the throughput is much faster for HADR than what we are used to, and it looks like there are several ASO|HVAR
options turned on by default).