
HADR (Always-ON) Setup for SAP ASE Database Servers

Setting up HADR (Always-ON) for an already existing pair of SAP ASE servers
Since the HADR (High Availability Disaster Recovery) option of SAP ASE was released, I was eager to set it up for the ASE servers in our production environment. The existing servers used either log shipping, SAN-based replication, or Sybase Warm Standby replication as a DR solution.

All of them have been tried and tested in multiple environments for many years. However, each had its shortcomings, and none offered a fully automated high availability solution.

HADR or Always ON promised to be a game changer and possibly a lasting HA solution for any large DBMS system.

Our organization was looking for a DR (Disaster Recovery) solution that would guarantee a quick recovery time objective (RTO) and a reliable recovery point objective (RPO). We run a highly critical application that requires a fail-safe disaster recovery system.

I started studying the HADR installation manual sometime in August 2018. To be honest, the implementation of HADR was not as straightforward as it appeared initially. I had to work through multiple components and many small bugs that turned up at every step of the way. I spent many hours with SAP engineers identifying and fixing new bugs that came up while I was doing extensive testing along with our application teams.

Here, I am sharing the steps I followed to install HADR for an existing set of ASE (non-BS) servers in our organization. I hope some of you will find this useful for your own setup.

Disclaimer: The server names and other component names in the examples here are fictional, as I did not want to share our organization's server names with the general public. Such names are displayed in italics; feel free to replace them with your own names. This blog does not explain the HADR architecture or its components. It documents a comprehensive list of steps to install HADR in an existing ASE server setup.

Assumptions: There is already a pair of ASE version 16.0 SP03 PL05 or above running on Red Hat Linux or SUSE Linux.

Architecture:

Primary Host Name → nysybprim

Secondary Host Name → njsybsec

Primary DB Server Name → NYSYBPRIM

Secondary DB Server Name → NJSYBSEC


STEP 1 – Preparing Primary and DR servers at the OS level
Check Supported OS versions for HADR (Always ON) Installation
Check the latest HADR installation manual for the OS versions compatible with your environment.
The existing OS versions will have to be upgraded if they are not compatible versions as listed in the manual.
Go through the Prerequisites section of the HADR installation manual to understand the prerequisites for a HADR setup.
SAP Help Portal – HADR Installation Guide – Pre-requisites

Ask your UNIX SA to check and install the following libraries on each host:
For Red Hat Linux:

Check for the following Linux packages:

rpm -q openmotif
rpm -q libXp
rpm -q libXt
rpm -q libXtst #(both i686 and x86_64)
rpm -q libXi
rpm -q libXmu
rpm -q libXext #(both i686 and x86_64)
rpm -q libSM
rpm -q libICE
rpm -q libX11
rpm -q libXtst-devel
rpm -q libXi-devel
rpm -q openmotif-devel
rpm -q libXmu-devel
rpm -q libXt-devel
rpm -q libXext-devel
rpm -q libXp-devel
rpm -q libX11-devel
rpm -q libSM-devel
rpm -q libICE-devel
rpm -q gtk2 #(both i686 and x86_64)
rpm -q libgcc #(both i686 and x86_64)
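The per-package checks above can be wrapped in one small loop so all missing packages are reported in a single pass. This is just a sketch: the package list is taken from the checks above and should be adjusted to your OS release, and the query command is passed in as a parameter so you can swap `rpm -q` for `zypper se -i` on SUSE.

```shell
# List which of the required packages are NOT installed.
# $1 is the query command (e.g. "rpm -q"), so the check is easy to adapt.
PKGS="openmotif libXp libXt libXtst libXi libXmu libXext libSM libICE libX11 gtk2 libgcc"

missing_packages() {
    for p in $PKGS; do
        $1 "$p" >/dev/null 2>&1 || echo "$p"
    done
}

# On a real host you would run:  missing_packages "rpm -q"
```

The output can then be fed to yum, as in the install examples below.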

Install any missing packages using yum (below are examples)

Run the following as root:

yum install gtk2.i686


yum install libgcc.i686
yum install glibc.i686
yum install libXext.i686
yum install libXtst.i686
yum install libpng12
yum install libXft
yum install libXp
yum install libXt
yum install libXtst
yum install libXmu

For SUSE Linux

Run the following commands:

zypper install glibc


zypper install libgcc_s1

The HADR system requires two hosts; installing the Fault Manager requires a separate third host. I chose one of the application servers as the third host. The FM doesn't take much space and does not consume much CPU or memory.
For Linux, the Fault Manager requires GLIBC version 2.7 or later, so have that installed on the third host where the Fault Manager is going to be installed.
Add fully qualified hostnames to each host's /etc/hosts file. For e.g.:
In nysybprim_host

172.99.99.999 nysybprim_host nysybprim_host.mycomp.com

In njsybsec_host

172.99.99.999 njsybsec_host njsybsec_host.mycomp.com
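As a quick sanity check, you can verify that a given /etc/hosts line carries both the short name and the fully qualified name. A minimal sketch, using the fictional hostnames from above:

```shell
# Check that a hosts-file line contains both the short and fully qualified name.
hosts_line_ok() {
    # $1 = the /etc/hosts line, $2 = short hostname, $3 = FQDN
    echo "$1" | grep -qw "$2" && echo "$1" | grep -q "$3"
}

line="172.99.99.999 nysybprim_host nysybprim_host.mycomp.com"
hosts_line_ok "$line" nysybprim_host nysybprim_host.mycomp.com && echo "hosts entry OK"
```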

The databases on both the primary and DR servers that will be part of HADR should be identical in size and shape.
Ensure that all the configuration parameters are identical on both servers.
IMP: Assign "replication_role" to the "sa" login on both servers.
Run sp_configure "replication agent memory size", 5120000 on both the primary and DR servers.
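The two preparation commands above can be run from the shell as one isql batch per server. A sketch, assuming the fictional server names from this post and an `SA_PWD` environment variable you supply; the function just emits the SQL so it can be piped into isql for each server:

```shell
# Emit the HADR preparation SQL. Pipe it into isql for each server, e.g.:
#   hadr_prep_sql | isql -Usa -P"$SA_PWD" -SNYSYBPRIM
#   hadr_prep_sql | isql -Usa -P"$SA_PWD" -SNJSYBSEC
hadr_prep_sql() {
    cat <<'EOF'
grant role replication_role to sa
go
sp_configure "replication agent memory size", 5120000
go
EOF
}
```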

STEP 2 – Get Unix SA to Install SAP Host Agent in each server

The SAPHOSTAGENT software that comes bundled with the ASE software does not work properly.

* You will need to download SAP Host Agent version 7.21 or later for your OS from the following site:

SAP Software Center Download

SAPCAR is required to extract the saphostagent.sar file. This can also be downloaded from the SAP download center. The downloaded SAPCAR file could be named like SAPCAR_1320-80000941.EXE. After the download, rename it to just SAPCAR.
Install the SAP Host Agent 7.21 (to be done by the Unix SA)
Login as 'root'
Create the temporary directory /tmp/hostagent
Copy the Host Agent Software to the temporary directory

cp /sybsoftware/downloads/SAPHOSTAGENT721/SAPHOSTAGENT40_40-20009394.SAR /tmp/hostagent
Extract the SAP Host Agent software

/sybsoftware/downloads/SAPHOSTAGENT721/SAPCAR -xvf /tmp/hostagent/SAPHOSTAGENT40_40-20009394.SAR -manifest SIGNATURE.SMF

Install the SAP Host Agent software using the following command:

/tmp/hostagent/saphostexec -install -passwd

When asked for the password, give one like "Syb@se123" or anything else that you may want. Remember the password, as it will be needed later.
Check the status of SAP Host Agent

/usr/sap/hostctrl/exe/saphostexec -status

Give sudo permission for /usr/sap/hostctrl/exe/saphostexec to sapadm login.


Change group of all files under /usr/sap to the group “sapsys”
Remove the temporary directory with all its content

rm -r /tmp/hostagent

Create (touch) a file called “sapservices” under /usr/sap


For the file /usr/sap/hostctrl/exe/host_profile, give read-only permissions.
Add "/usr/sap/hostctrl/exe/saphostexec -stop" to the Unix shutdown script
Add "/usr/sap/hostctrl/exe/saphostexec -restart" to the Unix startup script
The DBA should be able to login to the host as "sapadm" and run:

"sudo /usr/sap/hostctrl/exe/saphostexec -status"

STEP 3 – Install HADR using the Resource File


First, ensure that you have installed ASE 16.0 SP03 PL05 or above on both the primary and DR hosts. If that is not the case, you will have to upgrade these servers to this version first. HADR in earlier ASE versions has a lot of bugs and may not work properly.
Create the HADR database (3 alphanumeric characters that you prefer) on each server. As an example, I have used the name C99 here.

create database C99 on datadevice1 = 100 log on logdevice1 = 25

sp_dboption C99, 'trunc log on chkpt', true

Copy the server entries of the primary and DR servers into each other's interfaces file.
To install HADR using the resource file, a sample resource file is available at the following location:

$SYBASE/$SYBASE_ASE/init/sample_resource_files/setup_hadr.rs

Copy this file to $SYBASE/$SYBASE_ASE/bin and modify it as follows:


For the Primary server (nysybprim)
############################################################
# This will be the resource file entries for the "primary server" =~ site1
############################################################
# ID that identifies this cluster
cluster_id=C99

# Which site being configured
setup_site=site1
is_secondary_site_setup=false
synchronization_mode=sync
ase_sa_password=*******

# Set installation_mode
# Valid values: true, false
# If set to true, installation_mode will be set to "BS"
# If set to false, installation_mode will be set to "nonBS"
setup_bs=false

# BACKUP server system administrator user/password
bs_admin_user=sa
bs_admin_password=

# ASE HADR maintenance user/password
hadr_maintenance_user=hadr_maint
hadr_maintenance_password=hadr_maint_ps

# Replication Management Agent administrator user/password
rma_admin_user=hadr_maint
rma_admin_password=hadr_maint_ps

# Databases that will participate in replication and "auto" materialization

# cluster ID database
participating_database_1=C99
materialize_participating_database_1=true

# user database
participating_database_2=<dbname>
materialize_participating_database_2=true

#### <copy and paste the above 3 lines for any number of databases you
#### may want to add to the HADR system> ###

############################################################
# Entries for Site "site1" on host host1 with primary role
############################################################
site1.ase_host_name=nysybprim

# Enter value that identifies this site, like a geographical location
site1.site_name=NYPRIM

# Valid values: primary, companion
site1.site_role=primary

# Directory where SAP ASE is installed
site1.ase_release_directory=/sybsoftware/sybase/NYSYBPRIM

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
site1.ase_user_data_directory=/sybsoftware/sybase/NYSYBPRIM
site1.ase_server_name=NYSYBPRIM
site1.ase_server_port=4100
site1.backup_server_name=NYSYBPRIM_back
site1.backup_server_port=4200

# Directory to store database dumps in materialization
site1.backup_server_dump_directory=/backups

# Port numbers for Replication Server and Replication Management Agent
# See "rsge.bootstrap.tds.port.number" properties in
# <$SYBASE>/DM/RMA-15_5/instances/AgentContainer/config/bootstrap.prop
site1.rma_tds_port=4909
site1.rma_rmi_port=7001

# Starting port number to use when setting up Replication Server
# Make sure the next two ports (+1 and +2) are also available
site1.srs_port=5005

# Device buffer for Replication Server on host1
# Recommended size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
site1.device_buffer_dir=/sybdata/RS_HADR_DeviceBuffer
site1.device_buffer_size=2048

# Persistent queue directory for Replication Server running on host1
site1.simple_persistent_queue_dir=/sybdata/RS_HADR_QueueDir
site1.simple_persistent_queue_size=4096

############################################################
# Entries for Site "site2" on host host2 with companion role
############################################################

# Host name where SAP ASE runs
site2.ase_host_name=njsybsec

# Site name
site2.site_name=NJSEC

# Site role
site2.site_role=companion

# Directory where SAP ASE is installed
site2.ase_release_directory=/sybsoftware/sybase/NJSYBSEC

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
site2.ase_user_data_directory=/sybsoftware/sybase/NJSYBSEC
site2.ase_server_name=NJSYBSEC
site2.ase_server_port=4100
site2.backup_server_name=NJSYBSEC_back
site2.backup_server_port=4200

# Directory to store database dumps in materialization
# Backup server must be able to access this directory
site2.backup_server_dump_directory=/backups/DB_BACKUP_DIR

# Port numbers for Replication Server and Replication Management Agent
# See "rsge.bootstrap.tds.port.number" properties in
# <$SYBASE>/DM/RMA-15_5/instances/AgentContainer/config/bootstrap.prop
site2.rma_tds_port=4909
site2.rma_rmi_port=7001

# Starting port number to use when setting up Replication Server
# Make sure the next two ports (+1 and +2) are also available
site2.srs_port=5005

# Device buffer for Replication Server on host2
# Recommended size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
site2.device_buffer_dir=/sybdata/RS_HADR_DeviceBuffer
site2.device_buffer_size=2048

# Persistent queue directory for Replication Server running on host2
site2.simple_persistent_queue_dir=/sybdata/RS_HADR_QueueDir
site2.simple_persistent_queue_size=4096

************************************************************

Copy this same file to $SYBASE/$SYBASE_ASE/bin on the DR server and change only the site-specific entries shown below. Everything else remains the same.
For the secondary server (njsybsec)

############################################################
# This will be the resource file entries for the "secondary server" =~ site2
############################################################
# ID that identifies this cluster
cluster_id=C99

# Which site being configured
setup_site=site2
is_secondary_site_setup=true
synchronization_mode=sync

*** Everything else remains the same as above ***

Run the setup using the resource file as follows:


$SYBASE/$SYBASE_ASE/setuphadr <resource file name>

The setup logs will be created automatically in the following location:


$SYBASE/$SYBASE_ASE/init/logs/setuphadr???.???
Monitor the setup logs for the progress and keep an eye on any errors.

The setup on the primary will not take much time, but if for some reason the installation is unsuccessful, you can remove the entire HADR setup using the command "sap_teardown" and re-run the setup after fixing the problem.
SAP Teardown steps:

Login to the primary server RMA using isql

isql -Uhadr_admin_user -S<primary_name>:4909 -Phadr_admin_ps

At the command prompt, run "sap_teardown"
Drop the hadr_maint_user login from the primary and companion servers
Drop the hadr_admin_user login from the primary and companion servers
Shutdown the RMA server on both primary and companion
isql -Uhadr_admin_user -S<host_name>:4909 -Phadr_admin_ps
> shutdown

Rename the directories under the following paths (they get recreated upon reinstall):

$SYBASE/DM/RMA-16_0/instances/AgentContainer/configdb

$SYBASE/DM/RMA-16_0/instances/AgentContainer/logs

reinstall using

$SYBASE/$SYBASE_ASE/setuphadr <resource file name>
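The teardown-and-retry sequence above can be sketched as a small driver so the order of operations stays visible. This is only an outline: `run` is a stub, and each echoed step stands in for the real isql or shell invocation described above for your environment.

```shell
# Order of operations for tearing down a failed HADR setup and retrying.
hadr_teardown_retry() {
    run="$1"   # command used to 'execute' each step (echo here; isql/ssh in real life)
    $run "sap_teardown via RMA on primary (port 4909)"
    $run "drop hadr_maint login on primary and companion"
    $run "drop hadr_admin login on primary and companion"
    $run "shutdown RMA on primary and companion"
    $run "rename RMA configdb and logs directories"
    $run "re-run setuphadr with the resource file"
}

hadr_teardown_retry echo
```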

If the HADR installation is successful, proceed to the next step, i.e. the ASE Cockpit installation.

STEP 4 – Installing ASE Cockpit for HADR Manually


Each ASE 16.0 server comes with an admin tool called ASE Cockpit. It is a useful tool to monitor, alert on, and manage the HADR system.

Note: ASE Cockpit works with the browser Flash plugin. As Flash will no longer be supported by any browser by the end of Dec 2020, ASE Cockpit will also be deprecated. ASE Cockpit is not mandatory to run HADR, so you can disregard this section in case the following does not work properly in your environment. Keep a lookout for a replacement for ASE Cockpit that SAP could roll out in the future.

After the installation of HADR software as instructed above, you can configure the ASE Cockpit to manage HADR.

Shut down the Cockpit server if it is already started; or start the Cockpit server and shut it down once, so that it picks up all the HADR-relevant parameters in its XML plugins.

If you have the cockpit prompt, type: shutdown

If it is running in the background: $SYBASE/COCKPIT-4/bin/cockpit.sh -stop

Go to the directory $SYBASE/COCKPIT-4/plugins.


Create a new directory. The name of the directory will be the name of the ASE server.
e.g: mkdir NYSYBPRIM
Copy the com.sybase.ase templates to the $SYBASE/COCKPIT-4/plugins/NYSYBPRIM directory to create the NYSYBPRIM plugin profile.

e.g: cp -R $SYBASE/COCKPIT-4/templates/com.sybase.ase/* $SYBASE/COCKPIT-4/plugins/NYSYBPRIM
Generate the encrypted password for the ASE plugin profile. (Create the encrypted passwords and copy & paste the encrypted value carefully; each time you run the tool, a new encrypted value is generated.)
Run $SYBASE/COCKPIT-4/bin/passencrypt.
The password should be that of sa or a user with sa_role in ASE.
e.g: $SYBASE/COCKPIT-4/bin/passencrypt
Password: your password
<crypted value>

Similarly, you will need to generate the encrypted password for hadr_admin_user and use it in the XML file.
Edit the $SYBASE/COCKPIT-4/plugins/NYSYBPRIM/agent-plugin.xml file and enter the ASE plugin profile information, changing the values to match your environment.

<?xml version="1.0" encoding="ISO-8859-1"?>
<agent-plugin id="com.sybase.ase" version="16.0.0" name="ASE Agent Plugin"
    provider-name="SAP AG or an SAP affiliate company"
    descriptor="mbean-descriptor.xml" instance="1">

  <dependencies />
  <properties>
    <set-property property="ase.heartbeat.timer" value="60" />
    <set-property property="ase.heartbeat.update.time" value="..." />
    <set-property property="ase.home" value="/sybsoftware/sybase/NYSYBPRIM/..." />
    <set-property property="ase.interfaces.pathspec" value="..." />
    <set-property property="ase.login.timeout" value="30" />
    <set-property property="ase.maintain.connection" value="..." />
    <set-property property="ase.password" value="...encrypted password..." />
    <set-property property="ase.port" value="4100" />
    <set-property property="ase.query.timeout" value="60" />
    <set-property property="ase.server.log" value="/sybsoftware/..." />
    <set-property property="ase.server.name" value="NYSYBPRIM" />
    <set-property property="ase.start.command" value="/sybsoftware/..." />
    <set-property property="ase.user" value="sa" />
    <set-property property="com.sybase.home" value="/sybsoftware/..." />
    <set-property property="rma.home" value="/sybsoftware/..." />
    <set-property property="rma.log.dir" value="/sybsoftware/..." />
    <set-property property="rma.password" value="...encrypted password..." />
    <set-property property="rma.port" value="4909" />
    <set-property property="rma.start.command" value="/sybsoftware/..." />
    <set-property property="rma.user" value="hadr_admin_user" />
    <set-property property="rs.home" value="/sybsoftware/..." />
    <set-property property="rs.interfaces.pathspec" value="..." />
    <set-property property="rs.password" value="...encrypted password..." />
    <set-property property="rs.port" value="5005" />
    ...
  </properties>
</agent-plugin>

Start the Cockpit server

e.g: $SYBASE/COCKPIT-4/bin/cockpit.sh

Connect to ASE Cockpit through a browser that has flash plugin support. You should see the NYSYBPRIM server in the
ASE Cockpit drop down menu for connection.
STEP 5 – Configuring ASE Cockpit for HADR

After successfully installing ASE Cockpit as described above, you can login to the ASE Cockpit web-based GUI using the URL given in the Cockpit log.

It should be <hostname>:4283

Login using "sa" and the password that you gave when configuring the cockpit.

Once in the portal, you will see three tabs: MONITOR, EXPLORE and ALERT. Go to EXPLORE and select ASE Servers in the left panel.
On the right panel, when you select the server name, you will get a drop-down list with various options. Select the "Properties" option.

In the next screen, select "Agent" in the left panel, fill in the agent information, i.e. user name (uafadmin) and password (uafadmin), and click Authenticate.

Again in the left panel, select "HADR" and fill in the RMA details, i.e. port number (4909), user name (hadr_admin_user) and password, and then click Authenticate.
After the authentications are successful, you can see the HADR details in the MONITOR tab.

STEP 6 – Configuring the FaultManager in the THIRD host


As I mentioned earlier, choose an application server as the third host, or use any spare host that you have. This is a small software component, so it does not take much CPU or memory.

After you extract the ASE binary, there should be a separate directory listed as FaultManager.
For e.g.: /sybsoftware/ASE16/ebf28996/FaultManager
It should have the following files/sub-directories:

archives

sample_response.txt

setup.bin

The setup is a command-line, menu-driven tool. All we need to do is run setup.bin and fill in the details. (For ease of use, gather all the important values beforehand and keep them handy.)
sybase@nysybprim:/sybsoftware/sybase/FaultManager> ./setup.bin
Preparing to install
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Graphical installers are not supported by the VM. The console mode will be used instead.
===============================================================
Fault Manager (created with InstallAnywhere)
---------------------------------------------------------------
Preparing CONSOLE Mode Installation...
===============================================================
Introduction
------------
InstallAnywhere will guide you through the installation.

It is strongly recommended that you quit all programs before continuing with
this installation. Before you proceed, make sure that:
* SAP ASE, Replication Management Agent (RMA), Replication Server and SAP
  Host Agent are set up and running on the primary and companion hosts
* "sapadm" operating system user has a valid password

Respond to each prompt to proceed to the next step in the installation.

You may cancel this installation at any time by typing 'quit'.

PRESS <ENTER> TO CONTINUE:

===============================================================
End-user License Agreement
--------------------------
1) Americas and Asia Pacific
.
35) Any Other Locations

Please enter the number of the location you are installing in (Default:
1): 1

LICENSE AGREEMENT
General (applies to all countries, except those for which a country-specific
license applies)

IMPORTANT NOTICE: READ THIS LICENSE AGREEMENT CAREFULLY. ... YOU HAVE 30
DAYS TO REQUEST A REFUND. THIS IS A LICENSE AND NOT A SALE. ...

Press ENTER to read the text [Type 'back' and press ENTER to skip it]
: back

I agree to the terms of the SAP license for the installation of this software
(Y/N): Y

===============================================================
Choose Install Folder
---------------------
Where would you like to install?
Default Install Folder: /opt/sap

ENTER AN ABSOLUTE PATH, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
: /sybsoftware/sybase/FaultManager

INSTALL FOLDER IS: /sybsoftware/sybase/FaultManager

IS THIS CORRECT? (Y/N): Y
===============================================================
Pre-Installation Summary
------------------------

Please Review the Following Before Continuing:

Product Name:
    Fault Manager

Install Folder:
    /sybsoftware/sybase/FaultManager

Product Features:
    Fault Manager

Disk Space Information (for Installation Target):
    Required:  25,870,191 Bytes
    Available: 34,533,335,040 Bytes

PRESS <ENTER> TO CONTINUE:

===============================================================
Ready To Install
----------------

InstallAnywhere is now ready to install Fault Manager onto your system at:

    /sybsoftware/sybase/FaultManager

===============================================================
Installing...
-------------

[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================
Configure Fault Manager
-----------------------

Installer has successfully unloaded Fault Manager to the install folder. You
can now configure it for ASE HADR. If you choose not to configure it now,
you can run "sybdbfm install" to configure it at a later time.

Do you want to configure Fault Manager? (Y/N): Y

===============================================================
Cluster ID
----------
Cluster ID (Default: ): C99
===============================================================
Failover
--------

For ASE HADR synchronous replication, do you want automatic failover to
happen if primary SAP ASE is unreachable? (Y/N): Y
===============================================================
SAP ASE on Primary
------------------

Site name (Default: ): NYPRIM
SAP ASE host name (Default: ): nysybprim.mycomp.com
SAP ASE Name (Default: ): NYSYBPRIM
SAP ASE port (Default: ): 4100
SAP ASE installed directory (Default: /opt/sap): /sybsoftware/sybase/NYSYBPRIM
SAP ASE installed user (Default: sybase): sybase

===============================================================
SAP ASE on Companion
--------------------
Site name (Default: ): NJSEC
SAP ASE host name (Default: ): njsybsec.mycomp.com
SAP ASE Name (Default: SYBWREUAT): NJSYBSEC
SAP ASE port (Default: ): 4100
SAP ASE installed directory (Default: /opt/sap): /sybsoftware/sybase/NJSYBSEC
SAP ASE installed user (Default: sybase): sybase
===============================================================
Virtual IP for SAP ASE
----------------------
Do you want to enable virtual IP support for the SAP ASE? (Y/N):
===============================================================
ASE Cockpit
-----------

Do you want to use ASE Cockpit to manage ASE HADR? (Y/N): Y

primary Cockpit TDS port (Default: 4998):
companion Cockpit TDS port (Default: 4998):

===============================================================
Replication Management Agent on Primary
---------------------------------------
RMA TDS port (Default: 7001): 4909
===============================================================
Replication Management Agent on Companion
-----------------------------------------

RMA TDS port (Default: 7001): 4909

===============================================================
Fault Manager Host and Ports
----------------------------

Fault Manager host (Default: nyapphost.mycomp.com): n...

Fault Manager heartbeat to heartbeat port (Default: 1...):
Primary Fault Manager heartbeat port (Default: 13777):
Companion Fault Manager heartbeat port (Default: 1378...):
===============================================================
Secure Store Directory
----------------------
Secure Store directory (Default: /sybsoftware/sybase/...):
===============================================================
Enable SSL for the Fault Manager (Y/N): N
===============================================================
Users for ASE HADR
------------------

SAP ASE system administrator user (Default: sa):
SAP ASE system administrator password:
Confirm SAP ASE system administrator password:
RMA administrator user (Default: DR_admin):
RMA administrator password:
Confirm RMA administrator password:
SAP Host Agent user (Default: sapadm):
SAP Host Agent user password:
Confirm SAP Host Agent user password:
Cockpit administrator user (Default: sccadmin): uafadmin
Cockpit administrator password:
Confirm Cockpit administrator password:
===============================================================
Fault Manager Configuration Summary
-----------------------------------

The installer will now configure the Fault Manager with the values entered above.


STEP 7 – Administration and Troubleshooting Tips
For administrative tasks, you may have to login to the various components individually and check its status.

Following are the methods to login to the various components :


SAP ASE Primary or Companion Servers

$SYBASE/$SYBASE_OCS/bin/isql -U<login_name> -P<password> -S<server_name>

SAP Replication Primary or Companion Servers

isql -Uhadr_admin_user -SC99_REP_NYSYBPRIM -I/sybsoftware/sybase/NYSYBPRIM/DM/interfaces

or

isql -Uhadr_admin_user -S<hostname>:5005

SAP RMA Login in Primary or Companion Servers

isql -Uhadr_admin_user -S<hostname>:4909
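These connection variants can be collected in one small helper so you don't have to remember each port. A sketch; the host, server and port values are the fictional ones used throughout this post:

```shell
# Print the isql command line for each HADR component.
hadr_isql() {
    case "$1" in
        ase) echo "isql -U<login_name> -SNYSYBPRIM" ;;
        rs)  echo "isql -Uhadr_admin_user -Snysybprim:5005" ;;
        rma) echo "isql -Uhadr_admin_user -Snysybprim:4909" ;;
        *)   echo "usage: hadr_isql ase|rs|rma" >&2; return 1 ;;
    esac
}
```

For example, `eval "$(hadr_isql rma) -Phadr_admin_ps"` would open the RMA session described above.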

Logs for troubleshooting are located in the following places:


SAP ASE Primary or Companion Server

/sybsoftware/sybase/NYSYBPRIM/ASE-16_0/install/<servername>.log

SAP Replication Server (Primary or Companion)

/sybsoftware/sybase/NYSYBPRIM/DM/C99_REP_NYSYBPRIM/C99_REP_NYSYBPRIM.log

SAP RMA errorlogs

/sybsoftware/sybase/NYSYBPRIM/DM/RMA-16_0/instances/AgentContainer/logs/RMA_<mmddyyyy>.log

ASE Cockpit errorlogs

/sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/log/agent.log

FaultManager errorlog

/sybsoftware/sybase/FaultManager/dev_sybdbfm
Starting & Stopping the various components

ASE Cockpit can be used to restart the various components without having to login to the host. However, sometimes ASE Cockpit may not function well, so we have to depend on manual intervention.

Starting & Stopping SAP ASE servers

ASE servers can be started in the usual way from the $SYBASE/$SYBASE_ASE/install directory using the startserver command.

To shut down an ASE server that is part of the HADR system, follow these steps:

First shutdown the FaultManager (steps given below)

Login to the companion server and shut it down first using the "shutdown with wait/nowait" command.
Login to the primary server and shutdown using the command "shutdown with wait no_hadr"

* If you do not follow the above steps, the Fault Manager will fail over the primary server.

* To re-start the servers, restart in the reverse order of the above.
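Because the ordering matters (stopping the Fault Manager first prevents an unwanted failover), it is worth encoding it once. A sketch: `run` is a stub, and each echoed step stands in for the real command described above.

```shell
# Safe HADR shutdown order; stopping FM first prevents an unwanted failover.
hadr_shutdown_order() {
    run="$1"   # echo here; the real stop/shutdown commands in real life
    $run "stop Fault Manager (sybdbfm stop)"
    $run "on companion: shutdown with wait"
    $run "on primary: shutdown with wait no_hadr"
}

hadr_shutdown_order echo
```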

Starting & Stopping the RMA


Start the RMA using the following command

/sybsoftware/sybase/NYSYBPRIM/DM/SYBASE.sh

/sybsoftware/sybase/NYSYBPRIM/DM/RMA-16_0/bin/RunContainer.sh AgentContainer&

Stop the RMA

isql -Uhadr_admin_user -S<hostname>:4909

shutdown

Starting & Stopping the Rep Server


Starting the Rep Server:

/sybsoftware/sybase/NYSYBPRIM/DM/SYBASE.sh

/sybsoftware/sybase/NYSYBPRIM/DM/C99_REP_NYSYBPRIM/RUN_C99_REP_NYSYBPRIM.sh

Stopping the Rep Server:

isql -Uhadr_admin_user -SC99_REP_NYSYBPRIM


shutdown

Starting & Stopping the ASE Cockpit


To start the cockpit use this command

cd /sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/bin

nohup ./cockpit.sh 2>&1 > cockpit-console.out &

disown

To Stop the ASE Cockpit

/sybsoftware/sybase/NYSYBPRIM/COCKPIT-4/bin/cockpit.sh -stop

Starting & Stopping the FaultManager


To start the FaultManager

/sybsoftware/sybase/FaultManager/SYBASE.sh

/sybsoftware/sybase/FaultManager/sybdbfm_C99 &

To stop the FaultManager

/sybsoftware/sybase/FaultManager/FaultManager/bin/sybdbfm stop
HADR: A Glimpse at ASE HADR Instance & beyond…

Now that SAP ASE comes with its own brand-new Always On option, it is worth having a closer look at it.

Below is a quick look at the Sybase HADR Option.

HADR has three basic components: ASE, RS and the Fault Manager. I skip the Fault Manager for the moment (although it reduces all of the self-management to nil) and concentrate on the ASE|RS components. ASE setup with HADR enabled requires super user (sudo) credentials – for those who wish to install the whole bundle at once. It is also essential to remember that HADR is not designed to run on a single host.

Setup Phase:

The setup phase is pretty straightforward. The ASE 16 SP02 installation comes with a sample resource file for HADR in addition to the sample resource files it traditionally had for other ASE components ($SYBASE/$SYBASE_ASE/init/sample_resource_files/setup_hadr.rs). You may want to make two copies of it – one for the primary side and another for the replicate – to make life easier. The sample resource file is commented throughout; you just need to make sure that the resource file for each side is modified correctly, following the guidelines in the template script. Otherwise things may go wrong…

With the resource file fixed, the HADR setup consists of calling the setuphadr utility twice – once for the primary side and once for the replicate side. DB synchronization is performed by the script itself (through calls to the RMA component, which has to be started on each host before calling setuphadr).

All in all it takes 2 simple steps to set things up (again, not counting FM installation).

When done, you may verify the installation by connecting to RMA through its rma_tds_port, or connect to RS through its srs_port – both defined in the resource file. To understand whether all is good – beyond the success message setuphadr prints out – you will need to read the documentation first. The RMA command line options are a brand-new set of sap_… commands. RS has a lot of new components – some in suspended|down state even when the system is operating well. There is also the RS log file (to be found in the $SYBASE/DM/{CID} directory), and an RMA log file (to be found in $SYBASE/DM/RMA-15_5/instances/AgentContainer/logs).

Yet another option is … ASE Cockpit (you will have to start it first from $SYBASE/COCKPIT-4/bin/cockpit.sh – the default port is 4283 for the https connection – but the connectivity information is printed out, so you don't need to worry about remembering the right path)…

Monitoring Phase:

ASE 16 SP02+ is now managed through ASE Cockpit – a new kid on the block. A system that has HADR enabled will look like this:
You will find that under the system status there is another status: the site mode status. This one indicates which side of the HADR system it is (primary|standby) and what its health status is (active|stopped|unknown).

HADR has its own screen in the Monitor section as well. This will look like:
Here too one will see the state of the system (active|stopped|unknown), and in addition what the system components are, with their corresponding states:

The "path" here is replicating from ASESRV1 to ASESRV2 for the asedb, master and DA1 databases (master and DA1 are "default" databases to be replicated, with DA1 being the name of the cluster you chose in the HADR resource file). One may see the direction of replication, state and health – across various parameters (component states, primary ASE log state, replication throughput, replication backlog and latency).

Once the system has been up long enough to collect metrics, ASE Cockpit will be able to show various handy statistics on the HADR.

Backlog:

RS Latency:
Primary ASE Log Status:
Replication Throughput:
This is a pretty nice way to visualize your HADR health status – including a neat breakdown of latency into RS components, once visible only through the manual rs_ticket interface. The STP state is also visible through the primary log status.

In addition to this it is possible to see other HADR performance statistics through the ASE Cockpit “Statistics Chart”
interface across various RS related metrics:
I must confess that for those used to running manual commands to see what's going on with the replicated environment, this is a massive step forward. It looks neat. There are still things to be polished but hey, this is just a start! It already provides a decent way to monitor a replicated environment "out-of-the-box".

Operation Phase:

ASE Cockpit allows you to manage the system as well as monitor it. This is done from the EXPLORE tab of the ASE Cockpit.

The management interface looks like this:


ASE Cockpit covers:

Manual fail over


Re-materialization
Forcing the server to become primary if HADR state is unknown
Stopping|starting basic RMA|RS components

For the failover, ASE Cockpit will also display the failover status (log) during the operation:
For those attentive to detail… there is a blue-yellow motif adhered to….

Round Up:

The HADR option looks like a valuable addition to the ASE environment. Although setting up a replication server has never been a big issue for ASE DBAs – an easily scripted task – having it all done for you is nice. On the other hand, managing & monitoring the environment has been a thorny thing. With the HADR option this task has been addressed and elevated to a new level. Everything ASE Cockpit displays may still be done manually (either using the new RMA interface or using the good old RS commands). At the same time, having things in front of one's eyes, easily accessible, has great value.

There are still things to be improved – like having the embedded RS work on an auto-expanding partition rather than a fixed-size one, having the cluster ID database auto-expand rather than stay at a fixed size, having more technical documentation on how the various components work and how the new system should be troubleshot in case things go wrong, having the ability to interact with ASE Cockpit better, having a more elastic licensing model, and having more than one replicated node in the same configuration – but as a starting point this feature is definitely worth a try. Big thumbs up to the ASE & RS engineers (I've heard rumors that the throughput is much faster for HADR than what we are used to, and it looks like there are several ASO|HVAR options turned on by default).
